The gender imbalance in the free software world is largely mirrored in the related "open technology and culture" communities. Various efforts have been made over the years to rebalance things, with varying degrees of success. The newly formed Ada Initiative is taking a different tack than those previous efforts: raising money to support full-time staff, along with various projects, rather than going the traditional all-volunteer route.
Valerie Aurora and Mary Gardiner, who are longtime advocates and organizers for "women in open source" projects, launched the Ada Initiative (TAI) on February 7 to "concentrate on focused, direct action programs, including recruitment and training for women, education for community members, and working with companies and projects to improve their outreach to women". While the first steps for the initiative are somewhat bureaucratic—filling out paperwork to put the organization on a sound legal footing along with raising the funds that it needs—TAI has some concrete plans for projects that it will be working on.
At the top of the priority list, according to Aurora, is a survey that will measure the participation of women in the open technology and culture communities. This would be something of an update to the FLOSSPOLS survey that was done in 2006. TAI is working on a methodology for the survey, so that it can be repeated over time to gauge progress. The survey is meant to answer a very fundamental question, Aurora said: "How bad is the problem, and is what we are doing making things better? If we can't answer these questions, we can't do a good job."
Another project in the works is "First Patch Week", which will be an effort to pair companies and projects with female developers to help get the new developers over the first hurdle in joining a development community: submitting their first patch. The idea is that the existing community supplies mentors who have been trained by TAI to bring these new developers along, and it will be beneficial to both sides: "Participating in First Patch Week is an excellent opportunity to get new developers working on your project (with the potential of hiring them later on, of course)." Like the survey, First Patch Week is going to take some time to get up and running, but once past the organizational set-up phase, TAI intends to put in "several months of full time effort" to find the right projects and train mentors.
So far, the response to the initiative has been "amazing", Aurora said, with inquiries from "enormous international corporations" as well as community organizations and individuals. TAI is in discussions with multiple sponsors, but it is really looking for more than money:
Linux Australia is the first TAI sponsor, and is providing some general sponsorship money that Aurora described as a "do the right thing" sponsorship. Because the organization is so small, general sponsorships, rather than those focused on a specific project, are what it is looking for. There's still plenty of room to become a sponsor, but "if your organization would like to be a founding member of the Ada Initiative, now is the time to be talking to us."
Discussions on the supporters mailing list have focused on individual contributions. While that is not the kind of funding TAI is looking for in the long term, it would help with the start-up process, so there will be some means of doing that (possibly through a Kickstarter campaign) coming soon. But there are ways to help beyond just the financial:
If you want to help, you should also sign up for one of our myriad announcement channels - Twitter, blog, etc. - and we will make announcements as we have opportunities for people to contribute.
It is clear from the FAQ that TAI hopes that fundraising will provide the financial resources to allow the organization to dig into projects that are difficult or impossible for all-volunteer organizations to take on. By providing salaries to its employees (eventually, anyway), those people can concentrate solely on the projects, rather than having to work on them in "evening and weekend" time. It is a different style than that taken by existing organizations, such as LinuxChix and AussieChix, but one that TAI believes will be beneficial to the whole ecosystem, as Aurora pointed out:
The announcement was met with an "excited and supportive" reception, which, along with the sponsors that seem to be lining up, should bode well for TAI. According to Aurora, the initiative expects to be fully funded and working full-time on its projects by July. That means we should start seeing concrete results from those efforts in the latter half of the year. Gardiner and Aurora created TAI because it was "the right thing" to do, Aurora said, and they have been pleasantly surprised with the reaction from the rest of the open technology and culture communities:
The Ada Initiative—named for Countess Ada Lovelace, "the world's first woman open source programmer"—is a very interesting experiment. It will not only provide ways to increase the participation of women in free software and related fields, which is a worthwhile goal in itself, but it may also provide an example of how to fund organizations focused on other specific initiatives within our communities.
There are a number of similar kinds of organizations in our community, the foundations for Linux, GNOME, and Apache for example, but those tend to be larger, umbrella organizations, whereas TAI is tightly focused on a well-defined, existing problem. There are certainly other technical and social problems in our communities that might benefit from a similar approach. More women in open technology and culture would be a fabulous outcome from this experiment, and finding more ways to fund interesting projects would just be icing on the cake.
Mario and David are annoyed that Android does not run on a normal Linux system, on any other operating system, or on any architecture except ARM (though they did note the in-progress x86 port). They like their Android applications and want to be able to run them on ordinary systems. To get there, they have developed a plan of decoupling the various parts of an Android system so that they can be replaced. Then they will implement whatever pieces are needed using ordinary Java and OpenJDK; that includes implementing a Dalvik virtual machine (VM) in Java and/or running Dalvik as a standalone application. The result will be IcedRobot - an Android implementation built with ordinary Java that can run on standard operating systems.
Why would one get into a project like this? As Mario put it: they like Google TV and want to run it on a desktop system. It might be nice to dispense with GNOME shell or Unity altogether and run in a pure Android environment. Or, on a traditional desktop, one could run interesting Android applications as "desklets." There is, they said, some potential commercial value for the Dalvik virtual machine which has been liberated from the custom Android kernel and libraries. They mentioned that a Dalvik VM running inside a normal Java VM might take the wind out of the sails of Oracle's lawsuit; since it would obviously be a pure Java application, Oracle's patent claims might not apply. And, they said, it's "time to do something crazy" now that the task of liberating Java is finally complete.
IcedRobot comes down to three separate projects aimed at different use cases. The first of these is gnudroid, which can be thought of as the IcedRobot "micro edition." For this incarnation, there is no interest in running on desktop systems. Gnudroid dispenses with the special Android kernel and the "bionic" libc replacement as well, going back to using standard system components. The Dalvik VM runs as a standalone application on such systems; the end result is something which is quite similar to standard Android in terms of functionality. The developers are removing "meaningless" code from the system - a move which, they say, cuts out 70% of the code. (Details on what is "meaningless" were not provided, though one assumes that removing the custom kernel is a big part of the total.) A new set of build scripts has been written, and the whole thing has been put into a Mercurial repository - they are evidently more comfortable with Mercurial than with git.
The next component, called Daneel, is a Dalvik interpreter written in pure Java. It's only an interpreter at the outset; they acknowledged that it may be necessary to add a just-in-time compiler in the future. This is the piece that, they think, might serve as a workaround for any Oracle patents which might otherwise be applicable. It is, they said, "a bridge between the worlds" of the Dalvik VM and pure Java systems.
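Daneel's sources were not shown in the talk, but the general shape of the job is clear from how Dalvik differs from the JVM: Dalvik is a register machine rather than a stack machine, so an interpreter for it is a dispatch loop over instructions that name their operand registers explicitly. The sketch below is purely illustrative - a toy register machine in Python with made-up opcodes, not Dalvik's actual instruction set and certainly not Daneel's (Java) code:

```python
# Illustrative only: the dispatch loop of a register-machine
# interpreter, the kind of structure a Dalvik interpreter needs.
# Opcodes and encoding here are invented for the example.
def interpret(code, nregs=4):
    """Execute a list of (op, *args) tuples on a small register file."""
    regs = [0] * nregs
    pc = 0
    while pc < len(code):
        op, *args = code[pc]
        if op == "const":        # const dst, value
            dst, val = args
            regs[dst] = val
        elif op == "add":        # add dst, srcA, srcB
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "ret":        # ret src -> return that register
            return regs[args[0]]
        pc += 1
    return None

# Compute 2 + 3 with three toy instructions and return the result.
program = [("const", 0, 2), ("const", 1, 3), ("add", 2, 0, 1), ("ret", 2)]
```

A real Dalvik interpreter must additionally decode the packed binary instruction formats, handle method invocation and exceptions, and interoperate with the garbage collector - which is where the "bridge between the worlds" work lies.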
Finally, GNUBishop is the "IcedRobot standard edition." It would be made up of three parts - a browser plugin, a desktop application framework, and a full standalone operating system. It replaces the Dalvik runtime entirely, using OpenJDK for the runtime system and Daneel as the core virtual machine. The plugin would allow running Android applications within a browser; most of the popular browsers are targeted. The application framework, meanwhile, would allow the installation of Android applications on a normal desktop system. Linux systems are clearly targeted here, but the developers also have Mac OS and Windows systems in mind - and even QNX. The full operating system would be a Linux distribution built around the Android system.
This work is a volunteer effort for now, but Mario and David would appear to have some commercial goals in mind as well. They discussed the idea of the "GNU AppBazaar," which would be an IcedRobot equivalent to the Android Market. Evidently 10% of all proceeds from the AppBazaar would be sent to the Free Software Foundation. Also planned is "GNU AdNonSense," an advertising system for IcedRobot applications. They were quite firm that any such ads would be completely untargeted and that privacy is an important feature of this system. So no per-user information would be collected, and there would be no way for advertisers to target their ads to specific users. There was some talk of aiming IcedRobot at the automotive market, where, evidently, the developers see a fair amount of opportunity.
The current state of the code is not at all clear; it will, they said, be posted on IcedRobot.org soon, but, as of this writing, that site does not yet exist. From this weblog posting it seems that the process of decoupling Dalvik from the Android kernel is not yet complete; in the talk they said that the replacement of bionic is also an ongoing task. But there are apparently a number of developers working on the project, and they have that wild look in their eyes that suggests they may have the drive to see it through. The IcedRobot may yet walk among us.
The OpenSSL license, which is BSD-style with an advertising clause, has been a source of problems in the past because it is rather unclear whether projects using it can also include GPL-licensed code. Most distributions seem to be comfortable that OpenSSL can be considered a "system library", so that linking to it does not require OpenSSL to have a GPL-compatible license, but the Free Software Foundation (FSF) and, unsurprisingly, Debian are not on board with that interpretation. This licensing issue recently reared its head again in a thread on the pgsql-hackers (PostgreSQL development) mailing list.
For command-line-oriented programs, the GNU readline library, which provides various types of command-line editing, is a common addition. But readline is licensed under the GPL (rather than the LGPL), which means that programs using it must have a compatible license; PostgreSQL's BSD-ish permissive license certainly qualifies. The OpenSSL license, however, puts additional restrictions on its users and is thus not compatible with the GPL. Whether that is a real problem in practice depends on how you interpret the GPL and whether OpenSSL qualifies for the system library exception.
Debian has chosen a fairly hardline stance on the matter, which is evidently in line with the FSF's interpretation, so it switched to the BSD-licensed Editline (aka libedit) library instead of readline. PostgreSQL supports libedit as a readline alternative, so making the switch is straightforward. Unfortunately, a bug in libedit means that Debian PostgreSQL users can't input multi-byte characters into the psql command-line tool when using Unicode locales.
For the PostgreSQL project, it is something of a "rock and a hard place" problem. The OpenSSL code works well, and is fairly tightly—perhaps too tightly—integrated. There are two obvious alternatives, though, GnuTLS and Mozilla's Network Security Services (NSS). Switching to either of those would obviate the readline problem because their licenses do not contain the problematic advertising clause.
There have been efforts to switch PostgreSQL to use GnuTLS, as described in Greg Smith's nice overview of the history of the problem, but they didn't pass muster due to the size and intrusiveness of the patch. Part of the problem is that psql is too closely tied to OpenSSL as Martijn van Oosterhout, who developed the GnuTLS support, describes:
The problems are primarily that psql exposes in various ways that it uses OpenSSL and does it in ways that are hard to support backward [compatibly]. So for GnuTLS support you need to handle all those bits too.
Another route to fixing the problem might be for either the readline or the OpenSSL license to change, but that is not a very likely outcome. Some GPL-licensed code has added an explicit "OpenSSL exception", but it is pretty implausible to expect the FSF to do so for readline—it has long seen that library as a way to move more projects to GPL-compatible licenses. OpenSSL is either happy with its license or is unable to change it as Stephen Frost points out in the thread:
Robert Haas recommends revisiting the GnuTLS support for the PostgreSQL 9.2 release, but in the meantime there are some Debian users who cannot easily use psql. It goes beyond just Debian, though, because Ubuntu will be picking up the PostgreSQL+libedit version for its next release. That spreads the problem further, as Joshua D. Drake, who started the whole thread, notes: "As popular as Debian is, the 'user' population is squarely in Ubuntu world and that has some serious public implications as a whole."
Instead of GnuTLS, NSS could be used and has one major advantage: Federal Information Processing Standard (FIPS) 140-2 certification. FIPS 140-2 is a US government standard for encryption that is sometimes required by companies and organizations when adopting products that contain encryption. OpenSSL has been FIPS 140-2 certified, as has NSS, but GnuTLS has not been. For that reason, there is talk of making PostgreSQL support NSS rather than GnuTLS.
The Fedora project is also looking at NSS as part of an effort to consolidate the cryptography libraries used by the project. For a number of reasons, including FIPS certification and some features missing from GnuTLS (notably S/MIME), NSS is the direction Fedora chose. One would guess that the GPL-incompatible license for OpenSSL played a role in eliminating it from consideration.
On the other hand, Fedora does ship various tools with both readline and OpenSSL, including PostgreSQL. It would seem that Fedora (and possibly Red Hat's lawyers) are relying on a belief that OpenSSL is distributed as a system library, as Fedora engineering manager Tom "spot" Callaway has said in 2008 and again in 2009. The project (and other distributions) may also be relying on the near-zero probability that the FSF will ever make a serious effort to stop the distribution of PostgreSQL using readline.
LD_PRELOAD=/lib/libreadline.so.5 psql

everything works as normal.
That's a bit of an ugly hack, and no one seems very happy about it, but the plan is to add the LD_PRELOAD (if libreadline is available) into the psql wrapper that is shipped in the postgresql-client-common package. Martin Pitt sums it up this way:
I don't really like this situation, and personally I'd rather move back to libreadline until OpenSSL or readline or PostgreSQL threatens Debian with a legal case for license violation (I daresay that the chances of this happening are very close to zero..). But oh well..
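The preload trick itself is simple: if the real readline shared library is installed, start psql with it forced ahead of libedit in the link order. A minimal sketch of that decision logic follows - the paths are assumptions for illustration, and the actual Debian wrapper does considerably more:

```python
# Hedged sketch of the wrapper's preload decision, not Debian's
# actual postgresql-client-common code. Library path is assumed.
import os

READLINE = "/lib/libreadline.so.5"   # illustrative path

def build_env(environ, readline=READLINE):
    """Return the environment to exec psql with: add LD_PRELOAD only
    when the readline shared library is actually present, preserving
    any preload the user already set."""
    env = dict(environ)
    if os.path.exists(readline):
        prior = env.get("LD_PRELOAD")
        env["LD_PRELOAD"] = readline if not prior else readline + ":" + prior
    return env

# The wrapper would then exec the real binary, something like:
#   os.execve("/usr/bin/psql.real", ["psql"] + sys.argv[1:],
#             build_env(os.environ))
```

Because the dynamic linker resolves symbols from preloaded objects first, readline's functions shadow libedit's at run time without rebuilding psql - which is exactly why it feels like a hack: the binary still links against libedit on paper.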
This kind of licensing clash occurs with some frequency, and the OpenSSL license is known to be problematic—at least for projects that use GPL code. The advertising requirement, which is something of a throwback to the early days of the BSD license, makes OpenSSL increasingly isolated. Distributions and other projects are likely to continue to search for, and find, alternatives, if only to reduce the licensing murkiness and associated questions from developers and users. It is unfortunate that an ego-stroking clause or two in the license of a useful library may reduce its usage but, as always, free software will find a way to work around these kinds of problems and move on.
Servers and PCs get the lion's share of security attention, so it is refreshing to occasionally find a security tool addressing other areas of the ubiquitous computing landscape. One such tool is Bluepot, a GPLv3-licensed honeypot for Bluetooth attacks originally written as a school project by developer Andrew Smith.
A "honeypot" is security slang for a trap designed to lure in attackers by masquerading as a vulnerable system. Generally speaking, a honeypot is used to catch attackers before they reach a genuine network resource (either to shut them down or to report them), but honeypots can also be used as purely research devices — helpful tools to profile the current vulnerability landscape. In Bluetooth attack preparedness, setting up an attractive honeypot probably means pretending to be a phone model with known exploits, known or weak PINs, or other enticing properties.
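The core mechanics are the same in any transport: listen, accept everything, and record who showed up. As a purely illustrative sketch (generic TCP, not Bluetooth, and nothing to do with Bluepot's Java internals), a minimal honeypot reduces to a few lines:

```python
# A toy, generic honeypot: accept connections on a local port and
# log the peer addresses. Illustrative only - real honeypots (and
# Bluepot) also masquerade as something attractive and log payloads.
import socket
import threading

def honeypot(host="127.0.0.1", port=0, hits=None, max_conns=1):
    """Listen on (host, port); append each connecting peer's address
    to the hits list. Returns (bound_port, server_thread)."""
    srv = socket.socket()
    srv.bind((host, port))          # port 0 = let the OS pick one
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            hits.append(addr[0])    # record who connected
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, t
```

What makes a honeypot a honeypot is everything around that loop: advertising an enticing identity, never touching real data, and keeping the attacker engaged long enough to profile - which is the part Bluepot implements for the Bluetooth protocols.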
Bluepot is written in Java and distributed as a JAR file, although, despite the language choice, for the moment it runs only on Linux. This is because Smith designed the application to support the use of multiple Bluetooth adapters simultaneously, which is a feature that Windows cannot handle. The current release is version 0.1, from December 29, 2010. From the Subversion logs, it appears that the bulk of the code was written in the spring of 2010, with a cleanup phase preceding its public release in December. Smith announced the release on his blog, on which he regularly writes about honeypot development.
To get started, you must first install the Bluetooth development libraries for your distribution (presumably this is required to make use of the libraries' lower-level Bluetooth utilities in order to manipulate the adapter's hardware settings more easily). Debian and Ubuntu title the package libbluetooth-dev, while Fedora and Red Hat name it bluez-libs-devel, and openSUSE calls it bluez-devel. You must also have one or more BlueZ-supported Bluetooth adapters. With the dependencies taken care of, simply unpack the Bluepot tarball, and launch Bluepot-0.1.jar with root privileges (root is required in order to change adapter settings; if you attempt to start Bluepot without root privileges, it will not even run).
Normally, your Bluetooth adapter advertises a public name, set through the GNOME or KDE Bluetooth configuration tool, and a "computer" major device class. Bluepot allows you to advertise each adapter on your system with a different name, major device class, and minor device class. Historically, simpler devices such as low-end cell phones, printers, and headsets have accounted for most of the Bluetooth security holes exploited in the wild, particularly because few consumers update the firmware of such products. Thus, to make your honeypot the most attractive to would-be attackers, you may wish to set its name to an older-model Nokia phone and its device class to phone/cellular. Alternatively, Bluepot can randomly alter the advertised name and device class of each adapter, which is probably wise if you want to take a longer look at the attackers in your surroundings.
Bluepot runs its adapters in discoverable mode, accepting all incoming connection requests and transfers. It tracks the OBEX (Object Exchange) protocol used to directly transfer files between devices, the RFCOMM (Radio Frequency Communication) protocol used for serial communication, and the L2CAP (Logical Link Control and Adaptation Protocol) used for transmission control.
The simplest Bluetooth attack is called bluejacking. In spite of the seeming connection to "hijacking," bluejacking is simply sending an unauthorized message or file transfer to another device, using OBEX. For the most part, modern phones and printers now refuse to accept incoming file transfers without explicit user authorization, but there are older models that still accept files from previously-paired devices, and some phones that automatically accept vCards (or any other file payload with the .vcf extension) in the interest of friendly business-card-like information exchange.
Cracking tools may allow an attacker to brute-force the four-digit numeric PIN used to initially pair new devices, which potentially allows for an attack vector to get around the previously-paired-device limitation. According to the specification, Bluetooth PINs can be up to 128 bits long; consumer electronics manufacturers tend to use 4-6 numeric digits to make them easier to remember — which also makes them far easier to brute-force. Even worse, a significant percentage of non-computer devices use easily guessed PINs like 0000 or 1234.
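The arithmetic behind that claim is easy to check: the keyspaces manufacturers actually use are minuscule compared to what the specification permits.

```python
# Back-of-the-envelope keyspace comparison for Bluetooth PINs.
# An n-digit numeric PIN has 10**n possibilities; the specification
# allows PINs of up to 128 bits.
four_digit = 10 ** 4    # the common consumer-device choice
six_digit = 10 ** 6
spec_max = 2 ** 128     # upper bound the specification permits

# Even at a leisurely one guess per second, a 4-digit space is
# exhausted in under three hours.
hours_for_four = four_digit / 3600
```

And that is the worst case for the attacker: a device using a fixed 0000 or 1234 falls on the first or second guess.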
A far more serious exploit goes by the memorable name bluesnarfing; this attack involves remotely reading files from another device: address books, SIM contacts, photos, saved text messages and emails, etc. As with bluejacking, it works over OBEX, although it is more complex, because the remote device must authorize file browsing. The weak-PIN problem is a potential issue here, too, although most devices use encryption, and there are fewer devices that accept any form of incoming file browsing requests without explicit user authorization.
The most serious attack is referred to as bluebugging, which amounts to remotely taking over control of the target device, using it to place or route calls, send SMS or MMS messages, or consume data services. This is typically done by exploiting the Bluetooth stack in order to do a privilege escalation. In addition to these phone-centric attacks, there is an array of potential exploits not centered around cell phone usage, including uploading malware to Bluetooth devices, and hijacking or snooping audio connections.
Bluepot should be able to track and log all of these connections. In its configuration tab, you can specify a directory in which to store any files uploaded by attackers, and you can customize the OBEX and RFCOMM response messages sent, in order to better masquerade as a specific device.
News and blog coverage of bluejacking and bluesnarfing peaked in the mid-2000s, at which point there were a number of common cell phones on the market with known vulnerabilities. Most of the media coverage of the phenomenon I read involves attackers lying in wait for victims in high-traffic public locations such as mass transit points. Since I did not expect to find such nefarious behavior on display in the non-public-transport-served area where I live, I opted to test Bluepot at home instead, by using a pair of machines and a variety of Internet-provided tools.
That itself proved to be a challenge, since most of the publicly-available pen-testing tools date from the mid-2000s as well, and BlueZ, the Linux Bluetooth stack, has undergone a number of revisions since then. The Bluesnarfer tool, for example, is apparently written for BlueZ prior to the version 4.0 release, which changed a number of the setup utilities. Others, like Blooover, are written for Java MIDP-powered phones.
Nevertheless, I was able to test and verify Bluepot's ability to falsely advertise my desktop's USB Bluetooth adapter as a phone, a printer, a network access point, and several other devices, and to safely intercept OBEX file transfers. Along the way, I think I discovered what I would have to call a bug in the GNOME Bluetooth stack, namely that every Linux machine that I tested with aggressively caches the advertised names and device classes of the Bluetooth devices that it discovers when scanning for nearby connections — even when I could verify that a name change had taken place on the Bluepot machine (with hciconfig), it took a reboot of the attacker machines to pick up the updated information.
Along those lines, though, one thing I was not able to do was browse files on the Bluepot machine. That is a feature I was expecting in a honeypot application — to see which files attackers requested, and potentially to feed them bogus data in response. It is possible that Bluepot supports this and I simply could not get it to work — sadly, BlueZ 4.x on Linux is almost completely undocumented. It has improved considerably in the past two or three years, but vague and cryptic error messages (such as "Unable to find service record" in response to a failed OBEX file transfer to a paired device) are still the norm.
I had better luck with the audio device exploit tester carwhisperer, which is designed to inject a harmless audio message into unsecured car hands-free devices, and to intercept and record audio from them. Naturally there was no audio to record when using Bluepot to simulate a hands-free device, but Bluepot tracked and logged the connections admirably.
Bluepot has some basic diagnostic tools, allowing you to chart protocol traffic and file downloads over time, and to view the session logs sorted by adapter and attacker (for each attacker, it logs the Bluetooth device address). One area in which Bluepot falls short, however, is in saving these session logs: it logs its internal status in the logs directory of the unpacked tar archive, but this only includes startup, adapter initialization, and shutdown messages. It apparently attempts to log attack data with the log4j Java library, but through some misconfiguration, fails to do so, and the log settings are not configurable in the user interface. Thus, if you want to save session data, you will have to cut-and-paste information from the GUI's log tab into an external editor.
Smith is pretty open about Bluepot's feature set and limitations on the project site and on his blog; the basic framework is there to collect Bluetooth attack data, and, through multiple-adapter support and device randomization, to do so without the likelihood of discovery. It might be more powerful to masquerade as other Bluetooth addresses, or to provide some more interactive honeypot-like features (such as dummy file content), but it is still a nice starting point, and admirably simple to get started using. I don't expect to catch bluebugging criminals at my local Starbucks, but it will be tempting to take Bluepot with me to the next free software conference I attend, just to see what turns up in the hallway track.
From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit

yes jussi thanks did you reset the user greg or?

-------------------------------------

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

nope. your account is named as hoglund
Created: February 15, 2011; Updated: November 21, 2011
Description: From the Fedora advisory:
Abcm2ps v5.9.12: Multiple unspecified security vulnerabilities
Abcm2ps v5.9.13: More multiple unspecified security vulnerabilities
Created: February 10, 2011; Updated: February 16, 2011
From the Debian advisory:
Package(s): chrome chromium
CVE #(s): CVE-2011-0777 CVE-2011-0778 CVE-2011-0783 CVE-2011-0983 CVE-2011-0981 CVE-2011-0984 CVE-2011-0985
Created: February 16, 2011; Updated: August 23, 2011
Description: The Google chrome and chromium browsers prior to chrome 9.0.597.84 contain a number of vulnerabilities with denial of service or "unspecified impact" consequences.
Created: February 14, 2011; Updated: February 16, 2011
Description: Version 9.0.597.94 contains an updated version of Flash player (10.2), along with several security fixes.
Package(s): ffmpeg mplayer
CVE #(s): CVE-2010-3429 CVE-2010-4704 CVE-2010-4705
Created: February 16, 2011; Updated: September 12, 2011
Description: The ffmpeg library suffers from integer overflow and "arbitrary offset dereference" vulnerabilities which can be exploited via hostile flic and Vorbis files.
Package(s): flash-player
CVE #(s): CVE-2011-0558 CVE-2011-0559 CVE-2011-0560 CVE-2011-0561 CVE-2011-0571 CVE-2011-0572 CVE-2011-0573 CVE-2011-0574 CVE-2011-0575 CVE-2011-0577 CVE-2011-0578 CVE-2011-0607 CVE-2011-0608
Created: February 10, 2011; Updated: March 22, 2011
From the Red Hat advisory:
Multiple security flaws were found in the way flash-plugin displayed certain SWF content. An attacker could use these flaws to create a specially-crafted SWF file that would cause flash-plugin to crash or, potentially, execute arbitrary code when the victim loaded a page containing the specially-crafted SWF content. (CVE-2011-0558, CVE-2011-0559, CVE-2011-0560, CVE-2011-0561, CVE-2011-0571, CVE-2011-0572, CVE-2011-0573, CVE-2011-0574, CVE-2011-0575, CVE-2011-0577, CVE-2011-0578, CVE-2011-0607, CVE-2011-0608)
Created: February 11, 2011; Updated: February 16, 2011
Description: From the Ubuntu advisory:
Stéphane Graber discovered that the iTALC private keys shipped with the Edubuntu Live DVD were not correctly regenerated once Edubuntu was installed. If an iTALC client was installed with the vulnerable keys, a remote attacker could gain control of the system. Only systems using keys from the Edubuntu Live DVD were affected.
Created: February 11, 2011; Updated: July 22, 2011
Description: From the Red Hat advisory:
A denial of service flaw was found in the way certain strings were converted to Double objects. A remote attacker could use this flaw to cause Java-based applications to hang, for instance if they parse Double values in a specially-crafted HTTP request.
Created: February 16, 2011; Updated: July 6, 2011
Description: An initialization flaw in the ethtool ioctl() handler could disclose information to a local user with the CAP_NET_ADMIN capability.
Created: February 16, 2011; Updated: June 26, 2012
Description: The developers of the nbd block device server managed to reintroduce CVE-2005-3534 - a buffer overflow enabling code execution by a remote attacker.
Created: February 14, 2011; Updated: February 16, 2011
Description: From the Pardus advisory:
The key_certify function in usr.bin/ssh/key.c in OpenSSH 5.6 and 5.7, when generating legacy certificates using the -t command-line option in ssh-keygen, does not initialize the nonce field, which might allow remote attackers to obtain sensitive stack memory contents or make it easier to conduct hash collision attacks.
Created: February 11, 2011; Updated: May 19, 2011
Description: From the openssl advisory:
Incorrectly formatted ClientHello handshake messages could cause OpenSSL to parse past the end of the message.
This issue applies to the following versions:
The parsing function in question is already used on arbitrary data so no additional vulnerabilities are expected to be uncovered by this. However, an attacker may be able to cause a crash (denial of service) by triggering invalid memory accesses.
Package(s): pam
CVE #(s): CVE-2010-3430 CVE-2010-3431 CVE-2010-4706
Created: February 14, 2011; Updated: May 31, 2011
Description: From the Pardus advisory:
The privilege-dropping implementation in the (1) pam_env and (2) pam_mail modules in Linux-PAM (aka pam) 1.1.2 does not perform the required setfsgid and setgroups system calls, which might allow local users to obtain sensitive information by leveraging unintended group permissions, as demonstrated by a symlink attack on the .pam_environment file in a user's home directory. NOTE: this vulnerability exists because of an incomplete fix for CVE-2010-3435. (CVE-2010-3430)
The privilege-dropping implementation in the (1) pam_env and (2) pam_mail modules in Linux-PAM (aka pam) 1.1.2 does not check the return value of the setfsuid system call, which might allow local users to obtain sensitive information by leveraging an unintended uid, as demonstrated by a symlink attack on the .pam_environment file in a user's home directory. NOTE: this vulnerability exists because of an incomplete fix for CVE-2010-3435. (CVE-2010-3431)
The pam_sm_close_session function in pam_xauth.c in the pam_xauth module in Linux-PAM (aka pam) 1.1.2 and earlier does not properly handle a failure to determine a certain target uid, which might allow local users to delete unintended files by executing a program that relies on the pam_xauth PAM check. (CVE-2010-4706)
|Created:||February 14, 2011||Updated:||September 14, 2012|
|Description:||From the Pardus advisory:
It was discovered that the patch utility allowed '..' in path names which could allow an attacker to create arbitrary files using a specially-crafted patch file.
|Package(s):||mod_php php-cli php-common||CVE #(s):||CVE-2010-4697 CVE-2010-4698|
|Created:||February 10, 2011||Updated:||May 5, 2011|
|Description:||From the Pardus advisory:
CVE-2010-4697: Use-after-free vulnerability in the Zend engine in PHP before 5.2.15 and 5.3.x before 5.3.4 might allow context-dependent attackers to cause a denial of service (heap memory corruption) or have unspecified other impact via vectors related to use of __set, __get, __isset, and __unset methods on objects accessed by a reference.
CVE-2010-4698: Stack-based buffer overflow in the GD extension in PHP before 5.2.15 and 5.3.x before 5.3.4 allows context-dependent attackers to cause a denial of service (application crash) via vectors related to the imagepstext function and invalid anti-aliasing.
|Package(s):||mod_php php-cli php-common||CVE #(s):||CVE-2011-0752 CVE-2011-0753 CVE-2011-0755|
|Created:||February 14, 2011||Updated:||April 5, 2011|
|Description:||From the Pardus advisory:
The extract function in PHP before 5.2.15 does not prevent use of the EXTR_OVERWRITE parameter to overwrite (1) the GLOBALS superglobal array and (2) the this variable, which allows context-dependent attackers to bypass intended access restrictions by modifying data structures that were not intended to depend on external input. (CVE-2011-0752)
Race condition in the PCNTL extension in PHP before 5.3.4, when a user-defined signal handler exists, might allow context-dependent attackers to cause a denial of service (memory corruption) via a large number of concurrent signals. (CVE-2011-0753)
Integer overflow in the mt_rand function in PHP before 5.3.4 might make it easier for context-dependent attackers to predict the return values by leveraging a script's use of a large max parameter, as demonstrated by a value that exceeds mt_getrandmax. (CVE-2011-0755)
|Package(s):||phpmyadmin||CVE #(s):||CVE-2011-0986 CVE-2011-0987|
|Created:||February 14, 2011||Updated:||February 25, 2011|
|Description:||From the Mandriva advisory:
When the files README, ChangeLog or LICENSE have been removed from their original place (possibly by the distributor), the scripts used to display these files can show their full path, leading to possible further attacks (CVE-2011-0986).
It was possible to create a bookmark which would be executed unintentionally by other users (CVE-2011-0987).
|Created:||February 14, 2011||Updated:||February 16, 2011|
|Description:||From the Pardus advisory:
Due to an integer overflow when parsing CharCodes for fonts and a failure to check the return value of a memory allocation, it is possible to trigger writes to a narrow range of offsets from a NULL pointer.
|Package(s):||python-django||CVE #(s):||CVE-2011-0696 CVE-2011-0697|
|Created:||February 14, 2011||Updated:||October 5, 2011|
|Description:||From the Debian advisory:
For several reasons the internal CSRF protection was not used to validate ajax requests in the past. However, it was discovered that this exception can be exploited with a combination of browser plugins and redirects and thus is not sufficient. (CVE-2011-0696)
It was discovered that the file upload form is prone to cross-site scripting attacks via the file name. (CVE-2011-0697)
|Created:||February 15, 2011||Updated:||May 2, 2011|
|Description:||From the Ubuntu advisory:
Neil Wilson discovered that if VNC passwords were blank in QEMU configurations, access to VNC sessions was allowed without a password instead of being disabled. A remote attacker could connect to running VNC sessions of QEMU and directly control the system. By default, QEMU does not start VNC sessions.
|Created:||February 16, 2011||Updated:||March 28, 2011|
|Description:||The chfn and chsh utilities fail to properly sanitize user input, allowing the injection of newlines into the password file; that, in turn, allows the addition of arbitrary entries.|
|Package(s):||tomcat6||CVE #(s):||CVE-2010-3718 CVE-2011-0013 CVE-2011-0534|
|Created:||February 14, 2011||Updated:||October 20, 2011|
|Description:||From the Debian advisory:
It was discovered that the SecurityManager insufficiently restricted the working directory. (CVE-2010-3718)
It was discovered that the HTML manager interface is affected by cross-site scripting. (CVE-2011-0013)
It was discovered that the NIO connector performs insufficient validation of the HTTP headers, which could lead to denial of service. (CVE-2011-0534)
|Created:||February 11, 2011||Updated:||April 7, 2011|
|Description:||From the CVE entry:
demux/mkv/mkv.hpp in the MKV demuxer plugin in VideoLAN VLC media player 1.1.6.1 and earlier allows remote attackers to cause a denial of service (crash) and execute arbitrary commands via a crafted MKV (WebM or Matroska) file that triggers memory corruption, related to "class mismatching" and the MKV_IS_ID macro.
|Package(s):||vlc vlc-firefox||CVE #(s):||CVE-2011-0021|
|Created:||February 14, 2011||Updated:||February 16, 2011|
|Description:||From the Pardus advisory:
Multiple heap-based buffer overflows in cdg.c in the CDG decoder in VideoLAN VLC Media Player before 1.1.6 allow remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted CDG video.
|Created:||February 14, 2011||Updated:||April 19, 2011|
|Description:||From the Pardus advisory:
Wireshark 1.5.0, 1.4.3, and earlier frees an uninitialized pointer during processing of a .pcap file in the pcap-ng format, which allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a malformed file.
Page editor: Jake Edge
Brief items

The current development kernel was released on February 15. The patch volume is dropping (a bit) as this kernel stabilizes, so there are not a lot of new features, but there are some important bug fixes here. Details can be found in the full changelog.
And yes, next time this discussion comes up, I _will_ remove that piece-of-sh*t. It's a disease. It's just a stupid way to say "somebody else should deal with this problem". It's a way to make excuses. It's crap. It was a mistake to ever take any of that to begin with.
There are a lot of enhancements in the pipeline. A bad block log would allow RAID arrays to continue functioning in the presence of bad blocks without needing to immediately eject the offending drive. There is a variant on "hot replace" which would allow a new drive to be inserted before removing the old one, thus allowing the array to continue with a full complement of drives while the new one is being populated. Tracking of areas which are known not to contain useful data would reduce synchronization costs. A number of proposed enhancements to the "reshape" functionality would make it more robust and flexible and allow operations to be undone. A number of other changes are contemplated as well; see Neil's post for the full list.
Except that, sometimes, that's exactly what a system administrator may want to do. Limiting the maximum share of CPU time that a process (or group of processes) may consume can be desirable if those processes belong to a customer who has only paid for a certain amount of CPU time or in situations where it is necessary to provide strict resource-use isolation between processes. The CFS scheduler cannot limit CPU use in that manner, but the CFS bandwidth control patches, posted by Paul Turner, may change that situation.
This patch adds a couple of new control files to the CPU control group mechanism: cpu.cfs_period_us defines the period over which the group's CPU usage is to be regulated, and cpu.cfs_quota_us controls how much CPU time is available to the group over that period. With these two knobs, the administrator can easily limit a group to a certain amount of CPU time and also control the granularity with which that limit is enforced.
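A configuration sketch of how those two knobs might be used, assuming the patches are applied and the cpu controller is mounted at /cgroup/cpu (paths, group name, and values are illustrative, not taken from the patch posting):

```shell
# Mount the cpu control group hierarchy (if not already mounted)
mkdir -p /cgroup/cpu
mount -t cgroup -o cpu none /cgroup/cpu

# Create a group limited to 25ms of CPU time per 100ms period -
# i.e. one quarter of a single CPU
mkdir /cgroup/cpu/limited
echo 100000 > /cgroup/cpu/limited/cpu.cfs_period_us
echo 25000  > /cgroup/cpu/limited/cpu.cfs_quota_us

# Move a process (by PID) into the group
echo 1234 > /cgroup/cpu/limited/tasks
```

A shorter period enforces the limit at a finer granularity, at the cost of more frequent throttling decisions.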
Paul's patch is not the only one aimed at solving this problem; the CFS hard limits patch set from Bharata B Rao provides nearly identical functionality. The implementation is different, though; the hard limits patch tries to reuse some of the bandwidth-limiting code from the realtime scheduler to impose the limits. Paul has expressed concerns about the overhead of using this code and how well it will work in situations where the CPU is almost fully subscribed. These concerns appear to have carried the day - there has not been a hard limits patch posted since early 2010. So the CFS bandwidth control patches look like the form this functionality will take in the mainline.
Kernel development news
That said, your editor was recently amused by this message on the golang-dev list indicating that the developers of the Go language have adopted a solution of equal elegance. Go has memory management and garbage collection built into it; the developers believe that this feature is crucial, even in a systems-level programming language. From the FAQ:
In the process of trying to reach that goal of "low enough overhead and no significant latency," the Go developers have made some simplifying assumptions, one of which is that the memory being managed for a running application comes from a single, virtually-contiguous address range. Such assumptions can run into the same problem your editor hit with vi - other code can allocate pieces in the middle of the range - so the Go developers adopted the same solution: they simply allocate all the memory they think they might need (they figured, reasonably, that 16GB should suffice on a 64-bit system) at startup time.
That sounds like a bit of a hack, but an effort has been made to make things work well. The memory is allocated with an mmap() call, using PROT_NONE as the protection parameter. This call is meant to reserve the range without actually instantiating any of the memory; when a piece of that range is actually used by the application, the protection is changed to make it readable and writable. At that point, a page fault on the pages in question will cause real memory to be allocated. Thus, while this mmap() call will bloat the virtual address size of the process, it should not actually consume much more memory until the running program actually needs it.
This mechanism works fine on the developers' machines, but it runs into trouble in the real world. It is not uncommon for users to use ulimit -v to limit the amount of virtual memory available to any given process; the purpose is to keep applications from getting too large and causing the entire system to thrash. When users go to the trouble to set such limits, they tend, for some reason, to choose numbers rather smaller than 16GB. Go applications will fail to run in such an environment, even though their memory use is usually far below the limit that the user set. The problem is that ulimit -v does not restrict memory use; it restricts the maximum virtual address space size, which is a very different thing.
One might argue that, given what users typically want to do with ulimit -v, it might make more sense to have it restrict resident set size instead of virtual address space size. Making that change now would be an ABI change, though; it would also make Linux inconsistent with the behavior of other Unix-like systems. Restricting resident set size is also simply harder than restricting the virtual address space size. But even if this change could be made, it would not help current users of Go applications, who may not update their kernels for a long time.
One might also argue that the Go developers should dump the contiguous-heap assumption and implement a data structure which allows allocated memory to be scattered throughout the virtual address space. Such a change also appears not to be in the cards, though; evidently that assumption makes enough things easy (and fast) that they are unwilling to drop it. So some other kind of solution will need to be found. According to the original message, that solution will be to shift allocations for Go programs (on 64-bit systems) up to a range of memory starting at 0xf800000000. No memory will be allocated until it is needed; the runtime will simply assume that nobody else will take pieces of that range in between allocations. Should that assumption prove false, the application will die messily.
For now, that assumption is good; the Linux kernel will not hand out memory in that range unless the application asks for it explicitly. As with many things that just happen to work, though, this kind of scheme could break at any time in the future. Kernel policy could change, the C library might begin doing surprising things, etc. That is always the hazard of relying on accidental, undocumented behavior. For now, though, it solves the problem and allows Go programs to run on systems where users have restricted virtual address space sizes.
It's worth considering what a longer-term solution might look like. If one assumes that Go will continue to need a large, virtually-contiguous heap, then we need to find a way to make that possible. On 64-bit systems, it should be possible; there is a lot of address space available, and the cost of reserving unused address space should be small. The problem is that ulimit -v is not doing exactly what users are hoping for; it regulates the maximum amount of virtual memory an application can use, but it has relatively little effect on how much physical memory an application consumes. It would be nice if there were a mechanism which controlled actual memory use - resident set sizes - instead.
As it turns out, we have such a mechanism in the memory controller. Even better, this controller can manage whole groups of processes, meaning that an application cannot increase its effective memory limit by forking. The memory controller is somewhat resource-intensive to use (though work is being done to reduce its footprint) and, like other control group-based mechanisms, it's not set up to "just work" by default. With a bit of work, though, the memory controller could replace ulimit -v and do a better job as well. With a suitably configured controller running, a Go process could run without limits on address space size and still be prevented from driving the system into thrashing. That seems like a more elegant solution, somehow.
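As a configuration sketch, using the cgroup v1 interface of the era (the mount point, group name, 512M limit, and ./my-go-program binary are all illustrative):

```shell
# Mount the memory controller and create a group with a cap on
# actual (resident + cache) memory use rather than address space
mkdir -p /cgroup/memory
mount -t cgroup -o memory none /cgroup/memory
mkdir /cgroup/memory/go-apps
echo 512M > /cgroup/memory/go-apps/memory.limit_in_bytes

# Run a Go program inside the group: its virtual address space can
# be enormous, but its physical memory consumption is bounded
echo $$ > /cgroup/memory/go-apps/tasks
./my-go-program
```

Unlike `ulimit -v`, the limit here applies to the whole group, so forking does not escape it.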
SELinux works by matching a specific access attempt against the permissions granted to the calling process. For system calls like write(), the type of access is obvious - the process is attempting to write to an object. With ioctl(), things are not quite so clear. In past times, SELinux would attempt to deal with ioctl() calls by looking at the specific command to figure out what the process was actually trying to do; a FIBMAP command, for example (which reads a map of a file's block locations) would be allowed to proceed if the calling process had the permission to read the file's attributes.
There are a couple of problems with this approach, starting with the fact that the number of possible ioctl() commands is huge. Even without getting into obscure commands implemented by a single driver, trying to enumerate them all and determine their effects is a road to madness. But it gets worse, in that the intended behavior of a given command may not match what a specific driver actually does in response to that command. So the only way to really know what an ioctl() command will do is to figure out what driver is behind the call, and to have some knowledge of what each driver does. Simply creating this capability is not a task for sane people; maintaining it would not be a task for anybody wanting to remain sane. So security module developers were looking for a better way.
They thought they had found one when somebody realized that the command codes used by ioctl() implementations are not random numbers. They are, instead, a carefully-crafted 32-bit quantity which includes an 8-bit "type" field (approximately identifying the driver implementing the command), a driver-specific command code, a pair of read/write bits, and a size field. Using the read/write bits seemed like a great way to figure out what sort of access the ioctl() call needed without actually understanding the command. Thus, a patch to SELinux was merged for 2.6.27 which ripped out the command recognition and simply used the read/write bits in the command code to determine whether a specific call should be allowed or not.
That change remained for well over two years until Eric Paris noticed that, in fact, it made no sense at all. Most ioctl() calls involve the passing of a data structure into or out of the kernel; that structure describes the operation to be performed or holds data returned from the kernel - or both. The size field in the command code is the size of this structure, and the permission bits describe how the structure will be accessed by the kernel. Together, that information can be used by the core ioctl() code to determine whether the calling process has the proper access rights to the memory behind the pointer passed to the kernel.
What those bits do not do, as Eric pointed out, is say anything about what the ioctl() call will do to the object identified by the file descriptor passed to the kernel. A call passing read-only data to the kernel may reformat a disk, while a call with writable data may just be querying hardware information. So using those bits to determine whether the call should proceed is unlikely to yield good results. It's an observation which seems obvious when spelled out in this way, but none of the developers working on security noticed the problem at the time.
So that code has to go - but, as of this writing, it has not been changed in the mainline kernel. There is a simple reason for that: nobody really knows what sort of logic should replace it. As discussed above, simply enumerating command codes with expected behavior is not a feasible solution either. So something else needs to be devised, but it's not clear what that will be.
Stephen Smalley pointed out one approach which was posted back in 2005. That patch required drivers (and other code implementing ioctl()) to provide a special table associating each command code with the permissions required to execute the command. The obvious objections were raised at that time: changing every driver in the system would be a pain, ioctl() implementations are already messy enough as it is, the tables would not be maintained as the driver changed, and so on. The idea was eventually dropped. Bringing it back now seems unlikely to make anybody popular, but there is probably no other way to truly track what every ioctl() command is actually doing. That knowledge resides exclusively in the implementing code, so, if we want to make use of that knowledge elsewhere, it needs to be exported somehow.
Of course, the alternative is to conclude that (1) ioctl() is a pain, and (2) security modules are a pain. Perhaps it's better to just give up and hope that discretionary access controls, along with whatever checks may be built into the driver itself, will be enough. That is, essentially, the solution we have now.
The "completely fair queueing" (CFQ) I/O scheduler tries to divide the available bandwidth on any given device fairly between the processes which are contending for that device. "Bandwidth" is measured not in the number of bytes transferred, but the amount of time that each process gets to submit requests to the queue; in this way, the code tries to penalize processes which create seek-heavy I/O patterns. (There is also a mode based solely on the number of I/O operations submitted, but your editor suspects it sees relatively little use). The CFQ scheduler also supports group scheduling, but in an incomplete way.
Imagine the group hierarchy shown on the right; here we have three control groups (plus the default root group), and four processes running within those groups. If every process were contending fully for the available I/O bandwidth, and they all had the same I/O priority, one would expect that bandwidth to be split equally between P0, Group1, and Group2; thus P0 should get twice as much I/O bandwidth as either P1 or P3. If more processes were to be added to the root, they should be able to take I/O bandwidth at the expense of the processes in the other control groups. Similarly, the creation of new control groups underneath Group1 should not affect anybody outside of that branch of the hierarchy. In current kernels, though, that is not how things work.
With the current implementation of CFQ group scheduling, the above hierarchy is transformed into something that looks like this:
The CFQ group scheduler currently treats all groups - including the root group - as being equal, at the same level in the hierarchy. Every group is a top-level group. This level of grouping will be adequate for a number of situations, but there will be other users who want the full hierarchical model. That is why control groups were made to be hierarchical in the first place, after all.
The hierarchical CFQ group scheduling patch set from Gui Jianfeng aims to make that feature available. These patches introduce a new cfq_entity structure which is used for the scheduling of both processes and groups; it is clearly modeled after the sched_entity structure used in the CPU scheduling code. With this in place, the I/O scheduler can just give bandwidth to the top-level cfq_entity which has run up the least "vdisktime" so far; if that entity happens to be a group, the scheduling code drops down a level and repeats the process. Sooner or later, the entity which is scheduled for I/O will be an actual process, and the scheduler can start dispatching I/O requests.
This patch set is on its fourth revision; the previous iterations have led to significant changes. It appears that there are a few things to fix up still, but this work seems to be getting closer to being ready.
One thing is worth bearing in mind: there are two I/O bandwidth controllers in contemporary Linux kernels: the proportional bandwidth controller (built into the CFQ scheduler) and the throttling controller built into the block layer. The group scheduling changes only apply to the proportional bandwidth controller. Arguably there is less need for full group scheduling with the throttling controller, which puts absolute caps on the bandwidth available to specific processes.
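A configuration sketch of the difference, assuming the blkio controller is mounted at /cgroup/blkio (the group names, weights, and 8:0 device number are illustrative):

```shell
# Proportional bandwidth (CFQ): "fast" gets twice the share of
# disk time that "slow" does when both are contending
mkdir /cgroup/blkio/fast /cgroup/blkio/slow
echo 1000 > /cgroup/blkio/fast/blkio.weight
echo 500  > /cgroup/blkio/slow/blkio.weight

# Throttling: cap reads from device 8:0 at 10MB/s in absolute
# terms, however idle the disk otherwise is
echo "8:0 10485760" > /cgroup/blkio/slow/blkio.throttle.read_bps_device
```

The proportional weights only matter under contention; the throttle cap applies unconditionally, which is why hierarchy matters less for it.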
Controlling I/O bandwidth has a lot of applications; providing some isolation between customers on a shared hosting service is an obvious example. But this feature may yet prove to have value on the desktop as well; many interactivity problems come down to contention for I/O bandwidth. Anybody who has tried to start an office suite while simultaneously copying a video image on the same drive understands how bad it can be. If the group I/O scheduling feature can be made to "just work" like the group CPU scheduling, we may have made another step toward a truly responsive Linux desktop.
Patches and updates
Core kernel code
Filesystems and block I/O
Benchmarks and bugs
Page editor: Jonathan Corbet
Ubuntu's 11.04 release ("Natty Narwhal") is going to be an important inflection point for the project, and for Canonical. The company is banking on its users, and potential users, embracing a user interface (Unity) that differs significantly from the previous Ubuntu release as well as other familiar desktop UIs. Further, the target release date is less than three months away and significant chunks of the Unity interface are still unfinished. The second alpha release on February 3 shows promise, but there is significant work left to be done.
The most interesting, or at least most visible, change is in the shift to Unity. Canonical began work on Unity during the 10.10 cycle for the Ubuntu Netbook Remix. Despite the less-than-exuberant reception for Unity on 10.10, where some vendors opted to remain on 10.04 for netbooks, Canonical decided to push ahead and make Unity the default shell in 11.04 rather than adopting GNOME Shell from GNOME 3.0.
Why has Canonical chosen to take this route instead of GNOME Shell? In part because of differing visions for the desktop. Ubuntu developer Jorge Castro pointed to different ideas, for example, about Application Indicators. While GNOME Shell and Unity have some similarities, the projects also diverge significantly. Initially designed to use the new GNOME window manager (Mutter), Mark Shuttleworth has said that Canonical was unhappy with its performance — which has led to using Compiz instead. There were also problems with getting the Zeitgeist data engine fully integrated with upstream GNOME.
The second alpha lives up to the alpha name. You don't expect that an alpha will be ready for prime time, but this alpha has more bugs than is expected from an Ubuntu development release due to Unity development and some major shifts in underlying packages. Specifically, the alpha was pushed out very shortly after the transition to X Server 1.10 and the rest of the X.org stack, which breaks the proprietary Nvidia and ATI drivers and has a few bugs when using the Intel drivers as well.
Booting the standard desktop ISO to install or test 11.04 alpha 2 on many systems (or under VirtualBox or VMware) is unlikely to result in much joy due to the changes in the X.org stack. This, however, is likely to be resolved by the time that the third alpha ships in March. For determined developers and testers, it is possible to get a working install. Users who have been running the first Natty alpha will escape the problems in the transition, as the upgrade won't replace the affected X packages. I was also able to upgrade a system running Ubuntu 10.10 in place to Natty without problems, though it required manually installing the Nouveau driver to be able to use the default Unity interface. Unity is now no longer dependent on Mutter (as it was in 10.10), and is instead using Compiz.
Unity's UI consists of the Launcher on the left-hand side of the screen, a Panel at the top of the screen, and a Home button (also referred to as the Big Freaking Button) on the extreme left on the panel. The BFB now brings up, or should, the Dash (dashboard) with applications and a search bar that allows the user to search the system for applications, files, etc. In this alpha, however, it simply brings up a blank Dash that's approximately the size of a netbook screen. Castro said that it will eventually be re-sizable so users can expand it to fit the whole screen or just part of the screen at their preference.
The Launcher holds icons or items, which can be for individual applications (such as Firefox) or "Places." What's a Place? One example is the Application place which should display the most used applications as a top row and then all installed applications grouped by category, or displayed alphabetically. But the hope is that developers will create Places that are much more specialized. Castro described it to me as "like a Firefox special search on steroids." Eventually, Castro says, developers should be able to create Places for all manner of things — one example would be an IMDB "place" that would allow users to search IMDB via a launcher and see results in an overlay from the Launcher.
The top panel implements a global application menu that works with most applications. This means that instead of displaying the standard "File, Edit, View," etc. menu items in each window, they are displayed in the Panel. This works with standard GNOME and Qt applications, but there are some outliers — like Firefox, LibreOffice, and Eclipse to name just three — that don't use GTK or Qt menuing. For Firefox (and Thunderbird) this is being implemented as an extension by Chris Coulson that should be ready in time for 11.04. However, it seems likely that there will be at least some percentage of applications that will not quite fit in the standard Unity UI for some time.
Whether the switch to a global application menu is preferable or not is left as an exercise to the reader. The per-window menu mode is deeply ingrained for many of us, so even when the menu works properly for all applications it's going to take some getting used to. Having it implemented for most, but not all, applications is likely to irritate many users.
Unity also has a workspace switcher that allows users to view all workspaces in a tiled view, move applications back and forth between workspaces, or switch between them. This is not dissimilar to the way that GNOME Shell works, or Spaces in Mac OS X.
Overall, the release (if you can get it running) is usable but not entirely stable. A helpful tip: if Unity crashes but the desktop session remains open, you can restart and refresh Unity with unity --refresh. You have to run this from a terminal emulator, though, as Unity does not yet implement a run dialog that can be invoked with Alt-F2. Castro said that they're likely to use the GNOME Completion-Run Utility, but it hasn't been decided yet.
Though not yet implemented in the alpha, by the time 11.04 ships, there will be an API in place for applications to have a progress meter and/or number on the launcher. If you've used an iOS or Android device, you've probably seen something similar with the application icons on those devices. Castro says that the idea is to stop cluttering the system tray with application-specific notifications and move them to the application icons, keeping system-level notifications and controls (such as the sound volume or network indicators) in the system tray. A mockup can be found on Castro's post about the libunity library. One might wonder, what happens on other distributions without libunity with applications that have implemented these features? Castro says that they'll still run fine on other distributions without any problems, though without the notifications.
What if you don't have supported 3D hardware? Natty will fall back to the standard GNOME 2.32 interface, even though Canonical is working on a 2D Unity interface based on Qt for Ubuntu on ARM. Why not default to Unity 2D for the x86/AMD64 releases of 11.04 as well? The primary issue here is making space for the Qt libraries on the installation CD. However, the plan now is to make space for those libraries in time for Ubuntu 11.10.
Users also won't be seeing an option for GNOME 3.0 in 11.04, either. In fact, they won't be seeing the option in the Software Center. The decision was made mid-January and announced by Sebastien Bacher on the ubuntu-desktop list, where Bacher said "we don't feel integrating GNOME3 with a high quality level in Ubuntu is a job which can be done in one cycle and we prefer to delay it to be default next cycle."
Specifically, Bacher says that "it's not really possible to bring some updated components or [software] in without bringing the GNOME3 desktop" which left the desktop team to decide whether to switch to GNOME 3 in the 11.04 cycle. The decision ultimately was to remain on GNOME 2.32, which is the basis for Ubuntu's 2D fallback. There's also the small matter that GNOME 3.0 would probably not be ready in time for the feature freeze for 11.04 toward the end of February. At any rate, users will need to seek out a Personal Package Archive (PPA) for GNOME 3.0 on 11.04 if they prefer that interface. Castro did indicate that Ubuntu was open to making available an Ubuntu-based release with GNOME 3.0 at some point if there were contributors interested in doing the work.
For contributors interested in working on Unity, there's plenty of room. The project has a collection of small bugs and projects under the "bitesize" label that should be a good option for new contributors. It should be noted, however, that even "bite-sized" patches require agreement to Canonical's contributor agreement, which is less than universally loved by free and open source software developers.
Though buggy and incomplete, the implementation of Unity as it stands now looks interesting. It's unlikely to appeal to GNOME 2.x stalwarts, but it's unclear whether GNOME 3.0 will either. It's an interface that may appeal to non-Linux users, if Canonical can find hardware partners to ship it pre-installed.
Debian GNU/Linux

The volatile archive is being discontinued for squeeze. "It is replaced by the suite squeeze-updates on the official mirrors. Its management will move to the Debian Release Team, who already manage regular updates to Debian stable and oldstable."
Fedora

The Fedora project has announced an "edited" planet. "The adjective edited came from the fact that this planet will be maintained and edited by a group of people (the editors), that will make sure appropriate and relevant content gets posted."
Ubuntu family

The Natty release schedule has been adjusted. "After reviewing the plans at the end of this release, it was felt that a release candidate release on April 21st showing up just before the easter holiday would be a bit late." Beta 2 is scheduled for April 14.
Newsletters and articles of interest
Page editor: Rebecca Sobol
The 2011 FOSDEM conference had a Configuration and Systems Management developer room on its second day. This first meeting about configuration management and automation with open source tools was organized by the people from Puppet Labs and had a focus on Puppet, but other tools like Chef and Cfengine were also discussed.
Configuration management is about establishing and maintaining consistency of a system throughout its life. For software, this means that the system has to track and control all configuration changes, which can be the contents of files in /etc, the installation of specific packages, file permissions, users, and so on. Having a configuration management tool for your systems is useful in a lot of ways: you can automatically repair a system's configuration after a failure, you can easily reproduce a specific configuration on another system, you can audit changes, and, if you pair the configuration management system with a version control system like Git, you can always return to a known-good configuration if things go wrong. Where configuration management systems really shine is when you have a large number of systems networked together: by automating the configuration, you save the system administrator's time and you're sure that all systems are configured consistently.
The big three configuration management systems for Linux are Puppet (used by Red Hat, Citrix, and the Los Alamos National Laboratory), Chef (used by Engine Yard, 37signals, and Scribd), and Cfengine 3 (used by Facebook, AMD, and the Joint Australia Tsunami Warning Centre). Puppet and Chef are broadly similar in architecture, but Puppet has a language designed specifically for the task of describing resources, while Chef uses the general-purpose programming language Ruby to configure resources. Also, Chef seems to be aimed more at developers who want to deploy their web applications, and it doesn't support as many platforms as Puppet does. Cfengine is the grandfather of these configuration management systems (with Cfengine 3 being a total rewrite); among its advantages are a lower memory footprint and higher performance than Puppet and Chef, but in recent years its popularity has declined. Other configuration management systems present in the developer room were FusionInventory, GLPI, and OPSI.
In his case study about Linux system engineering in air traffic control, Stefan Schimanski showed how scalable Puppet really is and how it can guarantee reliable mass deployment of the Linux-based, mission-critical applications needed in air traffic control centers. Air traffic grows yearly, so the number of computer systems that handle these flights is also growing, as is the workload for the system administrators. Moreover, the systems need true 24/7/365 high availability: if they go down for 30 minutes, air traffic control has a really big problem. For example, if a computer in a control center freezes, the operator is essentially blind.
These strong requirements, coupled with the growing number of servers, mean that air traffic control centers need automatic installation of every system, with minimal downtime and fast rollbacks. Moreover, the informal requirements documents written by non-technical people must be converted into formal specifications of each system's configuration, so that the systems can be standardized and their configuration made reproducible. Therefore, Schimanski rethought his system engineering approach in 2010 and turned to Puppet.
One thing that Puppet makes easy is distinguishing between the abstract requirements and the concrete implementation. For each node, the system administrator can define how the node has to be configured in an abstract way, e.g. by including classes for a desktop node, a server node, a webserver node, and so on. By reading these node definitions, you can easily see what the node is supposed to be doing, without having to bother with the concrete implementation, which is written in separate files for these classes. For example, the webserver class installs and configures Apache and also includes the configuration of the server class. Moreover, according to Schimanski a good Puppet configuration introduces traceability, which is essential in that kind of environment: "If someone asks where requirement #91 of the requirements document is implemented, it's easy to point out the Puppet code that implements this."
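The separation Schimanski describes can be sketched in Puppet's declarative language. The node and class contents below are hypothetical illustrations of the pattern, not his actual manifests:

```puppet
# Abstract: what this node is supposed to be, not how it is configured.
node 'www1.example.com' {
  include webserver
}

# Concrete: the baseline every server gets.
class server {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    require => Package['ntp'],
  }
}

# Concrete: a webserver is a server that also runs Apache.
class webserver {
  include server
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    require => Package['httpd'],
  }
}
```

Reading the node definition alone tells you the machine's role; the implementation details live entirely in the class files.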
Another interesting idea that Schimanski introduced in his talk was the concept of a meta-distribution: the air traffic control systems are implemented as SUSE Linux Enterprise and Red Hat Enterprise Linux servers, but the Linux distribution itself is completely interchangeable. The AutoYaST or Kickstart files for the installation are minimal, and almost all configuration is done in the form of Puppet modules, e.g. for NTP and other services. The result is a heavily customized enterprise Linux distribution, but all of those customizations are documented in a completely formal way, which Schimanski presented as the rationale behind the approach.
To a certain degree, Puppet modules can be written in an operating system independent way. There are always some minor differences, such as where the distribution puts its configuration files, but this can be abstracted away with variables that get their value (e.g. the file path) depending on the operating system. Of course you have to check these little things before migrating to another operating system, so it's not effortless, but according to Schimanski, Puppet makes migrating a lot easier.
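In Puppet, that kind of abstraction is typically done with a case statement on a fact such as $operatingsystem; a hypothetical sketch:

```puppet
# Sketch: isolating distribution differences in one params class,
# so the rest of the module stays operating-system independent.
# Package names and paths below are illustrative.
class webserver::params {
  case $operatingsystem {
    'RedHat', 'CentOS': {
      $apache_pkg  = 'httpd'
      $apache_conf = '/etc/httpd/conf/httpd.conf'
    }
    'Debian', 'Ubuntu': {
      $apache_pkg  = 'apache2'
      $apache_conf = '/etc/apache2/apache2.conf'
    }
    default: {
      # Fail loudly before migrating to an untested platform.
      fail("webserver module not supported on ${operatingsystem}")
    }
  }
}
```

Classes that inherit or include these parameters then refer only to $apache_pkg and $apache_conf, which is what makes swapping the underlying distribution feasible.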
The talks also showed that a nice ecosystem of tools is developing around Puppet. For example, Henrik Lindberg gave a demo of Geppetto, a new Eclipse-based project developing tools to simplify the process of authoring and using Puppet manifests and modules. The near-term objectives of the project are flattening the learning curve for new Puppet users, supporting best practices, and encouraging the sharing of Puppet modules. Under the hood, Geppetto has a grammar for the Puppet DSL (Domain Specific Language), written with Xtext. Thanks to Xtext, this automatically yields an Eclipse editor that knows the Puppet language and offers syntax coloring, code completion, code folding, and reporting of syntax errors and warnings. Moreover, when creating a Puppet module you can enter metadata and choose dependencies, and at the end you can export the module to a zip file for upload to the Puppet Forge. The Geppetto integrated development environment can be downloaded as a stand-alone product for Linux, Windows, or Mac OS X, or as a separate plug-in for Eclipse.
Another rising star in the Puppet ecosystem is Foreman, presented by its creator Ohad Levy, who joined the ranks of Red Hat in August 2010 as a principal software engineer in its cloud team. The project is now a year and a half old and has 20 contributors, and, according to Levy, Foreman will at some point be part of Red Hat's cloud portfolio. Foreman integrates with Puppet and acts as a web-based dashboard for it, providing real-time information about the status of hosts based on Puppet reports, statistics, and so on. Moreover, Foreman takes care of the low-level details of setting up machines and installing the Puppet client on them, until Puppet is able to take over the configuration defined in your Puppet modules. It even supports creating virtual machines using the libvirt API, with RHEV-M and Amazon EC2 support in the works. The largest installation managed by Foreman that Levy knows about is running 4000 active hosts. This is clearly a project to watch, as it is backed by Red Hat and has the potential to make managing an environment with Puppet a lot easier.
Configuration management is not only useful for system administrators installing servers, but also for developers setting up their development environments. Gareth Rushgrove talked about using configuration management tools to get new employees up and running quickly with a development virtual machine. Especially interesting was his coverage of Vagrant, a tool for automated virtual machine creation with Oracle's VirtualBox. With automated provisioning of the virtual environments via Puppet or Chef, developers can get a complete development environment up and running in no time. Users can configure Vagrant to forward ports to the host machine, to configure shared folders, and so on. It's also possible to package an environment in a distributable box, and rebuilding a complete environment from scratch or tearing down the environment when you're done takes a single command. Normally users start by downloading a base box to use with Vagrant (the default one is Ubuntu Lucid Lynx), but they can also build their own base box with a tool like VeeWee.
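A Vagrantfile from that era might look roughly like the following; the box name, ports, and paths are illustrative, and the exact configuration API varied between early Vagrant releases:

```ruby
# Hypothetical Vagrantfile using the pre-1.0 Vagrant configuration API.
Vagrant::Config.run do |config|
  # Base box to build the VM from (an Ubuntu Lucid image was the default).
  config.vm.box = "lucid32"

  # Forward the guest's web server port to port 8080 on the host.
  config.vm.forward_port 80, 8080

  # Share the project source tree with the guest machine.
  config.vm.share_folder "src", "/srv/app", "./src"

  # Provision the machine with Puppet once it boots.
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "dev.pp"
  end
end
```

With a file like this in the project repository, `vagrant up` builds and provisions the whole environment, and `vagrant destroy` tears it down again.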
While Puppet was clearly the most visible configuration management system at FOSDEM, it was not the only one. Joshua Timberman, Sr. Technical Evangelist at Opscode (the creators of Chef), gave a short "Chef 101" talk, followed by an overview of how to use Chef to deploy applications with nothing but the source code repository and data about the application configuration. Traditionally, one deploys applications with tools like tar, rsync, and (in the Ruby world) cap deploy, but what do you then do about the server configuration needed for web servers, load balancers, and database servers? Timberman showed how you can easily deploy web applications with their corresponding servers using various server roles configured in Chef cookbooks. The Chef server itself is a lightweight Ruby on Rails application, and the largest Chef deployment that Timberman knows about has 5000 nodes checking in to the Chef server every 30 minutes.
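Because Chef recipes are plain Ruby, a role's resources can be declared alongside ordinary language constructs; a minimal hypothetical sketch of a web-server recipe:

```ruby
# Hypothetical Chef recipe: Chef's resource DSL embedded in plain Ruby.
package "nginx"

service "nginx" do
  action [:enable, :start]
end

# Since a recipe is Ruby, ordinary loops and data structures work too:
%w[staging production].each do |env|
  template "/etc/nginx/sites-available/myapp-#{env}" do
    source "myapp.conf.erb"
    variables(:environment => env)
    notifies :reload, "service[nginx]"
  end
end
```

This illustrates the contrast with Puppet drawn earlier: where Puppet offers a purpose-built declarative language, Chef leans on the full expressiveness of Ruby.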
The first talk of the day was by Nicolas Charles and Jonathan Clarke, who presented their use of Cfengine at their company Normation and focused on their experiences with disaster recovery. All of their services (web, email, Git repository, Redmine, ...) were running on one hosted server, which used a three-disk RAID5 array, with daily backups, separate virtual machines for each service, and all services automatically installed and configured using Cfengine 3.
When two hard drives failed simultaneously, they at first thought the repair would be easy, as they had backups and used a configuration management system. However, it turned out they had forgotten some things. For example, they had neither automated nor backed up the configuration of the virtual machines, so those had to be re-created manually. Moreover, after watching all the services come back online with the right configuration thanks to Cfengine 3, they still had to restore the data backups by hand, at which point they discovered that a couple of files were missing. The three big lessons here are: don't forget to describe your virtualization setup in your configuration management system, tie your configuration management system in to your backup tool, and always test your backups.
The best quote summarizing the "don't reinvent the wheel" approach of configuration management came from Levy's talk: "Automate as many processes as possible, using best practices where available, and act as the glue between the gaps." In this regard, it is interesting to know that anyone can share their Chef "cookbooks" (packages of "recipes") on cookbooks.opscode.com, and Puppet users can share their Puppet modules on the Puppet Forge. This is great for new users, who can study the modules of other users and reuse them in their own infrastructure. Your author had already automated some of the services on his home network with Puppet, and this configuration management track at FOSDEM was inspiring enough to continue that approach and decrease the amount of glue in his network.
Newsletters and articles
Page editor: Jonathan Corbet
Brief items

MPEG LA has announced a call to form a patent pool around patents that apply to the VP8 video codec, which is part of Google's WebM open web media push. "In order to participate in the creation of, and determine licensing terms for, a joint VP8 patent license, any party that believes it has patents that are essential to the VP8 video codec specification is invited to submit them for a determination of their essentiality by MPEG LA's patent evaluators. At least one essential patent is necessary to participate in the process, and initial submissions should be made by March 18, 2011. Although only issued patents will be included in the license, in order to participate in the license development process, patent applications with claims that their owners believe are essential to the specification and likely to issue in a patent also may be submitted."

Canonical has announced the release of a component catalog that lists Linux-compatible devices. "With this database, corporate buyers can specify the design of their Ubuntu desktops or servers from manufacturers much more efficiently. Individuals can be sure that the key components of the machine they are considering will work with their preferred Ubuntu or Linux distribution. The PC and server industry will also have a simple single source to publicize the work that they do in certifying Linux components and making that knowledge freely available." This looks to be a great resource, but it does not seem to make any distinction between free and binary-only driver support.

The LiMo Foundation has announced the launch of LiMo 4. "LiMo 4 makes extensive use of best of breed technologies from leading open source projects. LiMo's Open Source Policy also promotes strong bilateral engagement with these projects in the interests of maintenance efficiency and market access for future open source innovation. It is planned that LiMo 4 code will become available for public download from July 2011."
The Document Foundation is raising money to establish its legal foundation. "The race for funds is open until March 21st 2011, which marks the beginning of Spring in the northern hemisphere. All users - especially enterprises - are invited to donate to the capital stock of the future foundation."
Articles of interest

The New York Times looks at the Freedom Box Foundation. There will be little new here for most LWN readers, but it's nice to see the effort getting wider attention. "Mr. Moglen said that if he could raise 'slightly north of $500,000,' Freedom Box 1.0 would be ready in one year."

Another article takes a look at version 1.0 of the Open Source Hardware Definition. "The definition, based in part on the OSI's Open Source Definition, covers the requirements for availability of documentation, necessary software and optional attribution. It also covers what attributes the licence used should have such as not being specific to a product, not restricting other hardware or software and being technology neutral."

Elsewhere, coverage of the IBM Watson game-playing supercomputer. "The IBM Watson supercomputer runs on 10 racks of IBM POWER 750 Servers that can be powered by a number of operating systems including IBM's own AIX Unix operating system as well as Linux. IBM chose Linux and more specifically, Novell's SUSE Linux Enterprise Server (SLES) as the underlying operating system for Watson."

Finally, an article talks with some of Nokia's open-source partners about MeeGo. "In particular, although Nokia has said it will continue to support MeeGo, Intel, Nokia's chief MeeGo partner was not pleased. In a statement Intel said: "While we are disappointed with Nokia's decision, Intel is not blinking on MeeGo. We remain committed and welcome Nokia's continued contribution to MeeGo open source.""
Education and Certification

LPI has announced an academic program in Malaysia. "This initiative by LPI affiliate, LPI-Asia Pacific will enable post-secondary academic programs in Malaysia to adopt LPI training as part of their regular IT curriculum. LPI-APAC is working with the Department of Higher Education of the Ministry of Higher Education of Malaysia in introducing this program to both private and public educational institutions within Malaysia."
Calls for Presentations
Upcoming Events

The Android Builders Summit will be held April 13 and 14 in San Francisco, immediately after the Embedded Linux Conference. "Android is expanding to an increasing number of industry segments in addition to smart phones and tablets. There is a need for the ecosystem of builders to collaborate on a common solution for existing limitations and desired features across all of these device categories."
| Date | Event | Location |
|------|-------|----------|
| February 25 | Build an Open Source Cloud | Los Angeles, CA, USA |
| | Southern California Linux Expo | Los Angeles, CA, USA |
| February 25 | Ubucon | Los Angeles, CA, USA |
| February 26 | Open Source Software in Education | Los Angeles, CA, USA |
| | Linux Foundation End User Summit 2011 | Jersey City, NJ, USA |
| March 5 | Open Source Days 2011 Community Edition | Copenhagen, Denmark |
| | Drupalcon Chicago | Chicago, IL, USA |
| | ConFoo Conference | Montreal, Canada |
| | conf.kde.in 2011 | Bangalore, India |
| | PyCon 2011 | Atlanta, Georgia, USA |
| March 19 | Open Source Conference Oita 2011 | Oita, Japan |
| | Chemnitzer Linux-Tage | Chemnitz, Germany |
| March 19 | OpenStreetMap Foundation Japan Mappers Symposium | Tokyo, Japan |
| | Embedded Technology Conference 2011 | San Jose, Costa Rica |
| | OMG Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems | Washington, DC, USA |
| | UKUUG Spring 2011 Conference | Leeds, UK |
| | PgEast PostgreSQL Conference | New York City, NY, USA |
| | Palmetto Open Source Software Conference | Columbia, SC, USA |
| March 26 | 10. Augsburger Linux-Infotag 2011 | Augsburg, Germany |
| | GNOME 3.0 Bangalore Hackfest / GNOME.ASIA SUMMIT 2011 | Bangalore, India |
| March 28 | Perth Linux User Group Quiz Night | Perth, Australia |
| | NASA Open Source Summit | Mountain View, CA, USA |
| | Flourish Conference 2011! | Chicago, IL, USA |
| | Workshop on GCC Research Opportunities | Chamonix, France |
| April 2 | Texas Linux Fest 2011 | Austin, Texas, USA |
| | Camp KDE 2011 | San Francisco, CA, USA |
| | SugarCon 11 | San Francisco, CA, USA |
| | Selenium Conference | San Francisco, CA, USA |
| | 5th Annual Linux Foundation Collaboration Summit | San Francisco, CA, USA |
| | Hack'n Rio | Rio de Janeiro, Brazil |
| April 9 | Linuxwochen Österreich - Graz | Graz, Austria |
| April 9 | Festival Latinoamericano de Instalación de Software Libre | |
| | O'Reilly MySQL Conference & Expo | Santa Clara, CA, USA |
| | 2011 Embedded Linux Conference | San Francisco, CA, USA |
| | 2011 Android Builders Summit | San Francisco, CA, USA |
| April 16 | Open Source Conference Kansai/Kobe 2011 | Kobe, Japan |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds