LWN.net Weekly Edition for February 19, 2009
How (not) to brick the Android Developer Phone
Your editor's adventure with the Android Developer Phone (ADP1) began just before the end of the year. This phone, remember, has the nearly unique selling point that it lacks any sort of lockdown feature. It will happily run any software which is fed to it, from the kernel on up. It thus brings the promise of free software to a market which has traditionally gone out of its way to avoid enabling any sort of freedom. It's actually possible to control the software we run on our phones - but only if we buy the right phone.

The path to exercising this freedom is long and poorly documented, though. Eventually somebody will certainly pull together a definitive resource for developers wanting to hack on their Android phones; until then, one is left digging through a long series of web sites and forums (a few of which are listed below). This article will not be that resource, but, hopefully, it can help to point interested people in the right direction. Please note that this article assumes that you have an ADP1 phone; if you have a locked-down G1 you can still participate in all of the fun and games that follow, but you will need to root the phone first.
The first stop, unfortunately, is the decidedly non-free Android SDK. Actually, this package is only truly mandatory for those wanting to build Android applications of their own. But it contains a pre-built version of the Android Debug Bridge (adb) tool, which is essential for working with the phone over a USB connection. With adb, one can connect to a shell running on the phone, move files back and forth, forward network ports to and from the phone, and more. Yes, one can run a shell directly on the device using the terminal emulator application, but life is certainly much easier when one can use a real keyboard.
Note that it may be necessary to either (1) run adb as root, or (2) play with your udev setup to be able to access the phone via USB.
Putting new software onto the phone involves flashing its onboard NAND storage. There are six partitions on the onboard flash:
dev: size erasesize name
mtd0: 00040000 00020000 "misc"
mtd1: 00500000 00020000 "recovery"
mtd2: 00280000 00020000 "boot"
mtd3: 04380000 00020000 "system"
mtd4: 04380000 00020000 "cache"
mtd5: 04ac0000 00020000 "userdata"
Details about these partitions can be found on this page. Most of them will be fairly obvious in purpose. The "recovery" partition holds a recovery image which can be used to un-brick the phone. In "boot" is the initial system image, while "system" is the root filesystem. Application settings and such go into "userdata". With any luck at all, it should be possible to put a new system onto the phone by flashing only the "boot" and "system" partitions, leaving settings and such in place.
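The sizes in the table above are hexadecimal byte counts, which are not especially readable at a glance. A quick back-of-the-envelope conversion (a Python sketch using the values from the table, nothing more) shows what each partition actually provides:

```python
# Partition sizes from the ADP1 mtd table above, as hexadecimal byte counts.
partitions = {
    "misc":     0x00040000,
    "recovery": 0x00500000,
    "boot":     0x00280000,
    "system":   0x04380000,
    "cache":    0x04380000,
    "userdata": 0x04ac0000,
}
ERASE_SIZE = 0x20000  # 128 KiB erase blocks, the same on every partition

for name, size in partitions.items():
    print(f"{name:8s} {size / 2**20:7.2f} MiB "
          f"({size // ERASE_SIZE} erase blocks)")
```

The "system" partition, at 67.5MiB, is where the bulk of a new image ends up; "boot", at a mere 2.5MiB, holds just the kernel and its ramdisk.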
First, though, comes that sweaty-palms moment when one realizes that one is about to overwrite the operating software on an expensive new toy. A fairly nice new toy that, most likely, has become an important working tool. The idea of turning this nice device into an expensive brick lacks appeal. It might be different if the second-generation Android devices were available; then, at least, one could rationalize an update disaster as a celestial sign that it's time to get a newer phone. In the absence of such an ulterior motive, your editor stepped back from the brink and pondered ways to recover from a failed update.
One method your editor has seen recommended is to simply make copies of the various /dev/mtd/mtd? devices, then use adb to lift those copies off the phone. The system running on the phone has a rather minimal command set, so this copying must be done using cat and shell redirection operators. This experience gives a quick thrill, as if one were reliving the very earliest days of Unix before advanced commands like cp had been invented, but said thrill is quick indeed. Thereafter, one usually wants to go out and install busybox on the device. After making a few strategic symbolic links, one will have something that looks a lot more like an ordinary Linux shell environment. Everybody should be able to run vi on their phone (though emacs appears to be a bit too much to hope for).
Back to backups: an alternative is to use the nandroid script. With nandroid, a simple command will back up all of the useful partitions on the device in a way which lets them be quickly restored. Unfortunately, though, nandroid will not work with a stock phone. At a minimum, one must install busybox, then make links for commands like nc, tar, and md5sum. Alternatively, one can install the modified recovery image from the amazingly productive "JesusFreke," then back up the phone while it is in recovery mode. Either way, one will, once again, end up with a set of image files containing copies of the phone's flash partitions.
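That md5sum link is not just decoration: nandroid records a checksum for each image so a later restore can be verified. The same check is easy to do by hand once the images have been pulled to the host; a minimal Python sketch (the file name and recorded checksum here are hypothetical):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 of a (possibly large) image file in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare a pulled image against the checksum recorded at backup time
# (both the path and the checksum variable are hypothetical):
# assert md5_of("boot.img") == recorded_checksum
```

A mismatch here is far cheaper to discover on the host than after a restore attempt on the phone.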
So what does one do with these image files? The key tool here is fastboot, a command-line tool which runs on a Linux-based host system. With fastboot, one can flash one or more partitions to a USB-connected phone, then reboot into the new code. First, though, one must know the secret handshake: power up the phone while holding down the camera button, connect the USB cable, then hit the "back" button until the display reads "fastboot." Needless to say, the manual that came with the ADP1 did not mention this little detail.
Of course, said "manual" is a single slip of paper showing how to insert the battery.
Once one is convinced of one's ability to recover from a disaster, it's time to try to put some new software onto the phone. If you have built the Android platform from source (a process which will be addressed in the next installment), the result will be new "boot" and "system" images which can be flashed to the phone using fastboot. System images built by others can also take that form, but the more common approach is to package the whole thing up into a Zip file. In such cases, the recipe is as follows:
- Using adb, put the update image onto the SD card on the phone (mounted under /sdcard) as update.zip.
- Reboot the phone into recovery mode (secret handshake: hold down the "home" key while booting).
- Press alt l, followed by alt s. Note that, unlike ordinary ADP1/G1 keyboard operation, you need to actually hold down the "alt" key while pressing the associated letter.
This sequence will cause the phone to rewrite its software with the image found in the update.zip file. Once the process is complete, hitting the "home" and "back" buttons together will reboot the phone into the new image.
So, what might one install via this method? The set of modified images provided by JesusFreke is a good place to start. The JFv1.31 image makes a lot of things work more nicely; it includes busybox with a set of useful links, a fancier recovery image with built-in backup capability, a version of su which asks the user for confirmation (and which, thus, should be harder to exploit from an evil application), and more. Also worth noting is that the JF images disable any over-the-air updates. Such updates should not be happening with an ADP1 phone in any case, but, when one has control over one's own phone, there is no reason to allow outside agencies to drop new software into it.
Rather more fun can be had by going to the JFv1.43 image. This version includes an update from Android, which is said to fix a number of small issues and improve battery life. It adds a voice calling capability which was notably lacking in the original Android distribution. Your editor's first attempt, "call home," was turned into "call mom" though; Google does not appear (yet) to have achieved a level of omniscience sufficient to know that those two have not been synonymous for some years now. Also added is a voice search mechanism. But that's not all: this update includes the multitouch functionality that Google left out; the "pinch" gesture now zooms web pages in and out. Other applications have not been enhanced to use multitouch yet.
Arguably even nicer than multitouch at this point is the new "autorotate" setting in the browser. The ADP1 can report its orientation to an application, but almost no applications make use of that information. So, on a stock ADP1, using the browser in landscape mode requires opening the keyboard. With autorotate turned on, the browser senses when the phone has been turned and adjusts the display accordingly. It's one of those little features which should have been there from the outset.
And that, of course, gets to the heart of why an open phone is such a nice thing to have. We're no longer dependent on the manufacturer to get everything right, and we're no longer dependent on wireless carriers to hold off from crippling our devices. We bought the hardware, and we wrote the software. We are well within our rights to change how it all works - even if we want to do something crazy like install Debian over Android. It is unfortunate that, at this time, so few devices afford this kind of freedom; ADP1 and OpenMoko appear to be about the only options. With any luck at all, awareness of the value of this freedom will spread over time, and vendors will find that their customers will settle for nothing less.
Of course, real freedom doesn't stop at the ability to install software images created by others. The next installment in this series will start to delve into the process of generating a new system image from source. Among other things, your editor intends to take the "cupcake" development version - which includes, among other things, the much-requested on-screen keyboard feature - for a spin. Stay tuned.
Resources. Information about working with Android is spread around the net; here are a few useful places your editor has found:
- There is, of course, good information to be found at source.android.com.
- developer.android.com is the source for the software development kit and related information.
- The xda-developers site is a repository for vast amounts of useful - if noisy and slowly-served - information. In particular, the Dream Android development forums seem to be the primary gathering point for people hacking on this platform.
- Some information - notably new pre-built image announcements - can be found on andblogs.net.
FOSDEM09: RandR 1.3 and multimedia processing extensions for X
At FOSDEM 2009 (Free and Open Source Software Developers' European Meeting) in Brussels, your author attended a number of talks about the state of graphics in Linux. Two of them stood out: Matthias Hopf's talk about RandR 1.3 and Helge Bahmann's work on multimedia processing extensions for X.
RandR 1.3: panning, transformations and properties
Matthias Hopf of SUSE R&D gave an update about RandR 1.3 (Resize and Rotate Extension). So far, RandR 1.2 exposes an interface to dynamically set and query properties such as the displayed and known video modes, the framebuffer size, and attachment of a monitor. However, there are still some important features lacking. For example, querying the state of an output involves output probing, and there is no way for applications to distinguish between the internal panel and an external output, which could be interesting for presentation software. Panning is also lacking, as is the ability to display in a non-1:1 fashion. And last but not least: the framebuffer size of X is limited to its initial allocation.
RandR 1.3, which is to be released with X Server 1.6, should implement a number of these features. With the new version of the extension, it is finally possible to query the state without output probing. The function RRGetScreenResourcesCurrent is equivalent to RRGetScreenResources but does not use polling. However, you won't get notified of new monitors this way. The xrandr command to query the state of a VGA output would be:
xrandr --output VGA --current
Other additions are multi-monitor panning and display transformations. When the mouse hits the screen borders, the viewport has to be changed. For a seamless movement without flickering, the graphics driver needs an update. The --panning option of the new xrandr command has three sets of four parameters, as in this example:
xrandr --output VGA --panning 2000x1200+0+0/2000x1200+0+0/100/100/100/100
The first parameter set is the panning area, the second is the tracking area, and the third gives the borders. Setting the right border to 100, for example, means that panning begins when the pointer comes within 100 pixels of the right edge of the physical screen. The panning area is the area that might be visible on the screen, while the tracking area is the area in which mouse pointer movements influence the pan. In most circumstances the two are identical.
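The slash-separated argument is dense enough to deserve unpacking. A small Python sketch (parsing the example command line above; the left/top/right/bottom ordering of the borders is an assumption here) shows how the pieces break down:

```python
import re

panning = "2000x1200+0+0/2000x1200+0+0/100/100/100/100"

def parse_geometry(spec):
    """Parse a WxH+X+Y geometry string (offsets assumed non-negative here)."""
    w, h, x, y = map(int, re.match(r"(\d+)x(\d+)\+(\d+)\+(\d+)", spec).groups())
    return {"width": w, "height": h, "x": x, "y": y}

parts = panning.split("/")
panning_area  = parse_geometry(parts[0])   # what may become visible
tracking_area = parse_geometry(parts[1])   # where the pointer drives the pan
left, top, right, bottom = map(int, parts[2:6])

print(panning_area)
print((left, top, right, bottom))
```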
There are still some conceptual problems to be solved, according to Hopf. He wonders what the combination of a dual-head configuration and panning should mean: should the whole virtual space pan when the user reaches its side, or should each physical screen pan separately? Or should it be a combination of these two? Xrandr needs an update to accommodate these possibilities. Another problem is that panning and display transformations don't fit together.
Display transformations in RandR 1.3 make it possible to transform the perspective of the CRTC content. This could be used for rotation, flipping, scaling and keystone correction. Under the hood, the code uses homogeneous coordinate transformations, implemented as a 3×3 matrix-vector multiplication. The user has to specify this transformation matrix in the appropriate xrandr command, as in:
xrandr --output VGA --transform 2,0,0,0,2,0,0,0,1
which scales the image down by a factor of 2. A more pragmatic use of this display transformation would be a keystone correction matrix, which transforms the distorted image of an incorrectly positioned projector to a perfect rectangle. It would seem that there is ample scope for the creation of more user-friendly interfaces to this functionality, though.
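The arithmetic behind such a transformation is straightforward. A minimal Python sketch (independent of xrandr itself) shows how a homogeneous matrix maps a pixel coordinate; note that whether a given matrix reads as "scale up" or "scale down" on screen depends on which direction the server applies the mapping:

```python
def apply_transform(matrix, x, y):
    """Apply a 3x3 homogeneous transform to the point (x, y)."""
    vec = (x, y, 1.0)
    tx, ty, tw = (sum(row[i] * vec[i] for i in range(3)) for row in matrix)
    return tx / tw, ty / tw   # the perspective divide

# The matrix from the xrandr example above, in row-major order:
scale = [[2, 0, 0],
         [0, 2, 0],
         [0, 0, 1]]

print(apply_transform(scale, 100, 50))   # (200.0, 100.0)
```

A keystone-correction matrix would simply put nonzero values in the bottom row, making the perspective divide do real work.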
Distinguishing between different types of screens can be done in RandR 1.3 with standard properties, such as output and signal types. RandR will require graphics drivers to implement some mandatory properties to claim RandR 1.3 support. Hopf added that "unknown" is a valid value, so initial support is trivial. Two of these mandatory properties are SignalFormat and ConnectorType. The former describes the physical protocol format, such as VGA, TMDS, LVDS, Composite, Composite-PAL, Composite-NTSC, Composite-SECAM, SVideo, Component or DisplayPort. The graphics driver changes this property when the underlying hardware indicates a protocol change, and X clients can change this property to select a protocol. The ConnectorType property is immutable, and can have one of VGA, DVI, DVI-I, DVI-A, DVI-D, HDMI, Panel, TV, TV-Composite, TV-SVideo, TV-Component, TV-SCART, TV-C4 or DisplayPort as its value. A presentation application can use this property to detect unambiguously which is the laptop display and which is the projector display.
Other, non-mandatory properties are SignalProperties, ConnectorNumber, EDID (formerly EDID_DATA, the raw EDID data from the monitor), CompatibilityList and CloneList. Many of these properties haven't been implemented by any driver yet. A final problem that cannot be solved in RandR 1.3 is the framebuffer size limitation. The culprit is the current XAA implementation: XAA calls don't get the pitch as an argument and assume it stays the same for the whole life of the X Server.
Multimedia processing extensions for X
Helge Bahmann, a research assistant at the Technical University of Freiburg in Germany, talked about his experimental multimedia processing extensions for the X Window System. At this moment, multimedia applications either bypass X (e.g. by DRI), or they use X as a video playback service for computed images (e.g. by XVideo). The network transparency for which X is famous fails in both cases. If you want to display video remotely, you have to be able to transmit compressed media data, and you have to synchronize audio and video. For this purpose, Bahmann introduced three new experimental X extensions.
The TIME extension, part of Bahmann's master's thesis in 2002, introduces two new server-side objects: Clocks and Schedules. An X client can start, stop and query the X server's clock. The client also submits commands to the server with execution and expiration timestamps, and the scheduler executes these requests at the appropriate time. This mechanism allows the application to schedule drawing requests (using the RENDER extension). It's important to note the (non-)obligations of the client and server: the X server doesn't guarantee the timely execution of the commands, and the client thus cannot rely on a created state. At the other end, the client can "change its mind": it can retract scheduled commands and it can replace them with completely different commands. Retracting and replacing can fail if the server has already started the execution.
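The contract described above (timestamped commands, no guarantee of timely execution, and retraction allowed only until execution begins) can be modeled in a few lines of Python. This is a toy sketch of the semantics only; the names are invented and nothing here corresponds to the actual TIME protocol:

```python
class Schedule:
    """Toy model of the scheduling contract: commands carry execution and
    expiration timestamps, the server may skip expired ones, and a client
    may retract a command only if it has not started executing."""

    def __init__(self):
        self.pending = {}    # cmd_id -> (execute_at, expire_at, command)
        self.started = set()

    def submit(self, cmd_id, execute_at, expire_at, command):
        self.pending[cmd_id] = (execute_at, expire_at, command)

    def retract(self, cmd_id):
        if cmd_id in self.started:
            return False          # too late: execution already began
        self.pending.pop(cmd_id, None)
        return True

    def run(self, now):
        executed = []
        for cmd_id, (execute_at, expire_at, command) in sorted(self.pending.items()):
            if execute_at <= now <= expire_at:   # expired commands are skipped
                self.started.add(cmd_id)
                executed.append(command)
        for cmd_id in self.started:
            self.pending.pop(cmd_id, None)
        return executed

sched = Schedule()
sched.submit(1, execute_at=10, expire_at=20, command="draw frame 1")
sched.submit(2, execute_at=15, expire_at=25, command="draw frame 2")
sched.retract(2)           # the client changes its mind in time
print(sched.run(now=12))   # only frame 1 runs
```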
The second extension Bahmann introduced is the AUDIO extension: it implements a SampleBuffer object, which is server-side storage for audio samples, equivalent to pixmaps for images. A PCMContext object can serve as a clock: a client can bind an execution scheduler to it, which allows operations to be executed synchronized to audio playback or capture. This also allows a simple synchronization between audio and video. The AUDIO extension spawns a dedicated real-time thread, but the rest of the X server is completely unaware of the thread. Because of the audio thread, the X server has to be linked to a thread-safe libc.
The TIME and AUDIO extensions are the basic infrastructure for multimedia processing, but as you have to deal with huge amounts of data, this will not work well on low-bandwidth networks. That's why Bahmann introduced a third X extension: COMPRESS, which is actually a misnomer because it uncompresses data. The ImageSequenceDecompressor and AudioSequenceDecompressor can receive and buffer individual compressed JPEG frames, and convert them into an uncompressed representation which can be processed by the X server. The client must submit compressed frame data and hence has to understand the compression format.
Summarizing his talk, Bahmann stressed that his X extensions are conceptually relatively simple and reuse existing X functionality (such as communication, client/resource management and security) without duplicating it. A drawback for the programmer is that these multimedia extensions are very low-level and hence not at all easy to use. The client application has a big responsibility: it has to understand, parse and partition the media data, plan ahead, submit compressed data and schedule commands, and handle synchronization. It also needs to backtrack and reschedule commands, for example when the window size changes. Bahmann warns that this can be very complex to implement.
Bahmann calls his extensions "experimental" because they work for him but probably require diving into the code if you want to use them. Audio, timing and synchronization basically work, and the protocol part of the compression is finished. However, the backend interface for plugging in new decompressors is still in flux. But all in all, it works.
While implementing and thinking about these extensions, Bahmann encountered some deficiencies of the current design of the X server. There's the security problem of audio/image decompressors in a process running with root privileges. Knowing the bad security track record of media players such as VLC and MPlayer, one should be cautious with such complex code running with root privileges. Bahmann is currently auditing the decompressors, and that's the principal reason why there are so few codecs available. Secondly, the compute-intensive decompression operations of the COMPRESS extension may stall the X server. Bahmann suggests to give up the current single-threaded design of the X server, but that is an idea which has not been accepted by the X development community in the past. In the absence of multiple threads, decompression must be handled carefully so as to avoid interfering with other X operations.
Novell and Red Hat ask community help in patent case
In any legal case, the conventional wisdom is that you should say nothing outside of court. However, in the patent infringement case IP Innovation LLC et al vs. Red Hat Inc. et al, Red Hat and Novell have chosen to ask the free and open source software (FOSS) community for help in finding prior art — that is, evidence that might disprove the validity of the patents involved. The decision, says Rob Tiller, vice president and assistant general counsel, IP, for Red Hat, is a mixture of practicality, experimentation, and diplomatic relations.
The case was filed in United States District Court for the Eastern District of Texas, Marshall Division, on October 9, 2007. IP Innovation is a subsidiary of Acacia Technologies, which has filed over 239 infringement cases since 2003 — almost three times as many as any other company. The infringement claim alleges that Red Hat's and Novell's GNU/Linux products infringe U.S. Patent Numbers 5,072,412, 5,533,183, and 5,394,521. All three patents involve the use of graphical multiple or virtual workspaces on the desktop, a feature common to most popular desktops, including GNOME, KDE, and Xfce. See this LWN article from 2007 for more information on the complaint.
For the past sixteen months, the case has ground onwards. According to Tiller, the case is currently in the stage known as claim construction, in which the court attempts to define the patents and the terms they use. Now, with deadlines for submitting prior art approaching in March, Red Hat has posted a request for aid from the community on Post-Issue Peer to Patent. Specifically, the request is for prior art from before March 25, 1986, one year before the patents were filed.
This is not the first time such a request has been made. Last year, Barracuda Networks made a similar request to the community in its case against Trend Micro. Tiller, though, had apparently not heard of this earlier effort.
So far, the request has produced over 20 responses, as well as over 125 comments beneath the Slashdot link to it. If these responses prove valid, they could be used to discredit the patents on the grounds that they were either unoriginal or too obvious to have been granted, or that the claimed techniques had previously been invented elsewhere — two classic strategies in patent infringement cases.
As it happens, the request is a tactic agreed upon by both Novell and Red Hat. Both companies are represented in the case by the law firm Gibson, Dunn, and Crutcher, and are coordinating their defenses. "This whole thing is done with Novell's agreement," Tiller says, even though Red Hat was the company that actually filed the request on Post-Issue. Jim Lundberg, VP Legal at Novell, agrees: "Essentially, it is a joint effort by Novell and Red Hat. Both Red Hat and Novell are working closely together in defending against the patent litigation that was filed."
Neither explained why the request was posted to Post-Issue by Red Hat alone. However, given Novell's unpopularity with some of the community because of its agreements with Microsoft in the last few years, perhaps those involved calculated that a request from Red Hat alone would stand a better chance of receiving useful answers.
Certainly the idea seems to have been carefully calculated from every other angle before being implemented. Acknowledging the unusual nature of the request, Tiller says:
In the end, Novell and Red Hat decided that the advantages outweighed the potential hazards. "The principal reason," Lundberg says, "is because of the background and the input that [the community] can provide — the wealth of knowledge that exists out there."
A secondary reason for going public, says Tiller, speaking as a representative of the Red Hat legal department, is that the request is consistent with open source methodology and corporate image. By making the request, he hopes:
Part of that decision, Tiller adds, is to keep the contributions available to the public. "We considered carefully whether we should drop the information in a locked box where no one else could see it" — a possibility that might have prevented IP Innovation from being pre-warned of the evidence that the defendants might use. However, in the end, Tiller says, "we decided that would be contrary to the spirit of free and open source collaboration, that it would be important for people to see what others had done, and what others' insights were".
Still another consideration is that sorting through the replies for prior art will require extra work in the few weeks left before final submissions. But, although replies are still arriving, the preliminary results seem promising. "We believe that they've been helpful," Lundberg says tentatively.

Similarly, Tiller says, "In fact, we think there are likely to be a good deal of antecedents that have a bearing on the claim in these patents, so we expect to see a variety of ideas." At this point, though, he says, "We don't want to inhibit the discussion. We're looking for as many ideas as we can get."
Having already taken such a large and nontraditional step, Tiller and Lundberg seem reluctant to say much more about the case. Although Tiller admits that the Red Hat legal department is keeping other lawyers with FOSS expertise informed, he says:
That said, the defendants in the case sound reasonably optimistic. "We feel very strongly about our ability to prevail in this case," Lundberg says. "With the assistance of the open source community, we think that potential is even stronger."

Echoing these sentiments, Tiller concludes: "We feel we have good, strong defenses against the infringement claims already, but we really think that this is an opportunity for us to participate with the community, not only in opposing the particular entities that are attacking Linux here, but also in developing a body of experience that will guard both us and the community in the future."
Security
Book review: Nmap Network Scanning
Gordon "Fyodor" Lyon is the principal author of the network scanner Nmap, and his new book Nmap Network Scanning is its authoritative guide. Lyon has crafted a precise, readable resource that will serve both newcomers and experienced Nmap users well. Equal parts manual, network scanning textbook, history lesson, and field guide, the book is a detailed reference to what Nmap can do, an explanation of how and why it works, and instructions on how to best use it for maximum result.
For those unfamiliar with the tool, Nmap is a network scanner. It can detect and enumerate the active machines on a computer network -- local or the Internet at large -- identify which TCP and UDP ports are open, and, in most cases, determine what services are running on the open ports and what operating system the host itself is running. It performs this service by sending specially-tailored IP, ICMP, TCP, and other packets, then interpreting the results. At its simplest, Nmap sends a SYN packet asking to open a TCP connection addressed to a particular port. If something responds, there is a service running on the port. But Nmap does far more than that, utilizing nearly every flag ever defined in an RFC, and doing it -- in parallel -- to potentially thousands of ports on thousands of hosts. Nmap has more than one hundred command-line options; understanding them and how best to use them is the subject of Lyon's book.
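For the curious, the "is anything listening?" test at the heart of port scanning can be sketched in a few lines of Python. This performs a full TCP connect rather than Nmap's half-open SYN scan (which requires raw sockets and root privileges), but the open/closed verdict is the same:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a full TCP connection; Nmap's SYN scan stops after the
    SYN/SYN-ACK exchange, but reaches the same conclusion."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we control:
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

print(port_is_open("127.0.0.1", port))   # True: something is listening
server.close()
```

Nmap's real work, of course, lies in doing this in parallel across thousands of ports while also classifying filtered and ambiguous responses, which a naive connect loop cannot distinguish.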
Like Nmap itself, Nmap Network Scanning begins by addressing the most commonly used features, and explores more complex options later. As prelude, chapter one gives an overview of Nmap's features, introducing the concepts of port scanning, service and OS discovery, and basic usage examples. Chapter two explains how to get and install the code, including its status on various platforms, the Zenmap graphical user interface, community-created scripts, and finding updates to both the code and important data files.
The book then delves into Nmap usage itself, beginning with the fundamental functions: host discovery in chapter three, and port scanning in chapters four and five. The two topics do overlap, as TCP SYN and ACK scans are used to discover hosts as well as to discover ports. But Lyon has chosen to craft the initial chapters of the book so that they mimic the logic of Nmap itself, and host discovery is the first execution step in any Nmap command. This is no accident; Lyon explains Nmap's architecture as only its creator could: with real-world examples, he illustrates how separating host discovery from port scanning allows a professional security or penetration tester to take hours off of a large scan through careful planning. And he explains how some host discovery techniques (such as DNS) expose the user to discovery in exchange for speed, while others (such as ARP pings) give the opposite tradeoff.
Chapter four's discussion of port scanning explains the broad strokes of scanning TCP and UDP ports, lists the most common types of scan, and describes how Nmap distinguishes between open, closed, filtered, and ambiguous ports. Chapter five covers Nmap's port scanning techniques in detail. It describes the basic TCP and UDP scans, contrasts when different techniques produce different results, and explains less commonly used scans and when they are appropriate. Lyon provides thorough examples, including real-world scans the reader can execute, and hypothetical "case study" problems weighing the pros and cons of multiple approaches. Chapter six is a discussion of optimizing Nmap scan performance, centered on how to select the right scanning technique, the right scanning target, and the right timing options. Nmap scans can take a very long time if the wrong parameters are chosen, so mastering the variables is a valuable skill.
Chapter seven looks at the next step beyond port scanning: service and version detection, by which Nmap can determine what applications are running on open ports, and in many cases precisely which version. Chapter eight looks at operating system detection, which Nmap performs by sending a complex series of tests to the target machine, then comparing the resulting "fingerprint" to a database of known profiles. Chapter nine describes one of Nmap's newest features, the Nmap Scripting Engine (NSE). NSE is a Lua-based engine that allows constructing more complex scans and queries than the Nmap core can perform on its own. The chapter also provides a reference to the carefully-chosen suite of NSE scripts that ships with the current Nmap release.
Chapter ten explores how to use Nmap to perform two higher-level tasks: mapping out and bypassing firewall rules, and evading or defeating intrusion detection systems (IDSs). The text covers both general strategies, and sketches of popular firewall and IDS products on the market. Chapter eleven explores the other side of the coin, how to defend against Nmap scans, including detecting scans, blocking or slowing down scans, and misleading service and OS detection.
The remainder of the book is dominated by reference material. Chapter twelve introduces Zenmap, the official Nmap GUI client, including how it can benefit even experienced Nmap hackers. Chapter thirteen explains Nmap's output formats, including human-readable plaintext, machine-friendly XML, and "grepable" text. It also covers manipulating and transforming the XML format for use with other tools. Chapter fourteen describes Nmap's data files, including the version and OS detection databases, and support files used by NSE. Chapter fifteen is a comprehensive reference guide for Nmap, detailing all of the over 100 command line options. For further reference, appendix A contains the document type definition (DTD) for Nmap's XML output, and the introductory material includes a helpful reference of IP, TCP, UDP, and ICMP headers.
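The XML output format is the natural one for feeding other tools. As a taste of what such post-processing looks like, here is a Python sketch using only the standard library and a simplified fragment modeled on Nmap's output (the authoritative schema is the DTD in appendix A):

```python
import xml.etree.ElementTree as ET

# A simplified fragment modeled on Nmap's XML output format:
xml_report = """
<nmaprun>
  <host>
    <address addr="192.168.1.10"/>
    <ports>
      <port protocol="tcp" portid="22"><state state="open"/></port>
      <port protocol="tcp" portid="80"><state state="open"/></port>
      <port protocol="tcp" portid="23"><state state="closed"/></port>
    </ports>
  </host>
</nmaprun>
"""

root = ET.fromstring(xml_report)
for host in root.iter("host"):
    addr = host.find("address").get("addr")
    open_ports = [p.get("portid") for p in host.iter("port")
                  if p.find("state").get("state") == "open"]
    print(addr, open_ports)   # 192.168.1.10 ['22', '80']
```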
Documentation and more
Nmap Network Scanning is a thorough guide to Nmap itself, and a lesson in network scanning at no additional charge. If you are new to the subject, the educational material will help you fill in the gaps in your knowledge, from TCP flags and connection setup, to how firewalls determine which packets to stop and which to allow through to their destination. The inline examples explain how Nmap performs its scans (often with real, Internet-accessible URLs as the targets), but also how the user can and should interpret the results. Longer "Solutions" passages discuss more complex problems by presenting a case study of a broadly stated challenge (such as "find all of the servers on a network running an insecure or nonstandard application") and the steps in which Nmap can help home in on the answer. As the author shows, much of being good at network scanning is knowing what tests to perform, and how to decipher what those tests tell you.
The book is successful as a comprehensive manual, but Lyon makes it more than just documentation by infusing it with his experience. First, he is an experienced scanning and security expert, and in almost every section shares specific, real-world expertise about the good and bad points of the available scanning techniques under discussion. As he points out in the introductory material, when it comes to free software, experience is the only barrier to becoming an expert, and he shares his without reservation. For example, in addition to the predefined scan types, Nmap's --scanflags option allows you to define a custom set of TCP flags for your probe. The author presents an example where crafting a packet with both the SYN and FIN flags set will get by certain firewall configurations because the TCP RFC is ambiguous about how hosts should interpret certain combinations of flags.
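As a concrete illustration, such a probe can be assembled directly on the command line. The option names below are real Nmap options; the target is the Nmap project's own designated test host, the same kind of Internet-accessible target the book uses in its examples:

```shell
# Craft probes carrying both the SYN and FIN flags; some filters that
# match only plain SYN packets will let these through. Apart from the
# overridden flags, this behaves like a SYN scan of ports 1-100.
nmap -sS --scanflags SYNFIN -p 1-100 scanme.nmap.org
```

Interpreting the results of such a scan takes care: a host that answers a SYN/FIN probe the way it would answer a plain SYN is exactly the ambiguity in the TCP RFC that the author describes.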
Second, Lyon is the creator of Nmap, and while that does not automatically mean he would write a better book on the subject, he uses his background with the project to enhance the text. As noted earlier, he explains design decisions that affect how Nmap performs its scans and tests, and understanding why Nmap works the way it does is far better for the reader than simply understanding what it can and cannot do. For example, chapter nine describes why (unlike other services) detecting Skype requires multiple tests, and Lyon explains why Nmap implements Skype detection as an NSE script rather than building a single-purpose test into the service detection code.
He also draws on the history of the entire project to educate the reader. He includes background and discussion about scans and tests (such as the TCP FTP bounce scan) that are less and less useful every year as operating systems and application servers close old security vulnerabilities. He notes changes in the code, such as the 2006 rewrite of the OS detection module that enhances the program but obsoletes older OS detection fingerprints. And he explains how new and interesting scans (such as Gerhard Rieger's IP Protocol scan) were discovered and added to Nmap's arsenal. Finally, Lyon brings the perspective of an ongoing project lead to the book, encouraging and explaining the importance of participation in Nmap's development process -- from consulting the mailing list, to submitting OS detection fingerprints to the Nmap database, to properly documenting homemade NSE scripts.
Whether you are a port-scanning novice looking to learn Nmap, or a security professional looking for the definitive reference on the ubiquitous free software scanner, Nmap Network Scanning has something for you. The book is available from a variety of retailers; a current list, along with the best available price, can be found at http://nmap.org/book. There you can also read several sample chapters in a free online edition.
Brief items
Follow up: How to write a Linux virus
Blogger "foobar" has written a followup article on the How to write a Linux virus in 5 easy steps article, which was mentioned on LWN here. "Yesterday I published an article about How to write a Linux virus in 5 easy steps. There has been quite an overwhelming response for this. Within just a few hours this article became my most visited blog post ever. Wow! Just goes to show that either the article hit a real nerve, or the other articles on my blog are just really boring. :-)"
New vulnerabilities
asterisk: information disclosure
Package(s): asterisk
CVE #(s): CVE-2009-0041
Created: February 13, 2009
Updated: December 15, 2009
Description: IAX2 authentication in Asterisk provides different responses for non-existent accounts and password mismatches, allowing an attacker to determine whether specific accounts exist. See the Asterisk security report for details.
bind: validation bypass
Package(s): bind
CVE #(s): CVE-2009-0265
Created: February 16, 2009
Updated: March 9, 2009
Description: From the CVE entry: Internet Systems Consortium (ISC) BIND 9.6.0 and earlier does not properly check the return value from the OpenSSL EVP_VerifyFinal function, which allows remote attackers to bypass validation of the certificate chain via a malformed SSL/TLS signature, a similar vulnerability to CVE-2008-5077 and CVE-2009-0025.
dia: arbitrary code execution
Package(s): dia
CVE #(s): CVE-2008-5984
Created: February 17, 2009
Updated: December 9, 2009
Description: From the Mandriva advisory: Python has a variable called sys.path that contains all paths where Python loads modules by using import scripting procedure. A wrong handling of that variable enables local attackers to execute arbitrary code via Python scripting in the current dia working directory.
fail2ban: denial of service
Package(s): fail2ban
CVE #(s): CVE-2009-0362
Created: February 16, 2009
Updated: February 18, 2009
Description: From the CVE entry: filter.d/wuftpd.conf in Fail2ban 0.8.3 uses an incorrect regular expression that allows remote attackers to cause a denial of service (forced authentication failures) via a crafted reverse-resolved DNS name (rhost) entry that contains a substring that is interpreted as an IP address, a different vulnerability than CVE-2007-4321.
gedit: arbitrary code execution via Python scripts
Package(s): gedit
CVE #(s): CVE-2009-0314
Created: February 16, 2009
Updated: March 31, 2009
Description: From the Mandriva advisory: Python has a variable called sys.path that contains all paths where Python loads modules by using import scripting procedure. A wrong handling of that variable enables local attackers to execute arbitrary code via Python scripting in the current gedit working directory.
libpam-krb5: multiple vulnerabilities
Package(s): libpam-krb5
CVE #(s): CVE-2009-0360 CVE-2009-0361
Created: February 12, 2009
Updated: March 26, 2009
Description: Two vulnerabilities have been found in the Kerberos PAM module. From the Debian alert:

CVE-2009-0360: Russ Allbery discovered that the Kerberos PAM module parsed configuration settings from environment variables when run from a setuid context. This could lead to local privilege escalation if an attacker points a setuid program using PAM authentication to a Kerberos setup under her control.

CVE-2009-0361: Derek Chan discovered that the Kerberos PAM module allows reinitialisation of user credentials when run from a setuid context, resulting in potential local denial of service by overwriting the credential cache file or to privilege escalation.
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): CVE-2009-0499 CVE-2009-0500 CVE-2009-0501 CVE-2009-0502
Created: February 13, 2009
Updated: June 25, 2009
Description: From the CVE entries:

Cross-site request forgery (CSRF) vulnerability in the forum code in Moodle 1.7 before 1.7.7, 1.8 before 1.8.8, and 1.9 before 1.9.4 allows remote attackers to delete unauthorized forum posts via a link or IMG tag to post.php. (CVE-2009-0499)

Cross-site scripting (XSS) vulnerability in course/lib.php in Moodle 1.6 before 1.6.9, 1.7 before 1.7.7, 1.8 before 1.8.8, and 1.9 before 1.9.4 allows remote attackers to inject arbitrary web script or HTML via crafted log table information that is not properly handled when it is displayed in a log report. (CVE-2009-0500)

Unspecified vulnerability in the Calendar export feature in Moodle 1.8 before 1.8.8 and 1.9 before 1.9.4 allows attackers to obtain sensitive information and conduct "brute force attacks on user accounts" via unknown vectors. (CVE-2009-0501)

Cross-site scripting (XSS) vulnerability in blocks/html/block_html.php in Snoopy 1.2.3, as used in Moodle 1.6 before 1.6.9, 1.7 before 1.7.7, 1.8 before 1.8.8, and 1.9 before 1.9.4, allows remote attackers to inject arbitrary web script or HTML via an HTML block, which is not properly handled when the "Login as" feature is used to visit a MyMoodle or Blog page. (CVE-2009-0502)
net-snmp: restriction bypass
Package(s): net-snmp
CVE #(s): CVE-2008-6123
Created: February 17, 2009
Updated: June 3, 2010
Description: From the CVE entry: The netsnmp_udp_fmtaddr function (snmplib/snmpUDPDomain.c) in net-snmp 5.0.9 through 5.4.2, when using TCP wrappers for client authorization, does not properly parse hosts.allow rules, which allows remote attackers to bypass intended access restrictions and execute SNMP queries, related to "source/destination IP address confusion."
php5: multiple vulnerabilities
Package(s): php5
CVE #(s): CVE-2008-5557 CVE-2008-5624 CVE-2008-5658 CVE-2007-5625
Created: February 13, 2009
Updated: February 23, 2010
Description: From the Ubuntu advisory:

It was discovered that PHP did not properly handle Unicode conversion in the mbstring extension. If a PHP application were tricked into processing a specially crafted string containing an HTML entity, an attacker could execute arbitrary code with application privileges. (CVE-2008-5557)

It was discovered that PHP did not properly initialize the page_uid and page_gid global variables for use by the SAPI php_getuid function. An attacker could exploit this issue to bypass safe_mode restrictions. (CVE-2008-5624)

It was discovered that PHP did not properly enforce error_log safe_mode restrictions when set by php_admin_flag in the Apache configuration file. A local attacker could create a specially crafted PHP script that would overwrite arbitrary files. (CVE-2007-5625)

It was discovered that PHP contained a flaw in the ZipArchive::extractTo function. If a PHP application were tricked into processing a specially crafted zip file that had filenames containing "..", an attacker could write arbitrary files within the filesystem. This issue only applied to Ubuntu 7.10, 8.04 LTS, and 8.10. (CVE-2008-5658)
python-fedora: privilege escalation
Package(s): python-fedora
CVE #(s): (none)
Created: February 13, 2009
Updated: February 18, 2009
Description: From the Fedora advisory: This release includes a bugfix to the fedora.client.AccountSystem().verify_password() method. verify_password() was incorrectly returning True (username, password combination was correct) for any input. Although no known code is using this method to verify a user's account with the Fedora Account System, the existence of the method and the fact that anyone using this would be allowing users due to the bug makes this a high priority bug to fix.
squidGuard: access restriction bypass
Package(s): squidguard
CVE #(s): (none)
Created: February 13, 2009
Updated: February 18, 2009
Description: The Red Hat bugzilla notes a "trailing dot" domain access restriction bypass in squidGuard.
websvn: access violation
Package(s): websvn
CVE #(s): CVE-2009-0240
Created: February 16, 2009
Updated: March 9, 2009
Description: From the Debian advisory: Bas van Schaik discovered that WebSVN, a tool to view Subversion repositories over the web, did not properly restrict access to private repositories, allowing a remote attacker to read significant parts of their content.
Page editor: Jonathan Corbet
Kernel development
Brief items
Kernel release status
The current 2.6 development kernel is 2.6.29-rc5, released on February 13. It has some driver updates and a lot of fixes. "So go out and test the heck out of it, because I'm going to spend the three-day weekend drunk at the beach. Because somebody has to do it." See the full changelog for all the details.
The current stable 2.6 kernel is 2.6.28.6, released (along with 2.6.27.18) on February 17. Both contain a long list of fixes for a variety of problems.
Previously, 2.6.28.5 and 2.6.27.16 were released on February 12. 2.6.27.17 was rushed out moments afterward with a fix to an "instant oops" problem on some laptops.
Kernel development news
Quotes of the week
From wakelocks to a real solution
Last week's article on wakelocks described a suspend-inhibiting interface which derives from the Android project and the hostile reaction that interface received. Since then, the discussion has continued in two separate threads. Kernel developers, like engineers everywhere, are problem solvers, so the discussion has shifted away from criticism of wakelocks and toward the search for an acceptable solution. As of this writing, that solution does not exist, but we have learned some interesting things about the problem space.

Getting Linux power management to work well has been a long, drawn-out process, much of which involves fixing device drivers and applications, one at a time. There is also a lot of work which has gone into ensuring that the CPU remains in an idle state as much as possible. One of the reasons that some developers found the wakelock interface jarring was that the Android developers chose a different approach to power management. Rather than minimize power consumption at any given time, the Android code simply tries to suspend the entire device whenever possible. There are a couple of reasons for this approach, one of which we will get to below.
But we'll start with a very simple reason why Android goes for the "suspend the entire world" solution: because they can. The hardware that Android runs on, like many embedded systems (but unlike most x86-based systems), has been designed to suspend and resume quickly. So the Android developers see no reason to do things any other way. But that leads to comments like this one from Matthew Garrett:
A solution that's focused on powering down as much unused hardware as possible regardless of the system state benefits the x86 world as well as the embedded world, so I think there's a fairly strong argument that it's a better solution than one requiring an explicit system state change.
Matthew also notes that it's possible to solve the power management problem without fully suspending the system; he gives the Nokia tablets as an example of a successful implementation which uses finer-grained power management.
That said, it seems clear that the full-suspend approach to power management is not going to go away. Some hardware is designed to work best that way, so Linux needs to support that mode of operation. So there has been some talk about how to design wakelocks in a way which fits better into the kernel as a whole. On the kernel side, there is some dispute as to whether the wakelock mechanism is needed at all; drivers can already inhibit an attempt by the kernel to suspend the system. But there is some justice to the claim that it's better if the kernel knows it can't suspend the system without having to poll every driver.
One simple solution, proposed by Matthew, would be a simple pair of functions: inhibit_suspend() and uninhibit_suspend(). On production systems, they would manipulate an atomic counter; when the counter is zero, the system can be suspended. These functions could take a device structure as an argument; debugging versions could then track which devices are blocking a suspend at any given time. The user-space equivalent could be a file like /dev/inhibit_suspend; as long as at least one process holds that file open, the system will continue to run. All told, it looks like a simple API without many of the problems seen in the wakelock code.
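In userspace terms, the proposed mechanism amounts to little more than the following sketch. Only the two function names come from the discussion; the atomic-counter implementation shown here is an assumption about what a production version might look like:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model of Matthew Garrett's proposed API: a single
 * atomic counter of "reasons not to suspend". A debugging version
 * could additionally record which device took each reference. */
static atomic_int suspend_inhibitors;

void inhibit_suspend(void)
{
        atomic_fetch_add(&suspend_inhibitors, 1);
}

void uninhibit_suspend(void)
{
        atomic_fetch_sub(&suspend_inhibitors, 1);
}

/* The suspend path consults the counter; zero means nobody objects. */
bool can_suspend(void)
{
        return atomic_load(&suspend_inhibitors) == 0;
}
```

The appeal of this design is its simplicity: there is no per-lock state to leak or time out, and the user-space side reduces to holding a file descriptor open.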
There were a few complaints from the Android side, but the biggest sticking point appears to be over timeouts. The wakelock API implements an automatic timeout which causes the "lock" to go away after a given time. There appear to be a few reasons for the existence of the timeouts:
- Since not all drivers use the wakelock API, timeouts are required to
prevent suspending the system while those drivers are running. The
proposed solution to this one is to instrument all of the drivers
which need to keep the system running. Once an acceptable API is
merged into the kernel, drivers can be modified as needed.
- If a process holding a wakelock dies unexpectedly, the timeout will
keep the system running while the watchdog code restarts the faulting
process. The problem here is that timeouts encode a recovery policy
in the kernel and do little to ensure that operation is actually
correct. What has been proposed instead is that the user-space
"inhibit suspend" policy be encapsulated into a separate daemon which
would make the decisions on when to keep the system awake.
- User-space applications may simply screw up and forget to allow the system to suspend.
The final case above is also used as an argument for the full-suspend approach to power management. Even if an ill-behaved application goes into a loop and refuses to quit, the system will eventually suspend and save its battery anyway. This is an argument which does not fly particularly well with a lot of kernel developers, who respond that, rather than coding the kernel to protect against poor applications, one should simply fix those applications. Arjan van de Ven points out that, since the advent of PowerTop, the bulk of the problems with open-source applications have been fixed.
In this space, though, it is harder to get a handle on all of these problems. Brian Swetland describes the situation this way:
- carrier deploys a device
- carrier agrees to allow installation of arbitrary third party apps without some horrible certification program requiring app authors to jump through hoops, wait ages for approval, etc
- users rejoice and install all kinds of apps
- some apps are poorly written and impact battery life
- users complain to carrier about battery life
Matthew also acknowledges the problem:
It is a real problem, but it still is not at all clear that attempts to fix such problems in the kernel are advisable - or that they will be successful in the end. Ben Herrenschmidt offers a different solution: a daemon which monitors application behavior and warns the user when a given application is seen to be behaving badly. That would at least let users know where the real problem is. But it is, of course, no substitute for the real solution: run open-source applications on the phone so that poor behavior can be fixed by users if need be.
The Android platform is explicitly designed to enable proprietary applications, though. It may prove to be able to attract those applications in a way which standard desktop Linux has never quite managed to do. So some sort of solution to the problem of power management in the face of badly-written applications will need to be found. The Android developers like wakelocks as that solution for now, but they also appear to be interested in working with the community to find a more globally-acceptable solution. What that solution will look like, though, is unlikely to become clear without a lot more discussion.
Getting the measure of ksize()
One of the lesser-known functions supported by the kernel's memory management code is ksize(); given a pointer to an object allocated with kmalloc(), ksize() will return the size of that object. This function is not often needed; callers to kmalloc() usually know what they allocated. It can be useful, though, in situations where a function needs to know the size of an object and does not have that information handy. As it happens, there are other potential uses for ksize(), but there are traps as well.

Users of ksize() in the mainline kernel are rare. Until 2008, the main user was the nommu architecture code, which was found to be using ksize() in a number of situations where that use was not appropriate. The result was a cleanup of the nommu code and the un-exporting of ksize() in an attempt to prevent that sort of situation from coming about again.
Happiness prevailed until recently; the 2.6.29-rc5 kernel includes a patch to the crypto code which makes use of ksize() to ensure that crypto_tfm structures are completely wiped of sensitive data before being returned to the system. The lack of an export for ksize() caused the crypto code to fail when built as a module, so Kirill Shutemov posted a patch to export it. That's when the discussion got interesting.
There was resistance to restoring the export for ksize(); the biggest problem would appear to be that it's an easy function to use incorrectly. It is only really correct to call ksize() with a pointer obtained from kmalloc(), but programmers seem to find themselves tempted to use it on other types of objects as well. This situation is not helped by the fact that the SLAB and SLUB memory allocators work just fine if any slab-allocated memory object is passed to ksize(). The SLOB allocator, instead, is not so accommodating. An explanation of this situation led to some complaints from Andrew Morton:
[...]
Gee this sucks. Biggest mistake I ever made. Are we working hard enough to remove some of these sl?b implementations? Would it help if I randomly deleted a couple?
Thus far, no implementations have been deleted; indeed, it appears that the SLQB allocator is headed for inclusion in 2.6.30. The idea of restricting access to ksize() has also not gotten very far; the export of this function was restored for 2.6.29-rc5. In the end, the kernel is full of dangerous functions - such is the nature of kernel code - and it is not possible to defend against any mistake which could be made by kernel developers. As Matt Mackall put it, this is just another basic mistake:
There is another potential reason to keep this function available: ksize() may prove to have a use beyond freeing developers from the need to track the size of allocated objects. One poorly-kept secret about kmalloc() is that it tends to allocate objects which are larger than the caller requests. A quick look at /proc/slabinfo will (with the right memory allocator) reveal a number of caches with names like kmalloc-256. Whenever a call to kmalloc() is made, the requested size will be rounded up to the next slab size, and an object of that size will be returned. (Again, this is true for the SLAB and SLUB allocators; SLOB is a special case).
This rounding-up results in a simpler and faster allocator, but those benefits are gained at the cost of some wasted memory. That is one of the reasons why it makes sense to create a dedicated slab for frequently-allocated objects. There is one interesting allocation case which is stuck with kmalloc(), though, for DMA-compatibility reasons: SKB (network packet buffer) allocations.
An SKB is typically sized to match the maximum transfer size for the intended network interface. In an Ethernet-dominated world, that size tends to be 1500 bytes. A 1500-byte object requested from kmalloc() will typically result in the allocation of a 2048-byte chunk of memory; that's a significant amount of wasted RAM. As it happens, though, the network developers really need the SKB buffer to not cross page boundaries, so there is generally no way to avoid that waste.
But there may be a way to take advantage of it. Occasionally, the network layer needs to store some extra data associated with a packet; IPSec, it seems, is especially likely to create this type of situation. The networking layer could allocate more memory for that data, or it could use krealloc() to expand the existing buffer allocation, but both will slow down the highly-tuned networking core. What would be a lot nicer would be to just use some extra space that happened to be lying around. With a buffer from kmalloc(), that space might just be there. The way to find out, of course, is to use ksize(). And that's exactly what the networking developers intend to do.
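The arithmetic can be sketched in a userspace model. This is only an illustration: the real size classes depend on the allocator, configuration, and architecture, and model_ksize() is a hypothetical stand-in for the kernel's ksize():

```c
#include <stddef.h>

/* Hypothetical model of SLAB/SLUB behavior: kmalloc() rounds a
 * request up to the next power-of-two size class (32 bytes assumed
 * to be the smallest here), and ksize() reports that real size. */
static size_t model_ksize(size_t requested)
{
        size_t slot = 32;

        while (slot < requested)
                slot <<= 1;
        return slot;
}
```

In this model, a 1500-byte SKB request lands in the 2048-byte class, leaving 548 bytes of slack that the networking code could claim for per-packet metadata without a call to krealloc().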
Not everybody is convinced that this kind of trick is worth the trouble. Some argue that the extra space should be allocated explicitly if it will be needed later. Others would like to see some benchmarks demonstrating that there is a real-world benefit from this technique. But, in the end, kernel developers do appreciate a good trick. So ksize() will be there should this kind of code head for the mainline in the future.
Interview: the return of the realtime preemption tree
The realtime preemption project is a longstanding effort to provide deterministic response times in a general-purpose kernel. Much code resulting from this work has been merged into the mainline kernel over the last few years, and a number of vendors are shipping commercial products based upon it. But, for the last year or so, progress toward getting the rest of the realtime work into the mainline has slowed.

On February 11, realtime developers Thomas Gleixner and Ingo Molnar resurfaced with the announcement of a new realtime preemption tree and a newly reinvigorated development effort. Your editor asked them if they would be willing to answer a few questions about this work; their response went well beyond the call of duty. Read on for a detailed look at where the realtime preemption tree stands and what's likely to happen in the near future.
LWN: The 2.6.29-rc4-rt1 announcement notes that you're coming off a 1.5-year sabbatical. Why did you step away from the RT patches so long; have you been hanging out on the beach in the mean time? :)
Seriously, we underestimated the amount of work which was necessary to bring the unified x86 architecture into shape. Nothing to complain about; it definitely was and still is a worthwhile effort and I would not hesitate longer than a fraction of a second to do it again.
Ingo: Yeah, hanging out on the beach for almost two years was well-deserved for both of us. We met Linus there and it was all fun and laughter, with free beach cocktails, pretty sunsets and camp fires. [ All paid for by the nice folks from Microsoft btw., - those guys sure know how to please a Linux kernel hacker! ;-) ]
So what has brought you back to the realtime work at this time?
The most important reason for returning was of course our editor's challenge in The Grumpy Editor's guide to 2009: "The realtime patch set will be mostly merged by the end of the year..."
Ingo: When we left for the x86 land more than 1.5 years ago, the -rt patch-queue was a huge pile of patches that changed hundreds of critical kernel files and introduced/touched ten thousand new lines of code. Fast-forward 1.5 years and the -rt patchqueue is a humungous pile of patches that changes nearly a thousand critical kernel files and introduces/touches twenty-thirty thousand lines of code. So we thought that while the project is growing nicely, it is useful and obviously people love it - the direction of growth was a bit off and that this particular area needs some help.
Initially it started as a thought experiment of ours: how much time and effort would it take to port the most stable -rt patch (.26-rt15) to the .29-tip tree and could we get it to boot? Turns out we are very poor at thought experiments (just like we are pretty bad at keeping patch queues small), so we had to go and settle the argument via some hands-on hacking. Porting the queue was serious fun, it even booted after a few dozen fixes, and the result was the .29-rt1 release.
Maintaining the x86 tree for such a long time and doing many difficult conceptual modernizations in that area was also very helpful when porting the -rt patch-queue to latest mainline.
Most of the code it touched and most of the conflicts that came up looked strangely familiar to us, as if those upstream changes went through our trees =;-)
(It's certainly nothing compared to the beach experience though, so we are still looking at returning for a few months to a Hawaii cruise.)
How well does the realtime code work at this point? What do you think are the largest remaining issues to be tackled?
Ingo: To me what settled quite a bit of "do we need -rt in mainline" questions were the spin-mutex enhancements it got. Prior to that there were a handful of pretty pathologic workload scenarios where -rt performance tanked over mainline. With that it's all pretty comparable.
The patch splitup and patch quality has improved too, and the queue we ported actually builds and boots at just about every bisection point, so it's pretty usable. A fair deal of patches fell out of the .26 queue because they went upstream meanwhile: tracing patches, scheduler patches, dyntick/hrtimer patches, etc.
It all looks a lot less scary now than it looked 1.5 years ago - albeit the total size is still considerable, so there's definitely still a ton of work with it.
What are your current thoughts with regard to merging this work into the mainline?
Ingo: IMO the key thought here is to move the -rt tree 'ahead of the upstream development curve' again, and to make it the frontier of Linux R&D. With a 2.6.26 basis that was arguably hard to do. With a partly-2.6.30 basis (which the -tip tree really is) it's a lot more ahead of the curve, and there are a lot more opportunities to merge -rt bits into upstream bits wherever there's accidental upstream activity that we could hang -rt related cleanups and changes onto. We jumped almost 4 full kernel releases, that moves -rt across 1 year worth of upstream development - and keeps it at that leading edge.
Another factor is that most of the top -rt contributors are also -tip contributors so there's strong synergy.
The -tip tree also undergoes serious automated stabilization and productization efforts, so it's a good basis for development _and_ for practical daily use. For example there were no build failures reported against .29-rt1, and most of the other failures that were reported were non-fatal as well and were quickly fixed. One of the main things we learned in the past 1.5 years was how to keep a tree stable against a wild, dangerous looking flux of modifications.
YMMV ;-)
Thomas once told me about a scheme to patch rtmutex locks into/out of the kernel at boot time, allowing distributors to ship a single kernel which can run in either realtime or "normal" mode. Is that still something that you're working on?
Ingo: That still sounds like an interesting feature, but it's pretty hard to pull it off. We used to have something rather close to that, a few years ago: a runtime switch that turned the rtmutex code back into spinning code. It was fragile and hard to maintain and eventually we dropped it.
Ideally it should be done not at boot time but runtime - via the stop-machine-run mechanism or so. [extended perhaps with hibernation bits that force each task into hitting user-mode, so that all locks in the system are released]
It's really hard to implement it, and it is definitely not for the faint hearted.
The RT-preempt code would appear to be one of the biggest exceptions to the "upstream first" rule, which urges code to be merged into the mainline before being shipped to customers. How has that worked out in this case? Are there times when it is good to keep shipping code out of the mainline for such a long time?
Thomas: All changes which are user space API related (e.g. PI futexes) were merged into mainline before they got shipped to customers via preempt-rt, and all bug fixes and improvements of mainline code were sent upstream immediately. Preempt-rt was never a detached project which did not care about mainline.
When we started preempt-rt there was huge demand on the customer side - both enterprise and embedded - for an in-kernel realtime solution. The dual kernel approaches of RTAI, RT-Linux and Xenomai never had a chance of being accepted into the mainline, and handling a dual kernel environment has never been an easy task. With preempt-rt you just switch the kernel under a stock mainline user space environment and voilà, your application behaves as you would expect - most of the time :) Dual kernel environments require different libraries and different APIs, and you cannot run the same binary on a non-rt-enabled kernel. Debugging preempt-rt based real time applications is exactly the same as debugging non real time applications.
While we never had doubts that it would be possible to turn Linux into a real time OS, it was clear from the very beginning that it would be a long way until the last bits and pieces got merged. The first question Ingo asked me when I contacted him in the early days of preempt-rt was: "Are you sure that you want to touch every part of the kernel while working on preempt-rt?". This question was absolutely legitimate; in the first days of preempt-rt we really touched every part of the kernel due to problems which were mostly locking and preemption related. The fixes have been merged upstream, and especially in the locking area we got a huge improvement in mainline due to lock debugging, conversion to mutexes, etc. and a generally better awareness of locking and preemption semantics.
preempt-rt was always a great breeding ground for fundamental changes in the kernel, and so far quite a large part of the preempt-rt development has been integrated into the mainline: PI-futexes, high-resolution timers ... I hope we can keep that up and soon provide more interesting technological changes which originally emerged from the preempt-rt effort.
Ingo: Preempt-rt turns the kernel's scheduling, lock handling and interrupt handling code upside down, so there was no realistic way to merge it all upstream without having had some actual field feedback. It is also unique in that you need _all_ those changes to have the new kernel behavior - there's no real gradual approach to the -rt concept itself. That adds up to a bit of a catch-22: you don't get it upstream without field use, and you don't get field use without it being upstream.
Deterministic execution is a major niche, one which was not effectively covered by the mainstream kernel before. It's perhaps the last major technological niche in existence that the stock upstream kernel does not handle yet, and it's no wonder that the last one standing is in that precise position for conceptually hard reasons.
In short: all the easy technologies are upstream already ;-)
Nevertheless we strictly got all user-ABI changes upstream first: PI-futexes in particular. The rest of -rt is "just" a new kernel option that magically turns kernel execution into deterministic mode.
Where would be the best starting point for a developer who wishes to contribute to this effort?
Ingo: Beyond the "try it yourself, follow the discussions, and go wherever your heart tells you to go" suggestion, there's a few random areas that might need more attention:
- Big Kernel Lock removal. It's critical for -rt. We still have the
tip:core/kill-the-BKL branch, and if someone is interested it would
be nice to drive that effort forward. A lot of nice help-zap-the-BKL
patches went upstream recently (such as the device-open patches), so
we are in a pretty good position to try the kill-the-BKL final hammer
approach too.
[I have just done a (raw!) refresh and conflict resolution merge of that tree to v2.6.29-rc5. Interested people can find it at:
git pull \
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git \
core/kill-the-BKL

Warning: it might not even build. ]

- Look at Steve's git-rt tree and split out and gradually merge bits. A
fair deal of stuff has been cleaned up there and it would be nice to
preserve that work.
- Latency measurements and tooling. Go try the latency tracer, the
function graph tracer and ftrace in general. Try to find delays in
apps caused by the kernel (or caused by the app itself), and think
about whether the kernel's built-in tools could be improved.
- Try Thomas's cyclictest utility and try to trace and improve those
worst-case latencies. A nice target would be to push the worst-case
latencies on a contemporary PC below 10 microseconds. We were down to
about 13 microseconds with a hack that threaded the timer IRQ with
.29-rt1, so it's possible to go below 10 microseconds I think.
- And of course: just try to improve the mainline kernel - that will improve the -rt kernel too, by definition :-)
But as usual, follow your own path. Independent, critical thinking is a lot more valuable than follow-the-crowd behavior. [As long as it ends up producing patches (not flamewars) that is ;-)]
And by all means, start small and seek feedback on lkml early and often. Being a good and useful kernel developer is not an attribute but a process, and good processes always need time, many gradual steps and a feedback loop to thrive.
Many thanks to Thomas and Ingo for taking the time to answer (in detail!) this long list of questions.
Patches and updates
Kernel trees
Architecture-specific
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
A look at package repository proxies
For simplicity's sake, I keep all of my general-purpose boxes running the same Linux distribution. That minimizes conflicts when sharing applications and data, but every substantial upgrade means downloading the same packages multiple times — taking a toll on bandwidth. I used to use apt-proxy to intelligently cache downloaded packages for all the machines to share, but there are alternatives: apt-cacher, apt-cacher-ng, and approx, as well as options available for RPM-based distributions. This article will take a look at some of these tools.
The generic way
Since Apt and RPM use HTTP to move data, it is possible to speed up
multiple updates simply by using a caching Web proxy like Squid. A transparent
proxy sitting between your LAN clients and the Internet requires no changes
to the client machines; otherwise you must configure Apt and RPM to use the
proxy, just as you must configure your Web browser to redirect its
requests. In each case, a simple change in the appropriate configuration
file is all that is required: /etc/apt/apt.conf.d/70debconf or
/etc/rpmrc, for example.
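As a concrete sketch (the proxy address 192.168.1.1:3128 used below is a hypothetical example for your network), pointing Apt at a Squid proxy takes a single directive:

```
# /etc/apt/apt.conf.d/70debconf -- assumes a Squid instance at 192.168.1.1:3128
Acquire::http::Proxy "http://192.168.1.1:3128/";
```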
Although straightforward, this technique has its drawbacks. First, a Web proxy will not recognize that two copies of a package retrieved from different URLs are identical, undermining the process for RPM-based distributions like Fedora, where the Yum update tool incorporates built-in mirroring.
Secondly, using the same cache for packages and all other HTTP traffic risks overflowing the cache. Very large upgrades — such as changing releases rather than individual package updates — can fill up the cache used by the proxy, and downloaded packages can get pushed out of the way by web traffic if your LAN upgrade process takes too much time. It is better to keep software updates and general web traffic separate.
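One way to keep them separate, sketched here with hypothetical paths and sizes, is to run a second Squid instance dedicated to package traffic, with a roomy cache and refresh patterns that hold onto .deb and .rpm files much longer than ordinary web objects:

```
# Hypothetical squid.conf fragment for a package-only proxy instance
cache_dir ufs /var/spool/squid-packages 20000 16 256   # 20GB package cache
maximum_object_size 512 MB                             # accept large packages
# Keep package files for up to 90 days (min/percent/max, in minutes)
refresh_pattern \.deb$  129600 100% 129600
refresh_pattern \.rpm$  129600 100% 129600
```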
Apt-proxy versus apt-cacher
The grand-daddy of the Apt caching proxies is apt-proxy. The current revision is written in Python and uses the Twisted framework. Complaints about apt-proxy's speed, memory usage, and stability spawned the creation of apt-cacher, a Perl-and-cURL based replacement that can run either as a stand-alone daemon or as a CGI script on a web server. Both operate by running as a service and accepting incoming Apt connections from client machines on a high-numbered TCP port: 9999 for apt-proxy, 3142 for apt-cacher.
Apt-proxy is configured in the file /etc/apt-proxy/apt-proxy-v2.conf. In this file, one sets up a section for each Apt repository that will be accessed by any of the machines using the proxy service. The syntax requires assigning a unique alias to each section along with listing one or more URLs for each repository. On each client machine, one must change the repository information in /etc/apt/sources.list, altering each line to point to the apt-proxy server and the appropriate section alias that was assigned in /etc/apt-proxy/apt-proxy-v2.conf.
For example, consider an apt-proxy server running on 192.168.1.100. If the original repository line in a client's sources.list is:
deb http://archive.ubuntu.com/ubuntu/ intrepid main
It would instead need to read:
deb http://192.168.1.100:9999/ubuntubackend intrepid main
The new URL points to the apt-proxy server on 192.168.1.100, port 9999,
and to the section configured with the alias ubuntubackend.
The apt-proxy-v2.conf file would contain an entry such as:
[ubuntubackend]
backends = http://archive.ubuntu.com/ubuntu/
If you find that syntax confusing, you are not alone. Apt-proxy requires detailed configuration on both the server and client sides: it forces you to invent aliases for all existing repositories, and to edit every repository line in every client's sources.list.
Apt-cacher is notably simpler in its configuration. Although there are
a swath of options available in apt-cacher's server configuration file
/etc/apt-cacher/apt-cacher.conf, the server does not
need to know about all of the upstream Apt repositories that clients
will access. Configuring the clients is enough to establish a working
proxy. On the client side, there are two options: either rewrite
the URLs of the repositories in each client's sources.list, or activate
Apt's existing proxying in /etc/apt/apt.conf. But
choose one or the other; you cannot do both.
To rewrite entries in sources.list, one merely prepends the address of the apt-cacher server to the URL. So
deb http://archive.ubuntu.com/ubuntu/ intrepid main
becomes:
deb http://192.168.1.100:3142/archive.ubuntu.com/ubuntu/ intrepid main
Alternatively, leave the sources.list untouched, and edit apt.conf, inserting the line:
Acquire::http::Proxy "http://192.168.1.100:3142/";
Ease of configuration aside, the two tools are approximately equal under basic LAN conditions. Apt-cacher does offer more options for advanced usage, including restricting access to specific hosts, logging, rate-limiting, and cache maintenance. Both tools allow importing existing packages from a local Apt cache into the cache shared by all machines.
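For illustration, a few of those advanced knobs as they might appear in /etc/apt-cacher/apt-cacher.conf (option names are taken from apt-cacher's shipped documentation; verify them against your installed version):

```
# Hypothetical apt-cacher.conf fragment
daemon_port = 3142
allowed_hosts = 192.168.1.0/24   # only LAN clients may use the proxy
limit = 250k                     # rate-limit upstream downloads
clean_cache = 1                  # enable periodic cache cleanup
```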
Much of the criticism of the tools observed on mailing lists or web forums revolves around failure modes, for example whether Twisted or cURL is more reliable as a network layer. But there are telling discussions from experienced users of both that highlight differences you would rather not experience firsthand.
For example, this discussion includes a description of how apt-proxy's simplistic cache maintenance can lose a cached package: If two clients download different versions of the same package, the earlier downloads will expire from the cache because apt-proxy does not realize that keeping both versions is desirable. If you routinely test unstable packages on one but not all of your boxes, such a scenario could bite you.
Other tools for Apt
Although apt-proxy and apt-cacher get the most attention, they are not the only options.
Approx is intended as a replacement for apt-proxy, written in Objective Caml and placing an emphasis on simplicity. Like apt-proxy, client-side configuration involves rewriting the repositories in sources.list. The server side configuration is simpler, however. Each repository is re-mapped to a single alias, with one entry per line.
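A minimal sketch of both sides, assuming approx's default port of 9999 and a server at the hypothetical address 192.168.1.100:

```
# /etc/approx/approx.conf on the server: one alias per line
ubuntu  http://archive.ubuntu.com/ubuntu
debian  http://ftp.debian.org/debian

# and in each client's sources.list:
# deb http://192.168.1.100:9999/ubuntu intrepid main
```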
Apt-cacher-ng is designed to serve as a drop-in replacement for apt-cacher, with the added benefits of multi-threading and HTTP pipelining lending it better speed. The server runs on the same TCP port, 3142, so transitioning from apt-cacher to apt-cacher-ng requires no changes on the client side. The server-side configuration is different, in that the configuration can be split into multiple external files and incorporate complicated remapping rules.
Apt-cacher-ng does not presently provide manpage documentation, supplying instead a 14-page PDF. Command-line fans may find that disconcerting. Neither application has supplanted the original utility it was designed to replace, but both are relatively recent projects. If apt-proxy or apt-cacher don't do the job for you, perhaps approx or apt-cacher-ng will.
Tools for RPM
The situation for RPM users is less rosy. Of course, as any packaging maven will tell you, RPM and Apt are not proper equivalents. Apt is the high-level tool for managing Debian packages with dpkg. A proper analog on RPM-based systems would be Yum. Unfortunately, the Yum universe does not yet have dedicated caching proxy packages like those prevalent for Apt. It is not because no one is interested; searching for the appropriate terms digs up threads at Linux Users' Group mailing lists, distribution web forums, and general purpose Linux help sites.
One can, of course, use Apt to manage an RPM-based system, but in most cases the RPM-based distributions assume that you will use some other tool designed for RPM from the ground up. In such a case, configuring Apt is likely to be a task left to the individual user, as opposed to a pre-configured Yum setup.
Most of the proposed workarounds for Yum involve some variation of the general-purpose HTTP proxy solution described above, using Squid or http-replicator. If you take this road, it is possible to avoid some of the pitfalls of lumping RPM and general web traffic into one cache by using the HTTP proxy only for package updates. Just make sure that plenty of space has been allocated for the cache.
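If you take the proxy route, Yum itself can be pointed at the package-only proxy so that ordinary web traffic never touches that cache (the address below is a placeholder):

```
# /etc/yum.conf, [main] section
proxy=http://192.168.1.100:3128
keepcache=0   # the proxy keeps the shared copies; no local duplicates needed
```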
Alternatively, it is possible to set up a local mirror of the remote repository, either with a tool such as mrepo or piecemeal. The local repository can then serve all of the clients on the LAN. Note, however, that this method will maintain a mirror of the entire remote repository, not just the packages that you download, and that you will have to update the machine hosting the mirror itself in the old-fashioned manner.
Finally, for the daring, one other interesting discussion proposes faking a caching proxy by configuring each machine to use the same Yum cache, shared via NFS. Caveat emptor.
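A rough sketch of that shared-cache arrangement, with placeholder paths and server names; note that several machines updating concurrently and writing to the same cache is exactly where this scheme can bite:

```
# /etc/fstab on each client: mount the shared cache
server:/export/yum-cache  /var/cache/yum  nfs  defaults  0 0

# /etc/yum.conf on each client, [main] section:
cachedir=/var/cache/yum
keepcache=1    # retain downloaded packages so other machines can reuse them
```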
I ultimately went with apt-cacher for this round of upgrades, on the basis of its simpler configuration and its widespread deployment elsewhere. Thus far, I have no complaints; the initial update went smoothly — Ubuntu boxes moving from 8.04 to 8.10, for the curious. The machines are now all in sync; time will tell whether or not additional package updates will reveal additional problems in the coming months. It's a good thing there are alternatives.
New Releases
Debian 5.0 released
It's official: Debian GNU/Linux 5.0 ("Lenny") is available; lots of details can be found in the release notes. For embedded developers, there are updated versions of Emdebian Grip (which is binary-compatible with regular Debian) and Emdebian Crush (which is not). And, for Debian developers who have been held back by the release freeze: it's now open season in -unstable as the "Squeeze" development cycle begins.
DebXO 0.5 release
DebXO is a version of Debian Lenny (5.0) customized for XO hardware. Click below for more information.
Mandriva Linux 2009 Spring beta released
A beta version of Mandriva Linux 2009.1 has been announced. "The beta release of Mandriva Linux 2009 Spring (code name Margaux) is now available. This beta version provides some updates on major desktop components of the distribution, including KDE 4.2.0, GNOME 2.25.90, Xfce 4.6 RC1, X.org server 1.5, OpenOffice.Org 3.0.1, qt 4.5.0 (RC1)." See also the release notes and errata.
Fedora Unity Announces Fedora 10 Re-spins
The Fedora Unity Project has announced the release of new ISO Re-Spins of Fedora 10. "These Re-Spin ISOs are based on the officially released Fedora 10 installation media and include all updates released as of February 10th, 2009."
DragonFly Release 2.2
DragonFly BSD Release 2.2 is out. See the release notes for more information. "The HAMMER filesystem is considered production-ready in this release; It was first released in July 2008. The 2.2 release represents major stability improvements across the board, new drivers, much better pkgsrc support and integration, and a brand new release infrastructure with multiple target options." DragonFly BSD was forked from FreeBSD 4.8 in June of 2003.
BackTrack 4 Beta Released
The Remote Exploit Development Team has announced the release of BackTrack 4 Beta. "The most significant of these changes is our expansion from the realm of a Pentesting LiveCD towards a full blown "Distribution". Now based on Debian core packages and utilizing the Ubuntu software repositories, BackTrack 4 can be upgraded in case of update. When syncing with our BackTrack repositories, you will regularly get security tool updates soon after they are released."
Arch Linux 2009.02 ISO Release
Arch Linux has announced the release of version 2009.02. "It took us quite a while, but we think the result is worth it: we added some cool new things and ironed out some long-lasting imperfections."
Distribution News
Debian GNU/Linux
Kudos to translators
The Debian Project congratulates the translation team for their work on Debian 5.0 (lenny). "Lenny comes with an installer translated in 63 languages (5 more than Etch), which also means that 150 millions additional people can install Debian in their language."
Fedora
FUDCon Berlin 2009
The Fedora Project has announced the date and location of the next Fedora Users and Developers Conference (FUDCon); Berlin, Germany from June 26 - 28, 2009. "LinuxTag takes place from Wednesday June 24 - Saturday June 27, and FUDCon will take place from Friday the 26th - Sunday the 28th, meaning that there will be two days of overlap during which a large number of people who would otherwise never attend a FUDCon will have a chance to see Fedora up close, and in great detail."
FUDCons and FADs
Fedora Project Leader Paul Frields blogs about Fedora events. "In the past, besides the notable trade and community shows, Fedora also started, years ago, the Fedora Users and Developers Conference, or FUDCon for short. FUDCon is not just an event that allows Fedora community members to get together, interact, share ideas, and produce results — it's Red Hat's gift back to the community, a way of thanking people for their help with the Fedora Project. Typically a FUDCon is several days long, and includes self-organizing hackfests around key project areas or goals, and a BarCamp-style day of technical sessions where contributors deliver talks in an informal and participation-oriented environment."
Fedora Board Recap 2009-02-10
The Fedora Board Recap for February 10, 2009 covers Follow-up To Previous Business, Max Spevack Update, Fedora Store in EMEA, Denoting Fedora sponsorship, Community Architecture & LXDE In Fedora, FUDCon LATAM 2009 and FUDCon EMEA 2009 plans and Future Business.
Gentoo Linux
Council meeting summary for 12 February 2009
The Gentoo council met February 12, 2009. Topics discussed included the need for a dedicated secretary, elections, technical issues, open bugs and non-technical issues.
Distribution Newsletters
Arch Linux Newsletter for February, 2009
The Arch Linux Newsletter for February is out. "Welcome to another issue of the Arch Linux Newsletter. This newsletter covers from December to February. Not too much content, just the relevant. We have an excellent interview with Jan de Groot, the GNOME packager. This month feature many Arch in the media articles, proving the success of Arch Linux in general. Also we have a all the common sections updated with new information for your reading pleasure."
DistroWatch Weekly, Issue 290
The DistroWatch Weekly for February 16, 2009 is out. "Without a shadow of a doubt, the biggest story of the past week was the release of Debian GNU/Linux 5.0 'Lenny'. After nearly two years of continuous development and a controversial vote or two, we finally get the chance to take a quick look at the finished product - the new live media as well as the 'netinst' network installation CD. In other news, Ubuntu announces that Jaunty will ship with Linux kernel 2.6.28, Wiley publishes OpenSolaris Bible and makes three sample chapters available for free download, openSUSE's Zypper gains Bash-completion improvements, Red Hat publishes a 'State of the Union' address, the Woof project releases version 0.0.0 with support for Arch Linux, and Cuba develops their own Gentoo-based variant distribution called Nova. Also in this issue are links to two interviews - the first with Steve MacIntyre, the head of the Debian project, and the second with Scott Ritchie, an Ubuntu community developer."
Fedora Weekly News #163
The Fedora Weekly News for February 15, 2009 is out. "This week's issue provides some detail on the upcoming Fedora Activity Day (FAD) at Southern California Linux Expo (SCaLE), many posts from the Fedora Planet blogosphere, and selected wonderful event reports from FOSDEM. We welcome a brand new Quality Assurance beat this issue, with coverage of the latest test day focusing on iSCSI for Fedora 11, summary of the latest QA weekly meeting, and discussion of the process for critical-release bugs. In Development news, discussion of FLOSS multimedia codec support in Fedora, preview looks at F11 release notes, and the availability of CrossReport, a tool to evaluate the ease with which applications can be ported to Windows using the MinGW libraries." Plus several other topics.
The Mint Newsletter - issue 75
The Linux Mint Newsletter is out. "The forum has moved to Canada - works much better now. For the time being activation and notification mail are not sent, unknown why. The XFCE Community Edition (CE) is almost ready for its final release. The KDE and Fluxbox CEs are almost ready for their first RC to be released" And much more.
openSUSE Weekly Newsletter FOSDEM 2009 issue
openSUSE has released a special issue of its newsletter to cover FOSDEM 2009. Topics include Before FOSDEM, FOSDEM LIVE, Media, Review - After FOSDEM 2009, and Credits.
OpenSUSE Weekly News/59
This issue of the OpenSUSE Weekly News covers the special FOSDEM 2009 edition, OpenOffice_org 3.0.1 final available, Jan-Christoph Bornschlegel: Product Creation with the openSUSE Build Service, Henne Vogelsang: Fosdem talk about collaboration features and Contrib, kamilsok: installing 64bit Java on openSUSE 11.1, and much more.
Ubuntu Weekly Newsletter #129
The Ubuntu Weekly Newsletter for February 14, 2009 covers: Ubuntu LoCo Teams Meeting, New MOTU's, Rockin' LoCo Docs Day, Ubuntu Hug Day, Improved mail server stack: Testing needed, Drupal 5.x and 6.x LoCo Suite Released, Ubuntu Honduras being organized, Launchpod #17, Triage in Launchpad suite, PPA page performance improvements, Ubuntu Training for USA, HP Mini Mi Screenshots, Server Team Meeting Feb. 10th, and much more.
Newsletters and articles of interest
Top 5 Netbook Linux Distributions (Internetling)
Internetling looks at the top 5 Linux netbook distributions. "Some of the advantages of running Linux on a sub-notebook are a smaller memory footprint, better security and tons of free applications right out of the box. If you decide to install it by yourself, you may encounter some compatibility problems here and there, therefore it is wiser to buy one of the more widely-sold netbooks such as the Eee PC or the Acer Aspire One."
Cuba launches Gentoo Linux distro (DesktopLinux)
DesktopLinux covers the launch of Gentoo-based "Nova," Cuba's new distribution. "Developed by the Universidad de las Ciencias Informáticas (UCI), Nova was launched at the recent International Conference on Communication and Technologies, says the story in Reuters. Despite ongoing trade embargoes from the U.S., and what would seem to be a natural fit between open source technology and socialism, Cuba is still primarily a Microsoft Windows-centric country, according to the story. Yet, the government has come to believe that Windows could be a threat because it believes U.S. security agencies have access to Microsoft codes, says the story. Plus, the trade embargoes make it difficult to get legal, supported copies of the software for regular updates, says Reuters."
Debian 5's Five Best Features (ComputerWorld)
Steven J. Vaughan-Nichols shares his five favorite features of the newly released Debian 5.0 "Lenny". "1) X.org 7.3 integration. It used to be setting up your screen in Linux was a real pain-in-the-rump. With X.org 7.3 the X-server behind Linux's most common GUIs (graphical user interfaces), the program automatically take care of setting up your display resolution."
Page editor: Rebecca Sobol
Development
Google's Summer of Code: Past and Future
Since 2005, Google has run its Summer of Code program each (northern hemisphere) summer, offering college students $4500 and a T-shirt to work on an open-source project instead of flipping burgers. Students involved often report that the program has allowed them to get their dream jobs or get into their top-choice schools. For the projects fortunate enough to be accepted, the Summer of Code offers a number of benefits:
- Increased visibility of submitted code
- $500 per student from Google, to be used for any purpose
- New developers, if you can recruit them by the end of summer
- Experience in mentoring people who may have no previous familiarity with your project, and connection with a community of people doing their best at the same.
Last summer's program was about three times bigger than the first year's (see the tables below). Because of the economic downturn, the 2009 program will be capped at 1000 students, a slight decrease from 1125 last year. The open-source community is fortunate that Google continues to offer this program at all, since it has been laying off many of its own employees. With 1000 students involved, this year's program will amount to a commitment to open-source of more than $5 million.
To better understand the last few years and come up with some estimates about this year's program, I researched data from the previous four years, calculated a few statistics, and projected a few more. There were three numbers I was curious about: student acceptance rates, organization acceptance rates, and student-to-organization ratios.
Let's start with student acceptance. In the below table, you can see the number of applied and accepted students for each year. Bold text indicates a projection and bold, italicized text indicates a number derived from a projection. The next column has the growth rate in number of applicants. I used last year's growth rate as a conservative estimate of this year's increase, then calculated the number of applicants from a 15% increase to last year's count. Unsurprisingly, a growing number of applicants coupled with a lower number of available slots would reduce the acceptance rate. For open-source projects, this implies that students who make the cut will be even better than last year. Unfortunately, that means there will be more tough choices and deserving students who will not make the cut.
Year   Accepted   Applied   Applicant Growth   Acceptance Rate
2009   1000       8200      15%                12%
2008   1125       7100      15%                16%
2007    900       6200      103%               15%
2006    600       3050      -65%               20%
2005    420       8750                         5%
Next, let's take a look at stats for the open-source projects involved in the Summer of Code. From the past two years, we can see that more than 1/3 of applying organizations get accepted. This seems high enough to be worth the effort of applying, which is primarily composed of thinking of project ideas. This exercise can be valuable for recruiting new developers outside of the Summer of Code, too.
One number that turned out to be surprisingly informative was students per organization, which has stayed remarkably consistent since 2006. Using the average of this number over the past 3 years, I estimated the likely number of organizations in this year's program and came up with around 150. If the organization applications increase at the same rate as they did from 2007 to 2008, the acceptance rate for organizations could drop below 20%.
Year   Accepted   Applied   Acceptance Rate   Students/Organization
2009    150                                    6.4
2008    175       500       35%                6.4
2007    130       300       43%                6.9
2006    100                                    6.0
2005     40                                   10.5
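The projections above are simple arithmetic; for the curious, a short script that reproduces them (all input numbers come from the two tables; the exact outputs of 155 organizations and a 19% rate are rounded in the text to "around 150" and "below 20%"):

```python
# Back-of-the-envelope reproduction of the article's 2009 projections.

# Students: assume applicants grow 15%, as they did from 2007 to 2008.
applied_2009 = round(7100 * 1.15, -2)      # ~8200 applicants
student_rate = 1000 / applied_2009         # only 1000 slots available
print(f"student acceptance: {student_rate:.0%}")   # ~12%

# Organizations: students-per-org has hovered around 6.4 since 2006.
per_org = (6.0 + 6.9 + 6.4) / 3
orgs_2009 = round(1000 / per_org)          # ~155, "around 150"
print(f"organizations: {orgs_2009}")

# If org applications keep growing at the 2007->2008 rate (300 -> 500):
applied_orgs = 500 * (500 / 300)           # ~833 applications
print(f"org acceptance: {orgs_2009 / applied_orgs:.0%}")   # below 20%
```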
In addition to a few guesses about numbers, there's one major change to the program that we know will happen this year: the move to an open-source web application called Melange. This will enable anyone involved in the program to add new features or bugfixes on-demand. Since Google's open-source team is typically extremely busy, this means anyone who wants a feature can add it themselves as fast as they want to. One other interesting feature is that it should allow easy collection of various statistics across the entire program.
In addition, Melange's open-source nature means organizations besides Google can use the same application to run their own programs similar to Summer of Code. Work on Melange is still underway and the current developers would appreciate help in getting it ready for this year's program. So please get in touch if either of those reasons motivate you and you want to work with Django. At the moment, Melange runs on Google App Engine, but contributors are welcome to add new back-ends, according to Leslie Hawthorn, who runs the Summer of Code.
Last year's mentor summit and later discussions resulted in a wiki to collect the wisdom and experience of mentoring organizations over the years. This wiki is now hosted by the OSU Open Source Lab and was recently opened to the general public. It's only editable by Summer of Code mentors, but anyone can read and learn from it. It seems likely that it could become a valuable resource for organizations mentoring any new developers, whether within this program or outside of it. In addition, session notes from the mentor summits are also available on the wiki.
To find out more details about this year's Summer of Code, check out the FAQ. The application period for organizations is March 9 to 13, which gives you a few weeks to think of projects. The FAQ is a good starting point; it describes what a strong organization application looks like. Potential mentors will want to read the mentor advice page. For students, the application period is March 23 - April 3. If you are a student who is serious about getting accepted, read the student advice page and get in touch with organizations as soon as they have been announced.
System Applications
Audio Projects
JACK 1.9.1 released
Version 1.9.1 of JACK, the JACK Audio Connection Kit, has been announced. "Future JACK2 will be based on C++ jackdmp code base. Jack 1.9.1 is the "renaming" of jackdmp and the result of a lot of developments started after LAC 2008. What is new: - A lot of improvements and bug fixes in NetJack2, that is now working more reliably. - Synchronize the JACK2 codebase with recent changes in JACK1 API (in particular some thread related functions as well as ALSA backend, ring buffer code...) - A lot of small bug fixes and improvements everywhere."
Clusters and Grids
GridTrust: Enforcer 1.0 released (SourceForge)
Version 1.0 of GridTrust: Enforcer has been announced. "The overall objective of the GridTrust project is to develop the technology to manage trust and security for the Next Generation Grids from the requirement level down to the application, middleware and foundation levels. GridTrust project team is pleased to announce the 1.0 release of Enforcer. This release is the first release of Enforcer."
Database Software
MySQL Community Server 5.0.77 released
Version 5.0.77 of MySQL Community Server has been announced. "The following section lists important, incompatible and security changes since the previous (binary) MySQL Community Server 5.0.67 release..."
PostgreSQL Weekly News
The February 15, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
SQLite release 3.6.11 announced
Version 3.6.11 of SQLite has been announced. "Changes associated with this release include the following: * Added the hot-backup interface. * Added new commands ".backup" and ".restore" to the CLI. * Added new methods backup and restore to the TCL interface. * Improvements to the syntax bubble diagrams * Various minor bug fixes."
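The hot-backup interface announced here lets one database be copied into another while the source remains live, rather than requiring an exclusive lock for the duration. As an illustration (not part of the announcement), modern versions of Python's sqlite3 module expose this same interface as Connection.backup():

```python
import sqlite3

# Populate a source database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",)])
src.commit()

# Copy it into another connection via the hot-backup interface; the
# source database stays usable by other clients while pages are copied.
dst = sqlite3.connect(":memory:")
src.backup(dst)

print(dst.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints: 2
```

The same operation is what the new ".backup" and ".restore" commands in the SQLite CLI perform.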
Peer to Peer
Tahoe filesystem 1.3 announced
Version 1.3 of allmydata.org's Tahoe filesystem has been announced. "We are pleased to announce the release of version 1.3.0 of "Tahoe", the Least Authority Filesystem. Tahoe-LAFS is a secure, decentralized, fault-tolerant filesystem. All of the source code is available under a choice of two Free Software, Open Source licences. This filesystem is encrypted and distributed over multiple peers in such a way that it continues to function even when some of the peers are unavailable, malfunctioning, or malicious."
Web Site Development
lighttpd 1.4.21 released
Version 1.4.21 of lighttpd, a lightweight web server, has been announced. "Four and a half months after the release of 1.4.20 comes a new version in the stable branch of lighty: 1.4.21 is here. It is a bugfix release but also contains 3 small new features. We would like to thank everybody who reported bugs, especially the ones who provided patches."
mnoGoSearch 3.3.8 released
Version 3.3.8 of the mnoGoSearch web site search engine has been announced. See the change log for more information.
Samizdat 0.6.2 released
Version 0.6.2 of Samizdat has been announced; it includes some new capabilities and security fixes. "Samizdat is a generic RDF-based engine for building collaboration and open publishing web sites. Samizdat provides users with means to cooperate and coordinate on all kinds of activities, including media activism, resource sharing, education and research, advocacy, and so on. Samizdat intends to promote values of freedom, openness, equality, and cooperation."
Miscellaneous
syslog-ng 3.0 released
Version 3.0 of syslog-ng has been announced. "After the release of its commercial version last fall, syslog-ng Open Source Edition 3.0 is finally available. The syslog-ng Open Source Edition application is a mature, stable system logging application that has become the most common alternative logging server of the Linux/Unix world. The syslog-ng application is the default logging solution of the SUSE distributions, and is estimated to be used by tens of thousands of organizations on hundreds of thousands of computers. Version 3.0 contains several new features that strengthen the range of syslog-ng's functionalities."
Desktop Applications
Audio Applications
Das_Watchdog 0.9.0 is available
Version 0.9.0 of Das_Watchdog has been announced; it includes bug fixes, Fedora 10 support, and improved documentation. "Das_Watchdog is a general watchdog for the linux operating system that should run in the background at all times to ensure a realtime process won't hang the machine."
Business Applications
PMDEX 4.0 released (SourceForge)
Version 4.0 of Project Manager Dexea has been announced; it adds several new capabilities. "Project Manager Dexea is a simple multiuser timetracking & multiproject management tool with a intuitive and easy to use web interface. Control your projects with a lot of charts, gantts and statistics."
Desktop Environments
Announcing the Awesome project
Julien Danjou has announced the Awesome project. "Speaking of awesome, I'd like to bring your attention to the awesome project, a window manager designed rather as a frame-work than as a classical flat-configuration-file-driven window manager. It allows to do almost anything you can expect with a window manager, and probably more. It even respects and implements many of the Freedesktop standard, like EWMH, XDG base directory, system tray or notifications. We are also the first, and still one of the only, window manager to use the X C Bindings, dropping Xlib usage."
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Accerciser 1.5.91 (bug fixes and translation work)
- Alacarte 0.11.9 (code cleanup and translation work)
- Anjuta 2.25.902 (bug fixes)
- Brasero 2.25.91 (bug fixes and translation work)
- Cheese 2.25.91 (bug fixes and translation work)
- Deskbar-Applet 2.25.91 (bug fixes and translation work)
- Empathy 2.25.91 (new features, bug fixes and translation work)
- Evince 2.25.91 (bug fixes and translation work)
- Eye of GNOME 2.25.91 (bug fixes and translation work)
- GCalctool 5.25.91 (bug fixes, documentation and translation work)
- gconf-editor 2.25.91 (new features, bug fixes and translation work)
- Gdl 2.25.91 (code cleanup and documentation work)
- giggle 0.4.91 (bug fixes)
- GLib 2.19.7 (new features, bug fixes and translation work)
- GLib 2.19.8 (bug fixes)
- gnome-applets 2.25.91 (bug fixes and translation work)
- GNOME DVB Daemon 0.1.4 (new features and bug fixes)
- gnome-games 2.25.91 (new features, bug fixes and translation work)
- gnome-keyring 2.25.91 (new features, bug fixes and translation work)
- gnome-mud 0.11.1 (new features, bug fixes and translation work)
- GNOME Power Manager 2.24.4 (bug fixes)
- gnome-speech 0.4.25 (build fix)
- GOK 2.25.91 (bug fixes and translation work)
- GParted 0.4.3 (bug fixes)
- GTK+ 2.15.4 (new features, bug fixes and translation work)
- Gtk2-Perl 2.25.91 (new features and bug fixes)
- gtk-engines 2.17.3 (bug fixes and translation work)
- Gwget 1.00 (new features, bug fixes and translation work)
- Gwget 1.0.1 (bug fixes)
- gyrus 0.3.8 (bug fixes and translation work)
- Libgda 3.99.11 (bug fixes and documentation work)
- Marlin 0.13 (unspecified)
- MonoDevelop 2.0 Beta 1 (new features)
- mousetweaks 2.25.91 (bug fixes and translation work)
- Orca 2.25.91 (bug fixes and translation work)
- Quick Lounge Applet 2.13.2 (new features and translation work)
- seahorse 2.25.91 (new features, bug fixes, documentation and translation work)
- Tegaki 0.1 (initial release)
- Tomboy 0.13.5 (bug fixes and translation work)
KDE Software Announcements
The following new KDE software has been announced this week:
- 2ManDVD 0.1 (initial release)
- 2ManDVD 0.2 (new features, bug fixes and translation work)
- choqoK 0.4 (new features and bug fixes)
- digiKam 0.10.0-rc2 for KDE4 (bug fixes)
- filelight 1.9-alpha (KDE4 port)
- FreeRemote 0.1.0 (initial release)
- Frescobaldi 0.7.5 (new features and translation work)
- kipi-plugins 0.2.0-rc2 for KDE4 (unspecified)
- KlamAV 0.45 (new features, bug fixes and translation work)
- KOceanSaver 0.3 (code cleanup)
- konqil.icio.us 3.1 (bug fix)
- KPackageKit 0.4.0 (bug fixes)
- KTorrent 3.2 (new features and bug fixes)
- Linux Unified Kernel 0.2.3 (new features)
- Perl Audio Converter 4.0.4 (bug fixes and translation work)
- PySMSsend 1.40 (bug fixes and code cleanup)
- QSvn 0.8.1 (bug fixes)
- QTrans 0.2.1.4 (new feature)
- QTrans 0.2.1.4-2 (unspecified)
- Radios Francaise 0.1 (initial release)
- Radio Sweden 0.2 (compatibility update)
- SIR 1.9.5 (new features and translation work)
- Social Networks Visualiser 0.50 (new features and bug fixes)
- Social Networks Visualiser 0.51 (bug fixes and translation work)
- Tellico 1.3.5 (new features and bug fixes)
- uRSSus 0.2.11 (unspecified)
- Valknut 0.3.23 / 0.4.9 (new features and bug fixes)
- webdav 0.0.1 (initial release)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- libX11 1.2 (new features and bug fixes)
- libxcb 1.2 (bug fix)
- pixman 0.14.0 (new features and performance improvements)
- xcb-proto 1.4 (new features and bug fixes)
- xf86-input-fpit 1.3.0 (bug fixes and code cleanup)
- xf86-video-geode 2.11.1 (bug fixes)
- xf86-video-vesa 2.2.0 (code cleanup and documentation work)
- xorg-server 1.5.99.903 (new features and bug fixes)
Electronics
Herb status and eye candy
A new status report has come out of the Herb VLSI design project. "There are a ton of changes happening in Herb (a complete set of tools for VLSI design). Jan Schmidt has joined in to hack on the code. It would be nice to have several more C developers join us, and this is an extremely interesting project which uses GLib, so why shouldnt you?"
Financial Applications
JStock - Stock Market Software: 1.0.3 Released (SourceForge)
Version 1.0.3 of JStock has been announced; it adds some new features and bug fixes. "JStock is a free stock market software, which supports multiple countries' stock market. (11 countries at this moment) It provides Real-Time stock info, Stock indicator editor, Stock indicator scanner, Portfolio management and Market chit chat features."
Interoperability
Wine 1.1.15 announced
Version 1.1.15 of Wine has been announced. Changes include: "Gecko engine update. Better region support in GdiPlus. Support for cross-compilation in winegcc. Beginnings of MS Text Framework support. Many fixes to the regression tests on Windows. Various bug fixes."
Multimedia
Elisa Media Center 0.5.27 released
Version 0.5.27 of Elisa Media Center has been announced. "This release is a 'light weight' release, which means it is supposed to be pushed to the users through our automatic plugin update system. That is why there is no new Elisa installer nor any new packages from our side: use the existing ones for 0.5.27; with the default configuration, they should upgrade automatically to 0.5.28, asking you to restart Elisa when everything is downloaded. Tarballs are provided for packagers who want to disable the automatic plugin update system on their distribution, so that they can make new packages for their users to be able to update (I strongly advise that, the new video section is worth it)."
Music Applications
Marlin 0.13 released
Version 0.13 of Marlin has been announced. "After far too long, I got round to releasing a new version of Marlin. Marlin is a sample editor based around GStreamer, JACK and GTK."
Office Suites
KOffice 2.0 Beta 6 Released (KDEDot)
KDE.News covers the release of KOffice 2.0 Beta 6. "The KOffice developers have released their sixth beta for KOffice 2.0. With this release we start to approach the end of the beta series and move towards the Release Candidates. As usual the list of changes is rather long, but it is obvious that the really large issues are starting to dry up."
Video Applications
Dirac 1.0.2 released
Version 1.0.2 of the Dirac video CODEC has been announced. "This is a minor release complying with the latest Dirac Bytestream Specification 2.2.3."
Miscellaneous
BleachBit 0.3.2 released
Version 0.3.2 of BleachBit has been announced. "BleachBit is an Internet history, locale, registry, privacy, and temporary file cleaner for Linux on Python v2.4 - v2.6. Notable changes for 0.3.1: * Clean apt cache, yum cache, rotated system logs, Skype chat logs, Transmission cache, Exaile cache, and more localizations. * Fix bug in selecting trash for cleaning. * Fix permission of configuration files created when running in sudo mode. * Fix unusual situation where selected language could disappear. * Fix situation where BleachBit could fail to start. * Add French, Arabic, and Turkish translations."
VPTerminal: initial release announced (SourceForge)
The initial release of VPTerminal has been announced. "RS232 Terminal Program. The first public release of VPTerminal is available at SourceForge."
Languages and Tools
C
GCC 4.4.0 Status Report
The February 16, 2009 edition of the GCC 4.4.0 Status Report has been published. "The trunk remains Stage 4, so only fixes for regressions (and changes to documentation) are allowed. As stated previously, the GCC 4.4 branch will be created when there are no open P1s and the total number of P1, P2, and P3 regressions is under 100. We've achieved that, but are still waiting for the FSF to provide instructions regarding the installation of the new run-time library license."
Caml
Caml Weekly News
The February 17, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Perl
Parrot 0.9.1 released
Version 0.9.1 of Parrot has been announced. "On behalf of the Parrot team, I'm proud to announce Parrot 0.9.1 "Final Countdown." Parrot (http://parrot.org/) is a virtual machine aimed at running all dynamic languages."
The Periodic table of Perl operators
An updated version of the periodic table of the operators for Perl 6 has been posted. It is truly a work of art, to say the least; suitable for framing.
Python
ftputil 2.4 released
Version 2.4 of ftputil, a high-level FTP client library for Python, has been announced. "The ``FTPHost`` class got a new method ``chmod``, similar to ``os.chmod``, to act on remote files. Thanks go to Tom Parker for the review. There's a new exception ``CommandNotImplementedError``, derived from ``PermanentError``, to denote commands not implemented by the FTP server or disabled by its administrator. Using the ``xreadlines`` method of FTP file objects causes a warning through Python's warnings framework. Upgrading is recommended."
Numexpr 1.2 released
Version 1.2 of Numexpr has been announced. "Numexpr is a fast numerical expression evaluator for NumPy. With it, expressions that operate on arrays (like "3*a+4*b") are accelerated and use less memory than doing the same calculation in Python. The main feature added in this version is the support of the Intel VML library (many thanks to Gregor Thalhammer for his nice work on this!). In addition, when the VML support is on, several processors can be used in parallel (see the new `set_vml_num_threads()` function)."
Released Python 3.0.1
Python 3.0.1 has been unleashed. This is the first bugfix release of the new 3.0 branch. "Python 3.0 represents a major milestone in Python's history. This new version of the language is incompatible with the 2.x line of releases, while remaining true to BDFL Guido van Rossum's vision." Get it, test it.
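The incompatibility with the 2.x line is concrete, not cosmetic. A minimal illustration (my own examples, not from the release notes) of a few changes that break 2.x code under Python 3:

```python
# print is now a function, not a statement.
print("hello", "world", sep=", ")  # prints: hello, world

# The / operator performs true division even on integers; // truncates.
assert 3 / 2 == 1.5
assert 3 // 2 == 1

# Strings are Unicode by default; bytes are a distinct type that must
# be explicitly encoded and decoded.
data = "café".encode("utf-8")
assert isinstance(data, bytes)
assert data.decode("utf-8") == "café"
```

Code relying on the 2.x behavior of any of these will need porting before it runs on the 3.0 branch.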
Python-URL! - weekly Python news and links
The February 17, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
Tcl-URL! - weekly Tcl news and links
The February 12, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Version Control
Bazaar 1.12 released
Version 1.12 of Bazaar has been announced. "Bazaar (bzr) is a decentralized revision control system, designed to be easy for developers and end users alike. Bazaar is part of the GNU project to develop a complete free operating system. This release of Bazaar contains many improvements to the speed, documentation and functionality of ``bzr log`` and the display of logged revisions by ``bzr status``. bzr now also gives a better indication of progress, both in the way operations are drawn onto a text terminal, and by showing the rate of network IO."
List of proposed backward incompatible changes to git
Junio C Hamano has sent out a list of proposed backwards-incompatible changes to git. "Here is a list of possible future changes to git that are backward incompatible that are under discussion on the git mailing list. None of them will be in the upcoming 1.6.2 release, but some of them are likely to appear in future versions. If you think we should not introduce some of the listed changes, here is a chance to voice your opinions and make a convincing argument against them, so please do so."
Miscellaneous
Gerrit Code Review 2.0.3 announced
Version 2.0.3 of Gerrit Code Review has been announced. "Gerrit is a web based code review system, facilitating online code reviews for projects using the Git version control system. Gerrit makes reviews easier by showing changes in a side-by-side display, and allowing inline comments to be added by any reviewer."
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Bruce Perens: How Many Open Source Licenses Do You Need? (IT Management)
Bruce Perens wonders how many open source licences we really need. "The Open Source initiative has, to date, approved 73 licenses. How many do you really need? If you're a company or individual producing Open Source software, no more than 4. And you can get along with just 2 of them."
Trade Shows and Conferences
Open Source News from FOSDEM 2009 - Day 1 (LXer)
LXer covers the first day of FOSDEM 2009. "This weekend, the 9th Free & Open Source Developers' Europe Meeting (FOSDEM) took place at the Université Libre Bruxelles (ULB) in Brussels. Your editors Sander Marechal and Hans Kwint attended this meeting to find out for you what's hot, new in the area of the Linux environment and might be coming to you in the near future. Here is the blow-by-blow of the first day with talks about Mozilla's future, the role of Debian, two OSI talks, Reverse engineering and much, much more."
Open Source News from FOSDEM 2009 - Day 2 (LXer)
LXer has a look at FOSDEM Day 2. "In the weekend of 7 and 8 February, the 9th Free & Open Source Developers' Europe Meeting (FOSDEM) took place at the Université Libre Bruxelles (ULB) in Brussels. Your editors Sander Marechal and Hans Kwint attended this meeting to find out for you what's hot, new in the area of the Linux environment and might be coming to you in the near future. This is our report of the second day covering the talks about Thunderbird 3, Debian release management, Ext4, Syslinux, CalDAV and more."
Companies
Freescale adds Android, Xandros netbook support (ZDNet UK)
ZDNet UK reports that Freescale's new ARM processors can now support the Android and Xandros open-source operating systems. "New industry agreements pave the way for non-Intel netbooks, Freescale said, with "dramatically longer" battery life and better portability. Up to half the netbook market — expected to double to 30 million units in 2009 — may go to ARM, the company predicted."
Microsoft blinks first on interoperability with Red Hat (451 CAOS Theory)
451 CAOS Theory takes a brief look at an interoperability deal between Microsoft and Red Hat. "Under their agreement to work together Microsoft and Red Hat will provide testing, validation and coordinated technical support for mutual customers using server virtualization. Red Hat has joined Microsoft's Server Virtualization Validation Program, and Microsoft is now a Red Hat partner for virtualization interoperability and support." (Thanks to Don Marti)
Linux Adoption
Cuba launches own Linux variant to counter U.S. (Reuters)
Reuters reports on the launch of the Cuban Nova distribution. "Cuba launched its own variant of the Linux computer operating system this week in the latest front of the communist island's battle against what it views as U.S. hegemony. The Cuban variant, called Nova, was introduced at a Havana computer conference on "technological sovereignty" and is central to the Cuban government's desire to replace the Microsoft software running most of the island's computers."
Interviews
Interview: Eigen Developers on 2.0 Release (KDEDot)
Jonathan Riddell interviews the developers of Eigen. "Recently Eigen 2.0 was released. You might already have heard about Eigen, it is a small but very high performance maths library which has its roots in KDE. Below, the two core developers are interviewed about it."
Reviews
The Buzztard Project, Part 1 (Linux Journal)
Dave Phillips takes a look at Buzztard 0.4.0. "Buzztard is a good example of the modern design for a music tracker. The program provides a variety of elements necessary for music production, including a composition interface, an instrument design facility, internal audio effects processing, and much more. Buzztard follows the design considerations for the famous Buzz tracker for Windows. The tracker composition interface resembles the standard UI for most trackers, including pages for pattern and song creation. Buzz (and thus Buzztard) added further production amenities, most notably the deployment of "machines", Buzz-speak for instruments designed within and for the tracker itself. I'll have more to say about machines later, but now let's see what we need to build and install the latest Buzztard."
Palm pulls back the curtain on webOS technical details (ars technica)
Ars technica shares some advance information on Palm's Linux-based webOS platform. "The platform supports headless background services that interact with the user through passive notifications and interactive dashboard elements. Persistent data storage in webOS is facilitated by the HTML 5 database features. The platform's integrated media server supports audio and video playback through the open source GStreamer media engine."
Miscellaneous
Silverlight for Linux hits with Microsoft punch (The Register)
The Register discusses the release of Moonlight for Linux. "An open-source version of Silverlight has been released with Microsoft's support, as Flash rival Adobe began crowing about the new media player's death. Moonlight 1.0 from the Novell-backed Mono team was posted Wednesday, having passed all of Microsoft's regression tests. Moonlight plugs into Firefox and is available for all major Linux distributions including openSUSE, SUSE Linux Enterprise, Fedora, Red Hat, and Ubuntu. Moonlight builds on Silverlight 1.0, coming with a graphics pipeline, video and audio frameworks, and a JavaScript bridge that use the browser's JavaScript engine to execute."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
Apple: why iPhone jailbreaking should not be allowed
Here, by way of this EFF advisory, is Apple's plea to the Library of Congress [PDF] against a DMCA exemption which would allow jailbreaking on locked phones. "The acts of circumvention that the exemption would permit would result in infringing uses of copyrighted firmware stored on smart phones and of copyrighted content that runs on those phones, thereby failing the fundamental prerequisite requirement of Section 1201(a)(1)(B) for an exemption. Although that fact alone should preempt any need for further consideration, the proposed exemption should also be rejected because of a host of bad consequences that will flow from it. In the case of the iPhone, it will destroy the 'chain of trust' that Apple has carefully engineered into the product to protect users from serious functional problems that often result from unauthorized modifications to the device's OS."
Call for Prior Art
Red Hat and Novell have been accused of patent infringement. "IP Innovation L.L.C. and Technology Licensing Corporation (collectively, "Plaintiffs") have brought a patent-infringement action against Red Hat, Inc., and Novell, Inc., alleging infringement of U.S. Patent Numbers 5,072,412; 5,533,183; and 5,394,521. The patents concern a user interface that has multiple workspaces. The Plaintiffs' complaint identifies as accused products "Red Hat Linux system," the "Novell Suse Linux Enterprise Desktop," and the "Novell Suse Linux Enterprise Server."" This site has a link where you can submit prior art to combat this claim.
Sun RPC code to be relicensed
Here's a weblog entry by Simon Phipps describing the difficulties involved in changing the licensing of really old software - and the Sun RPC code in particular. This code has been the subject of some worry for years now, since its license is not truly free. At the end of Simon's posting, he announces: "On Saturday I was able to tell Europe's Free Software developers that the licenses on the RPC code are no longer a barrier to Free software - we'll change the license to Sun's copyrights in the RPC code to a standard 3-clause BSD license, allowing inheritance of that licensing by both Debian and Fedora. I'm delighted to have been able to fix this problem, which arose not because of failure but because of the success of software freedom over many years and because of Sun's early commitment to it."
Commercial announcements
Red Hat Moves to Expand Server Virtualization Interoperability
Red Hat has announced a virtualization interoperability agreement with Microsoft. "Red Hat, Inc., the world's leading provider of open source solutions, today announced that, in response to strong customer demand, it has signed reciprocal agreements with Microsoft Corporation to enable increased interoperability for the companies' virtualization platforms. Each company will join the other's virtualization validation/certification program and will provide coordinated technical support for their mutual server virtualization customers. The reciprocal validations will allow customers to deploy heterogeneous, virtualized Red Hat and Microsoft solutions with confidence."
Contests and Awards
The 2008 LinuxQuestions.org Members Choice Award winners
The 2008 LinuxQuestions.org Members Choice Award Winners have been announced. "The polls are closed and the results are in. We had a record number of votes cast for the eighth straight year. Congratulations should go to each and every nominee. We once again had some extremely close races and a couple multi-year winners were unseated."
Calls for Presentations
ACM CCS '09: Call for workshop proposals
A call for proposals has gone out for the 2009 ACM Conference on Computer and Communications Security. The event takes place in Chicago, IL on November 9-13, 2009. Submissions are due by February 28. "Proposals are solicited for workshops to be held in conjunction with ACM CCS 2009. Each workshop provides a forum to address a specific topic at the forefront of security research. A workshop must be a full day in length."
2009 Linux Plumbers Conference Call For Topics
The 2009 Linux Plumbers Conference is currently in an early stage of its organization process. To carry things forward, the LPC organizers are currently soliciting ideas for overall topics to be discussed at this year's event. They have asked, in particular, for input from LWN readers, who are invited to post their ideas as comments to this article. If you have some thoughts on what would make a good discussion topic, please take a moment to post them here.
Upcoming Events
PyCon blog badges are available
Blog badges for promoting the upcoming PyCon are available. "If you blog, please let your readers know about PyCon. A blog badge is a nice way to enhance such a post."
Events: February 26, 2009 to April 27, 2009
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| February 24–26 | VMworld Europe 2009 | Cannes, France |
| February 25–27 | German Perl Workshop | Frankfurt Main, Germany |
| February 27 | PHP UK Conference | London, UK |
| February 28 | Belgian Perl Workshop | Leuven, Belgium |
| February 28 | uCon Security Conference | Recife, Brazil |
| March 1–4 | Global Ignite week | Online |
| March 3–8 | CeBIT 2009 | Hanover, Germany |
| March 4–7 | DrupalCon DC 2009 | Washington D.C., USA |
| March 6 | Dutch Perl Workshop | Arnhem, The Netherlands |
| March 7 | Ukrainian Perl Workshop 2009 | Kiev, Ukraine |
| March 8–11 | Bossa Conference 2009 | Recife, Brazil |
| March 9–13 | Advanced Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, GA, USA |
| March 9–12 | O'Reilly Emerging Technology Conference | San Jose, CA, USA |
| March 12–15 | Pingwinaria 2009 - Polish Linux User Group Conference | Spala, Poland |
| March 14 | OpenNMS User Conference (Europe) 2009 | Frankfurt Main, Germany |
| March 14–15 | Chemnitzer Linux Tage 2009 | Chemnitz, Germany |
| March 16–20 | Android Bootcamp with Mark Murphy | Atlanta, USA |
| March 16–20 | CanSecWest Vancouver 2009 | Vancouver, BC, Canada |
| March 18 | Linuxwochen Österreich - Klagenfurt | Klagenfurt, Austria |
| March 21–22 | Libre Planet 2009 | Cambridge, MA, USA |
| March 23–27 | iPhone Bootcamp | Atlanta, Georgia, USA |
| March 23–April 3 | Google Summer of Code '09 Student Application Period | online, USA |
| March 23–27 | ApacheCon Europe 2009 | Amsterdam, The Netherlands |
| March 24–26 | UKUUG Spring 2009 Conference | London, England |
| March 25–29 | PyCon 2009 | Chicago, IL, USA |
| March 27–29 | Free Software and Beyond The World of Peer Production | Manchester, UK |
| March 28 | Open Knowledge Conference 2009 | London, UK |
| March 31–April 2 | Solutions Linux France | Paris, France |
| March 31–April 3 | Web 2.0 Expo San Francisco | San Francisco, CA, USA |
| April 3–5 | PostgreSQL Conference: East 09 | Philadelphia, PA, USA |
| April 3–4 | Flourish Conference | Chicago, IL, USA |
| April 6–8 | CELF Embedded Linux Conference | San Francisco, CA, USA |
| April 6–7 | Linux Storage and Filesystem Workshop | San Francisco, CA, USA |
| April 8–10 | Linux Foundation Collaboration Summit | San Francisco, CA, USA |
| April 14 | OpenClinica European Summit | Brussels, Belgium |
| April 15 | Linuxwochen Österreich - Krems | Krems, Austria |
| April 16–17 | Nordic Perl Workshop 2009 | Oslo, Norway |
| April 16–19 | Linux Audio Conference 2009 | Parma, Italy |
| April 16–18 | Linuxwochen Austria - Wien | Wien, Austria |
| April 20–24 | samba eXPerience 2009 | Göttingen, Germany |
| April 20–23 | MySQL Conference and Expo | Santa Clara, CA, USA |
| April 20–24 | Perl Bootcamp at the Big Nerd Ranch | Atlanta, GA, USA |
| April 20–24 | Cloud Slam '09 | Online |
| April 22–25 | ACCU 2009 | Oxford, United Kingdom |
| April 23–26 | Liwoli 2009 | Linz, Austria |
| April 23 | Linuxwochen Austria - Linz | Linz, Austria |
| April 23–24 | European Licensing and Legal Workshop for Free Software | Amsterdam, The Netherlands |
| April 25–May 1 | Ruby & Ruby on Rails Bootcamp | Atlanta, Georgia, USA |
| April 25–26 | LinuxFest Northwest 2009 10th Anniversary | Bellingham, Washington, USA |
| April 25 | Linuxwochen Austria - Graz | Graz, Austria |
| April 25 | Festival Latinoamericano instalación de Software libre | All Latin America |
| April 25 | Grazer Linux Tage 2009 | Graz, Austria |
If your event does not appear here, please tell us about it.
Audio and Video programs
Business of Open Source videos from LCA available
FOSSBazaar is hosting videos of the Business of Open Source mini-conference at Linux.conf.au (LCA). "The goal of this mini-conf was to share and learn about non-coding business aspects of making an open source project successful. Speakers include Jacinta Richardson (Running an open source training business), Bdale Garbee (Collaborating Successfully with Large Corporations), Joe 'Zonker' Brockmeier (Marketing open source projects) and others."
FOSDEM videos released
Videos from the Debian devroom at FOSDEM are available (click below for more information). LinuxMagazine also has videos of the Micro Distro Summit. "Nils Magnus of Linux Magazine Online pulled together the heads of three Linux distros for an interview and put them in a video: openSUSE's Joe Brockmeier, Debian's Steve McIntyre and Red Hat's Max Spevack."
Page editor: Forrest Cook
