LWN.net Weekly Edition for July 23, 2009
The grumpy editor's e-book reader
Your editor recently "celebrated" yet another birthday; one asks "which birthday?" at the risk of making him grumpy indeed. During that celebration, a surprising present turned up, in the form of an Amazon Kindle book reader. That presents an opportunity to play with a new toy, something your editor is not known for turning down, even when the toy is as problematic as the Kindle. In the process, your editor turned up some free software which helps to make the device rather more useful.
The Kindle is a device which inspires mixed feelings. It is a nice piece of hardware, showing what can be done with electronic-ink displays attached to a Linux-based system. It is small, light, and able to store a long list of books. The display is nicely readable, even in strong outdoor light, though it is also somewhat slow to respond and changes pages with a distracting black flash. The built-in cellular modem makes the acquisition of books from almost anywhere easy; no computer is required. The keyboard is awful, but one does not normally expect to do a lot of typing on such a device.
What one does expect to do with it is to read. There is a great deal of written material on the net, much of it available under free (or, at least, "freely distributable") licenses. But your editor cannot be the only one who, despite being an avid reader, lacks enthusiasm for reading a novel from the computer screen after spending the working day staring at that same screen. Kindle-like devices offer a solution to this problem; they are portable and much easier on the eyes. They are still not as nice as a real book, but they are nice enough to make much of that online content more accessible. Electronic book readers are an interesting class of gadget, even for those of us who have no real interest in ditching words printed on dead trees.
Your editor's Kindle showed up just a couple of days after the widely-reported "memory hole" incident, in which Amazon deleted copies of two George Orwell books from the devices of customers who had "purchased" them. LWN has said little about this event for the simple reason that there is very little to add; the unbooking of 1984 must be shocking even to the most irony-challenged among us. But one could well add that Amazon has made it clear that running Linux does not necessarily make a device serve its owner. The Kindle is very much a closed, captive platform under the control of its vendor. As far as your editor can tell, nobody has yet claimed to have achieved root access on a Kindle 2 device.
Managing the device
![[Kindle]](https://static.lwn.net/images/ns/grumpy/kindle2-sm.jpg)
The Kindle has a USB port, and it shows up as a normal USB mass storage device. So gaining access to the library from a Linux machine is a straightforward thing to do. Among other things, this access can be used to back up one's books, perhaps giving a degree of permanence to a book collection which, otherwise, appears to be subject to a discouraging degree of control from outside. It would also be nice to be able to place new files there; that, too, is possible, with one little problem: the Kindle 2 lacks a PDF reader.
Now, one could come up with no end of choice words for whoever thought that an electronic book without PDF capability made any sense whatsoever. But it's better to do something about it. Your editor spent some time searching for answers before stumbling across an interesting program named calibre. This GPLv3-licensed, Python-implemented, multi-platform program aims to solve the PDF problem, and quite a bit more as well. It is, in fact, a general electronic book library manager with support for a number of reader devices.
Installing calibre was only mildly painful. It requires Python 2.6 and a fairly wide range of libraries; your editor's Fedora 11 machine (the time and courage to return to Rawhide have been lacking) does not have a calibre package, but all of the dependencies are available. The installation instructions are based on the idea that feeding text from a web site directly into a root shell is a good idea, but one can get around that. There is also the installation of a udev rule which sets the permissions for USB-connected Sony readers (for the Kindle, the program doesn't open the USB device directly).
Let it be said: calibre (version 0.5.14) needs a lot of work. The interface is strange and requires a certain amount of figuring out. If it can't communicate with the reader, there is no real information as to why (one hint: it expects that the device filesystem will be automatically mounted at plugin time, something your editor has disabled on his system). It will happily tell you that a given book is not available in the right format for the reader, but is short on information on how to get it into an appropriate format. One needs to be prepared to spend some time just messing with it. These gripes notwithstanding, calibre has the makings of a nice tool.
The main screen is dominated by a list of books in the library; clicking on the "reader" icon at the top yields a list of books on the device instead. One could imagine a useful combined listing mode which showed all books, with a concise indication of where they are to be found, but calibre does not do that. There's a pretty - but relatively useless - "browse by cover" mode. As with the Kindle itself, the book list is a single, flat listing, with no provision for organizing the books into hierarchies. This can only get painful as the list of books grows. Yes, this is 2009, we do everything with tags now, and calibre supports tags. Your editor would still like directories. Call them "bookshelves" if that fits the theme better. It would also be really nice if calibre could treat a group of files as all being part of a single book. No such luck.
The "view" operation can be used to read a book on the Linux system. It opens an internal reader for a number of formats; this reader seems to fail, silently, fairly often. For PDF files, calibre just starts evince, which works just fine.
The "send to device" button is, naturally enough, the way to move a specific file (or set of files) to the reader. There doesn't appear to be a "just keep the two in sync" option; books all must be loaded onto the reader explicitly. It would sure be nice if "send to device" would just convert the file into an appropriate format if need be, but that doesn't happen; it throws up a dialog saying that the transfer isn't possible. In other words, the user must explicitly perform a format conversion on (say) a PDF file before it can be sent to the Kindle. "Convert E-books" does that, providing a nice set of options on how the conversion is to be done. It works, but it should work automatically.
The conversion of PDF files into the "MOBI" format understood by the Kindle works reasonably well, but the books suffer somewhat in the translation. Paragraph breaks tend to vanish, page headers and footers get mixed into the text, and so on. The formatting of code samples loses little details like indentation. All told, the result isn't quite what it should be; it seems like it should be possible to do better.
One nice feature built into calibre is the "fetch news" operation. The program can go to the web sites of a large number of publications, download the current edition of whatever news is published there, and convert it into a format suitable for loading into the reader. If you leave calibre running, it can perform regular downloads, keeping the library populated with current newspaper and magazine editions. Needless to say, this feature is appealing when compared with the payment-required newspaper offerings from Amazon. On the other hand, as a web publisher, your editor does have a certain affection for the "payment required" mode of operation.
Also worthy of note is Savory, a repackaging of the calibre format-conversion code which runs on the Kindle itself. This tool lurks on the device as a daemon; whenever it sees a new PDF file, it goes off and converts it to the MOBI format automatically. Getting Savory to work can be a bit tricky, but, once the right incantations have been made, it works as advertised. The process is quite slow (the Kindle is not known for the data-crunching power of its CPU) and the end results are, not surprisingly, about the same as those obtained by using calibre directly - with one difference. Savory's conversion process actually makes two copies, one of which is a series of PNG images reflecting the real appearance of the source PDF file. The images have their own problems (they are not amenable to searching, for example), but they do look nicer. In summary, Savory is a nice enhancement, but it's in no way the same as having the device be able to just display PDF files natively.
Source
The Kindle is based on GPL-licensed software. Since Amazon is distributing this software with the device, it is required by the GPL to either include the source with the device or include a written offer to provide the source. Your editor read through all of the fine print, including absolute restrictions on modifying the device and discouraging language about the information that the device reports back to Amazon.
Big brother does, indeed, know what you are reading. But your editor could not find the written source offer. So, technically, it would appear that Amazon is in violation of the GPL. That said, Amazon has made the source available for each version of the operating software shipped with Kindle devices; one simply needs to know where to look for it.
That source distribution takes the form of a 140MB compressed tarball, which, in turn, contains 44 other compressed tarballs. Among other things, Amazon has tossed in the source for the 2.6.22.19 kernel, GCC, bootchart, powertop, and iptables. There is no separate patch to the kernel, but a quick diff shows significant changes, mostly in the form of the addition of support for the Freescale i.MX27 and i.MX31 processor architectures. The i.MX27 code is not currently upstream, even in the 2.6.31-rc kernels. Amazon has patched in drivers for a Freescale PATA controller, i2c controllers, the Kindle "five-way controller," the "Fiona" keyboard, a "magnetic sensor" device, a trackball device, a video output device, a "run time integrity checker" device, a number of electronic ink devices, and far more. There's also the yaffs2 filesystem and, bizarrely, a version of Andi Kleen's superseded unlocked_fasync patch. None of this code is upstream, and there appears to be little interest in getting it there.
Sadly, the Kindle is a closed device, so there is little point in trying to build and boot this code. That integrity checker device seems likely to get in the way. The unhackable nature of the device does not come as a surprise; that is how things tend to be done these days. But one can still wish that things were different. A user-modifiable Kindle would not just be more resistant to Orwellian monitoring and control; it could also be extended in ways that Amazon never dreamed of. Maybe it could even get a PDF reader. What a fun device that could be.
Fighting small bugs
Paper cuts, points of pain, obstacles and annoyances — whatever description you prefer, in the last six weeks, small bugs have started receiving closer attention from developers of the free software desktop. In Ubuntu, they are the focus of One Hundred Paper Cuts, and in Fedora of Fit and Finish, but in both cases, the goal is the same: to significantly improve the user experience by tackling bugs that can be quickly corrected. As an important side effect, these efforts are also allowing free software developers to approach usability in new ways.
Concentrated efforts at bug-squashing have a long history in free software development, so the idea of focusing on small bugs probably has multiple origins. However, one origin is the 0.8 release of GNOME Do in January 2009. The release fixed 111 bugs — over three times the number fixed in the 0.8.1 release — and received enthusiastic feedback from users.
When GNOME Do leader David Siegel joined the Design and User Experience team at Canonical a few weeks after the 0.8 Do release, he took the small bug meme with him.
The Design and User Experience team is the group within Canonical whose task is to realize Mark Shuttleworth's challenge of improving the Ubuntu desktop, and Siegel soon saw similarities to Do: "We started to notice all these small things that were in each release that were never getting fixed", Siegel said. "And I said to Ivanka Majic, who's the leader of the Design team, 'We need a way to red flag these things that are obviously wrong, and make sure they are fixed before they go out.'" Within a couple of months, Siegel found himself leading One Hundred Paper Cuts.
According to Red Hat employee Matthias Clasen, a similar situation exists in Fedora. "There is a challenge in working on the Fedora desktop between feeling squeezed to finish cool new features in time for the next release, and fighting to get Rawhide [Fedora's development branch] into a somewhat working state, with too little time to devote to fit and finish" — that is, to polishing and removing rough edges. Like Siegel, Clasen now finds himself at the head of an effort to provide that missing attention to detail.
Whether Fedora and Ubuntu influenced each other in these developments is uncertain. However, in both cases, the advantage of focusing on small bugs is obvious. As Siegel explained, "They're low-hanging fruit. They allow us to quickly, inexpensively improve the user experience. We don't have to create new interfaces; we just have to fix these tiny, trivial bugs. It's just a small component, but it's something that can have an immediate impact."
Same problem, different approaches
Despite the similarity of the problems, Ubuntu and Fedora have organized their solutions in somewhat different ways.
At Canonical, the One Hundred Paper Cuts project set a goal of addressing one hundred bugs during the development cycle of the upcoming "Karmic Koala" release, which is scheduled for October 2009. To narrow the focus enough to make it manageable, the project decreed that the bugs would center on what Siegel calls "the space between applications", or features such as the panel and the Nautilus file browser. Users were invited to report a paper cut bug via Ubuntu's Launchpad, and the initial one hundred paper cuts were chosen from the several thousand that were submitted.
For this first effort, the project chose (despite its name) to divide its efforts into ten rounds of eleven bugs each, ten in each round concerning Ubuntu's main GNOME desktop, and one concerning Kubuntu, the Ubuntu KDE variation. The project also defined what it would cover with extreme clarity. According to Siegel, a paper cut is a bug that users would encounter with a default installation from a Live CD. Although it would not actually prevent users from completing everyday tasks, it might cause momentary annoyance or distraction.
Theoretically, a paper cut must be correctable by one developer in one day. But, in practice, Siegel said, "We set a sort of gray area. For example, some of the paper cuts we've confirmed and want to fix for Karmic really take weeks to fix, but the work is going to take place anyway, and the paper cut is just a little bit of extra work beyond that."
By contrast, Fedora's Fit and Finish project chose a less formal approach. As Clasen explained, Fedora's Quality Assurance team was already in the habit of holding test days, in which participants prepared by downloading designated software and discussing what they found on IRC. Fit and Finish simply decided to hold its own test days during the development of Fedora 12, based on input submitted to the project's Bugzilla or by email. For each test day, instructions for participation are listed on a separate page, and both bugs and "things that work right" are summarized on the page after the discussion.
One difference from Ubuntu is that Fedora chose "to focus on user tasks, as opposed to individual features," according to Clasen. Nor did Fit and Finish limit the number of bugs to be covered in a single test day or to be fixed. However, Clasen did add, "I realize that numbers — like the points awarded by bugzilla.gnome.org — can be a strong motivation, so we may revisit this at some point."
Early results
To date, Fit and Finish has held just one test day, on the subject of display configuration, although pages for the "batteries and suspend" and "peripherals" topics are already posted. Clasen noted that this first day did not have "an overwhelming participation." He added, though, that the relatively low turnout was probably due to the fact that many desktop developers were at the Gran Canaria Desktop Summit that was in progress on the same day. A week later, the bugs arising from the testing day have been filed and assigned to a tester, but none have been closed.
In comparison, One Hundred Paper Cuts has had two rounds so far, and the bugs to tackle in each future round are already posted. Of the ten bugs in the first round, seven are now listed as fixed, one as having had a fix committed, and another two as "in progress," with only one listed as incomplete. In the second round, which finished on July 11, two are marked as fixed, another three have a fix committed but not yet applied, four are in progress, and two are listed as confirmed. The preliminary appearance is that One Hundred Paper Cuts is producing quicker results, possibly because its goals are better defined.
However, a disadvantage of the One Hundred Paper Cuts approach is the appearance it creates that some bugs are being given special treatment. For this reason, Siegel felt compelled to emphasize that "Paper cuts are just an additional level of importance being attached to bugs. A lot of people whose bugs get rejected for paper cut status will get angry and frustrated and say, 'Does this mean that the bug's not going to get fixed?' But it just means it's not going to be the focus. Many, many bugs will be fixed for Karmic; this set of one hundred is just getting a little extra push and a little extra attention."
As might be expected given Fit and Finish's narrower topic, the bugs it generated fall into recognizable categories. "Naturally, a lot of the bugs that we have collected on that day are X driver bugs, bugs in the GNOME modules that play a role in display configuration (the display capplet, gnome-settings-daemon, and libgnome-desktop)," said Clasen. "Another cluster of bugs has to do with applications that have some sort of 'presentation mode'" that requires a dual-monitor setup.
"In terms of their severity," Clasen added, "The issues ranged from minor UI annoyances (wrong colors, too large fonts) to feature improvements (make the gthumb slideshow mode display on a connected projector) to serious bugs (rotating a second monitor renders the screen at an offset)."
In comparison, bugs addressed by One Hundred Paper Cuts tend to be of low or medium severity and more diverse. Many, though, center on Nautilus and the names given to menu items, and the composition and behavior of the path bar and toolbar. Others, no doubt inspired by Canonical's efforts to improve notifications, center on system messages, some asking for messages that explain more clearly, and others for the removal or editing of unclear statements.
Usability testing at last
At this point, any evaluation of Fit and Finish or One Hundred Paper Cuts must be tentative. Both are scheduled to be evaluated after the development versions they are focused upon are officially released.
However, one early problem that has already emerged is that the upstream project — GNOME — does not give the small bugs the same priority that Ubuntu and Fedora are assigning them. "Our goal is to get them fixed on a weekly basis," Siegel says, "But an upstream bug will just sit there. They don't have the same sense of urgency. That's been just a little bit frustrating, because priorities are different."
Still, while the workflow in the two projects may be refined by internal review, talking to the organizers of these small bug projects, you get the impression that neither effort is going away. As Siegel described the situation, such projects represent "the biggest bang for your development buck." In other words, they produce quick results that are obvious to the average user.
Another advantage of these projects is that they encourage user participation in the development cycle. As a result of One Hundred Paper Cuts, "A lot of people are submitting their first time bugs," Siegel observes — an advantage that, after his experience with GNOME Do, he describes as "definitely calculated."
In much the same way, Clasen observed that "test days also serve as 'meet our user' days. While the audience is far from unbiased — most participants are certainly tech-savvy fedora-devel-list readers — having written use cases helps a lot when trying to shed the developer perspective." An important aspect of this exchange is that it encourages developers to look beyond their own sets of packages, as happened when X Window developer Adam Jackson fixed and enhanced GNOME's handling of monitors.
In the short term, too, such projects have the advantage of encouraging more people to try development releases — a goal that many projects often find elusive. Referring to Fedora's development release, Clasen says, "If Rawhide is more stable, more people will use it, broadening our tester base. And if we don't have to fight Rawhide breakage, we have more time to devote to the user experience issues identified by our test days."
Usability testing has always been difficult in free software, partly because people rarely meet face to face and partly because it requires a trained perspective to do well. But with projects like Fit and Finish and One Hundred Paper Cuts, free software may have just discovered its own method for approaching usability issues and for giving people a chance to learn by doing. In the end, this encouragement of usability testing might be as significant a result of the small bug meme as the improvements it brings to the desktop.
Community Leadership Summit 2009
More than 200 people gathered at San Jose's McEnery Convention Center on July 18 and 19 for the inaugural Community Leadership Summit (CLS). The event was primarily an unconference, with both days' programs assembled on the fly by the attendees themselves. The majority of those attendees identified themselves as participants in free software or open source communities, but a significant minority came from other realms — closed source companies interested in dealing with their customer communities, web services, and online communities unaffiliated with software altogether. Regardless of the field, of course, many of the issues are the same — from dealing with the tensions within online communities, to grappling with the technical challenges of communication overload, to avoiding burnout.
![[Session board]](https://static.lwn.net/images/cls2009-sessionboard_sm.jpg)
The CLS was organized in large part by Canonical's Jono Bacon, who explained in the Saturday morning plenary session that he wanted to have an event that dealt with community management issues above the "product" level — thus enabling participation by people from all Linux distributions, desktop environments, languages, and user or development groups. The unconference format facilitated that "open to all" principle: only the opening and closing slots of each day were pre-planned; participants filled out the rest of the schedule by announcing sessions that they themselves were interested in facilitating, then selecting an open time slot and meeting room on the large program board in the hallway. The sessions were round-table discussions, not presentations. In several instances, the person who proposed the session announced at the outset that he or she had no answers and was primarily interested in hearing the thoughts of the others in the room.
Common questions
Over the course of the two days, some recurring themes emerged in multiple sessions led by different facilitators. The mechanics of community management was one such subject; several sessions dealt explicitly with how leaders interacted with their communities: the tools of communication, tools for tracking issues, conflicts, and participation, and the metrics used to measure community health, growth, and participation. From the experiences of the attendees, it is clear that there are no clear-cut solutions in this area. Even in software itself, most community managers are re-using tools created for other purposes, such as bug tracking software and customer relationship management (CRM) systems, with mixed results.
The role of women in online communities was also central to several sessions, including the different communication styles exhibited by men and women and how to adapt to both in the community, and how to respond to conflict and gender bias (both explicit and perceived). The open source community has been addressing the inclusion of women more and more frequently in recent years, both in attracting more participants and in adjusting the male-dominated "engineer" culture that many see as a barrier to entry in open source. The discussion is ongoing, naturally, but one key benefit to addressing the topic at the CLS was the opportunity to learn from the experiences of other online communities, including those that are not majority-male.
Finally, several sessions dealt with the legal issues facing online communities, focusing on open source communities in particular. Specifics included trademark and branding issues, budgeting, fundraising and volunteer compensation, and the details of nonprofit tax exempt foundations. As widespread as open source software is, many groups — particularly smaller ones — still face the same issues and raise the same questions.
The reinvention problem
Danese Cooper of the Open Source Initiative (OSI) led an interesting discussion on the tricky task of reinventing an existing community. She drew from the OSI's recent experiences as it tries to redefine itself and its associated community, but pointed to several other examples as well, including when a community grows up naturally around a vendor product and then must change when the product changes hands or is itself redefined, as has happened with Java. In some cases, redefining the community is not a choice — such as the change that naturally occurs when a product changes or the redefinition following the forking of a large project — but it can also be a conscious decision taken in order to avert obsolescence or burnout.
OSI has learned some valuable lessons through its early attempts and false starts, Cooper said, including the fact that it is not enough to simply gather interested stakeholders together and expect a community to coalesce by virtue of shared values and interests. OSI's attempt to bootstrap a community by "starting with the membership" failed, she said, because of infighting and arguments. The organization has had more success growing a community naturally by starting actual projects, then allowing interested participants to join in the effort voluntarily.
Participants in the discussion related experiences with redefining communities like Java, LiveJournal, and the Open Web Foundation (OWF). The OSI's success with projects as a driving force was an indicator that projects are the "currency" of the OSI community, according to the discussion, but that different groups might find a different currency to be the solution to their own problem. Most agreed that the underlying problem was not one of attracting people, but of redefining the vision for the community that would attract the right people. Especially for communities that involve both companies and outside volunteers, agreeing upon that vision can be a difficult process.
![[Jono Bacon]](https://static.lwn.net/images/cls2009-bacon_sm.jpg)
That point led into a discussion on the issue of forking a community when participants disagree over vision or core values. Although, on the surface, forking an existing community sounded like a negative, the group decided that there were sufficiently many examples of positive community forks to conclude that it is sometimes a very good idea. One example was the Ubuntu Linux distribution. Debian is (and always has been) committed to building a completely free Linux distribution. Among others, Mark Shuttleworth felt that the Debian Project was limiting itself (taking too long between releases, etc.), but rather than attempting to change the way the project operated, he created Ubuntu as a derivative with different goals and a different message. The result has been a success for both distributions, whereas attempting to force the Debian Project to change would likely have ended in failure.
Open source communities in the developing world
Perhaps the most challenging session was Nnenna Nwakanma's look at developing open source in the developing world. Nwakanma is a member of the Free Software and Open Source Foundation for Africa (FOSSFA), which promotes open source software across the African continent. Both Nwakanma and Bruno Souza from Brazil spoke about the obstacles facing open source communities in developing countries and, in particular, the ways in which the traditional approaches that have proven successful in North America and Europe fail under the radically different circumstances found in other countries.
Some of the differences are well-known, such as the fact that governments are by far the largest IT spenders in developing countries. Other differences took attendees by surprise, such as the challenges in developing sustainable open source communities. Many of the strategies common in the West do not work as well — or at all — in Africa, Nwakanma said. For example, most open source projects and communities use the Internet as their default (if not their only) means of communicating, organizing, and working. In contrast, the vast majority of the people in Africa do not have Internet access at all, much less in the evening at home, so local user groups that meet in person are the premier way of spreading and educating about open source. African open source advocates also have significant trouble keeping developers active after they leave college, since so many people have difficulty simply finding paying jobs. FOSSFA has additional difficulties that result from targeting the entire continent of Africa; its initiatives must be equally accessible in all 53 African countries, or else risk fracturing the community along regional lines.
Despite the challenges, Nwakanma did have ideas for building the open source community in Africa, including targeting school-age children with open source education and clubs; she observed that proprietary software vendors relentlessly pursue lucrative government contracts, but are never interested in investing in school-age children as potential developers. The other attendees shared their ideas as well, such as linking local open source groups in Africa with established Linux user groups (LUGs) in the West in a "sister city"-style program, and publicizing opportunities for visitors to Africa to help local groups by volunteering to speak about open source.
More community leadership
At the closing session on Sunday, the vast majority of attendees said that CLS was valuable and were enthusiastic to see it return next year. Opinion was split about the time and place; this year's event was the weekend immediately preceding the massive O'Reilly Open Source Convention (OSCON), which was a plus for those already planning to attend OSCON but a minus for those who found OSCON too expensive. Bacon promised that CLS would reappear in some form next year, to further the discussions that so many found useful. Managing online communities is a topic that is growing in importance, which should guarantee the continued success of the event. But it is also critically important for the open source movement as a whole, because it depends on healthy and vibrant communities for its survival.
Security
Fun with NULL pointers, part 1
By now, most readers will be familiar with the local kernel exploit recently posted by Brad Spengler. This vulnerability, which affects the 2.6.30 kernel (and a test version of the RHEL5 "2.6.18" kernel), is interesting in a number of ways. This article will look in detail at how the exploit works and the surprising chain of failures which made it possible.
The TUN/TAP driver provides a virtual network device which performs packet tunneling; it's useful in a number of situations, including virtualization, virtual private networks, and more. In normal usage of the TUN driver, a program will open /dev/net/tun, then make an ioctl() call to set up the network endpoints. Herbert Xu recently noticed a problem where a lack of packet accounting could let a hostile application pin down large amounts of kernel memory and generally degrade system performance. His solution was a patch which adds a "pseudo-socket" to the device which can be used by the kernel's accounting mechanisms. Problem solved, but, as it turns out, at the cost of adding a more severe problem.
The TUN device supports the poll() system call. The beginning of the function implementing this functionality (in 2.6.30) looks like this:
static unsigned int tun_chr_poll(struct file *file, poll_table * wait)
{
        struct tun_file *tfile = file->private_data;
        struct tun_struct *tun = __tun_get(tfile);
        struct sock *sk = tun->sk;      /* line added by Herbert's patch */
        unsigned int mask = 0;

        if (!tun)
                return POLLERR;
The sk assignment marked with a comment above was added by Herbert's patch; that is where things begin to go wrong. Well-written kernel code takes care to avoid dereferencing pointers which might be NULL; in fact, this code checks the tun pointer for just that condition. And that's a good thing; it turns out that, if the configuring ioctl() call has not been made, tun will indeed be NULL. If all goes according to plan, tun_chr_poll() will return an error status in this case.
But Herbert's patch added a line which dereferences the pointer prior to the check. That, of course, is a bug. In the normal course of operations, the implications of this bug would be somewhat limited: it should cause a kernel oops if tun is NULL. That oops will kill the process which made the bad system call in the first place and put a scary traceback into the system log, but not much more than that should happen. It should be, at worst, a denial of service problem.
There is one little problem with that reasoning, though: NULL (zero) can actually be a valid pointer address. By default, the very bottom of the virtual address space (the "zero page," along with a few pages above it) is set to disallow all access as a way of catching null-pointer bugs (like the one described above) in both user and kernel space. But it is possible, using the mmap() system call, to put real memory at the bottom of the virtual address space. There are some valid use cases for this functionality, including running legacy binaries. Even so, most contemporary systems disable page-zero mappings through the use of the mmap_min_addr sysctl knob.
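To illustrate the mechanism being described (a minimal user-space sketch, not part of the exploit itself), a program can simply ask for an anonymous mapping at address zero; on a system with a nonzero mmap_min_addr, the request should fail:

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    /* Ask for one page at virtual address zero. With mmap_min_addr set
     * (the usual case), this fails with EPERM or EACCES; if it succeeds,
     * NULL becomes a perfectly valid, dereferenceable address. */
    void *page = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                      MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (page == MAP_FAILED) {
        perror("mmap of page zero");
        return 1;
    }
    printf("page zero mapped at %p\n", page);
    return 0;
}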
This knob should prevent a user-space program from mapping the zero page, and, thus, should ensure that null pointer dereferences cause a kernel oops. But, for unknown reasons, the mmap() code in the 2.6.30 kernel explicitly declines to enforce mmap_min_addr if the security module mechanism has been configured into the kernel. That job, instead, is left to the specific security module being used. Security module checks are supposed to be additive to the checks which are already made by the kernel, but it didn't work that way this time; with regard to page zero, security modules can grant access which would otherwise be denied. To complete the failure, Red Hat's default SELinux policy allows mapping the zero page. So, in this case, running SELinux actually decreased the security of the system.
Not that life is a whole lot better without SELinux. In the absence of SELinux, the exploit will run up against the mmap_min_addr limit, which would seem like enough to bring things to a halt. That particular difficulty can be circumvented, though, through the use of the personality() system call. Enabling the SVR4 personality causes a read-only page to be mapped at address zero when a program is invoked with exec(), but only if the process in question has the CAP_SYS_RAWIO capability. So one more trick is required: the top-level exploit code will set the SVR4 personality, then use exec() to run the pulseaudio server with a special plugin module. Pulseaudio is installed setuid root, so it will get the zero page mapped at invocation time. By the time the plugin code is called, pulseaudio has dropped its privileges, but, by then, the zero page will be available to the exploit code, which can make the page writeable and place its own data there.
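A minimal sketch of that first step might look like the following; the personality() call and PER_SVR4 are real, but the pulseaudio path is illustrative and the plugin arguments are omitted:

#include <sys/personality.h>
#include <unistd.h>

int main(void)
{
    /* PER_SVR4 includes the MMAP_PAGE_ZERO flag; after this exec, a
     * setuid-root binary (which has CAP_SYS_RAWIO at exec time) gets a
     * read-only page mapped at address zero. The real exploit passes
     * options so that pulseaudio loads a plugin of the attacker's
     * choosing once privileges have been dropped. */
    personality(PER_SVR4);
    execl("/usr/bin/pulseaudio", "pulseaudio", (char *)0);
    return 1;   /* reached only if the exec fails */
}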
As a result of all this, it is possible for a user-space process to map the zero page and prevent tun_chr_poll() from causing a kernel oops. But, one would think, that would not get an attacker very far, since that function checks tun against NULL as the very next thing it does. This is where the next interesting step in the chain of failures happens: the GCC compiler will, by default, optimize the NULL test out. The reasoning is that, since the pointer has already been dereferenced (and has not been changed), it cannot be NULL. So there is no point in checking it. Once again, this logic makes sense most of the time, but not in situations where NULL might actually be a valid pointer.
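The pattern is easy to reproduce outside of the kernel; in this sketch (hypothetical names, built with something like gcc -O2), the compiler is entitled to delete the NULL test entirely:

struct device_state {
    int flags;
};

int device_poll(struct device_state *dev)
{
    int flags = dev->flags;   /* dereference happens first */

    if (!dev)                 /* GCC may conclude this can never be true
                               * and remove the check altogether */
        return -1;

    return flags;
}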
So, an attacker is able to get into the body of tun_chr_poll() with a NULL tun pointer. One then needs to figure out how to get control of the kernel using this situation. The next step takes advantage of this code from a little further into tun_chr_poll():
        if (sock_writeable(sk) ||
            (!test_and_set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags) &&
             sock_writeable(sk)))
                mask |= POLLOUT | POLLWRNORM;
The value of sk, remember, came from the dereferencing of tun, so it's under the attacker's control. SOCK_ASYNC_NOSPACE is zero, so the test_and_set_bit() call can be used to unconditionally set the least significant bit of any word in memory. As kernel memory corruptions go, this is a small one, but it turns out to be enough. In Brad's exploit, sk->sk_socket->flags points into the TUN driver's file_operations structure; in particular, it points to the mmap() function. The TUN driver does not support mmap(), so that pointer is normally NULL; after the poll() call, that pointer is now one instead.
The final step in the exploit is to call mmap() on a file descriptor for the open TUN device. Since the internal mmap() operation is no longer NULL (it has been set to one), the kernel will jump to it. That address also lives within the zero page mapped by the exploit, so it is under the attacker's control. The exploit will have populated that address with another jump to its own code. So, when the kernel calls (what it thinks is) the TUN driver's mmap() function, the result is arbitrary code being run in kernel mode; at that point the exploit has total control.
In well-designed systems, catastrophic failures are rarely the result of a single failure. That is certainly the case here. Several things went wrong to make this exploit possible: security modules were able to grant access to low memory mappings contrary to system policy, the SELinux policy allowed those mappings, pulseaudio can be exploited to make a specific privileged operation available to exploit code, a NULL pointer was dereferenced before being checked, the check was optimized out by the compiler, and the code used the NULL pointer in a way which allowed the attacker to take over the system. It is a long chain of failures, each of which was necessary to make this exploit possible.
This particular vulnerability has been closed, but there will almost certainly be others like it. See the second article in this series for a look at how the kernel developers are responding to this exploit.
Brief items
JITter Bug (Linux Journal)
Linux Journal looks into a security problem with Mozilla's just-in-time compiler. "Two weeks ago, Mozilla was celebrating the triumphant release of the much-delayed Firefox 3.5. The browser brings its users a pantheon of new features, with perhaps the most celebrated being the TraceMonkey JavaScript engine, said to provide speed enhancements twice as fast as Firefox 3.0 and up to ten times that of Firefox 2.0. One element of the acclaimed performance booster is giving its developers something of a headache this week, however. The first zero-day exploit for Firefox 3.5 was revealed publicly on Monday, in the form of a vulnerability in the browser's Just-in-time compiler."
New vulnerabilities
compat-wxGTK26: arbitrary code execution
Package(s): compat-wxGTK26
CVE #(s): CVE-2009-2369
Created: July 20, 2009
Updated: September 3, 2010
Description: An integer overflow in the wxImage::Create() function (through version 2.8.10) allows for a denial of service attack and possible arbitrary code execution.
fckeditor: missing input sanitizing
Package(s): fckeditor
CVE #(s): CVE-2009-2265
Created: July 17, 2009
Updated: July 22, 2009
Description: From the Debian advisory: Vinny Guido discovered that multiple input sanitizing vulnerabilities in Fckeditor, a rich text web editor component, may lead to the execution of arbitrary code.
firefox and related: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2009-2462 CVE-2009-2463 CVE-2009-2464 CVE-2009-2465 CVE-2009-2466 CVE-2009-2467 CVE-2009-2469 CVE-2009-2471 CVE-2009-2472
Created: July 22, 2009
Updated: June 14, 2010
Description: The firefox 3.0.12 release fixes several significant security issues. Related packages (seamonkey, xulrunner, ...) are being released with fixes for the same problems.
mediawiki: cross-site scripting
Package(s): mediawiki
CVE #(s):
Created: July 20, 2009
Updated: July 22, 2009
Description: The mediawiki 1.15.1 and 1.14.1 releases contain fixes for a cross-site scripting vulnerability introduced in 1.15.0 and 1.14.0.
nagios: mysterious vulnerability
Package(s): nagios
CVE #(s): CVE-2008-6373
Created: July 20, 2009
Updated: July 22, 2009
Description: From the Gentoo advisory: An unspecified vulnerability in Nagios related to CGI programs, "adaptive external commands," and "writing newlines and submitting service comments" has been reported.
perl-IO-Socket-SSL: site spoofing
Package(s): perl-IO-Socket-SSL
CVE #(s):
Created: July 20, 2009
Updated: July 22, 2009
Description: The perl-IO-Socket-SSL library only checks the prefix of hostnames when performing certificate matching, making site-spoofing attacks possible.
pulseaudio: privilege escalation
Package(s): pulseaudio
CVE #(s): CVE-2009-1894
Created: July 16, 2009
Updated: July 28, 2009
Description: PulseAudio has a local privilege escalation vulnerability. From the Gentoo alert: Tavis Ormandy and Julien Tinnes of the Google Security Team discovered that the pulseaudio binary is installed setuid root, and does not drop privileges before re-executing itself. The vulnerability has independently been reported to oCERT by Yorick Koster. A local user who has write access to any directory on the file system containing /usr/bin can exploit this vulnerability using a race condition to execute arbitrary code with root privileges.
ruby: certificate spoofing
Package(s): ruby
CVE #(s): CVE-2009-0642
Created: July 20, 2009
Updated: December 8, 2009
Description: The ruby library does not properly validate X.509 certificates, enabling an attacker to use expired or invalid certificates.
wordpress: file inclusion and information disclosure
Package(s): wordpress
CVE #(s): CVE-2009-2334 CVE-2009-2335 CVE-2009-2336
Created: July 20, 2009
Updated: January 28, 2010
Description: The wordpress system suffers from vulnerabilities which can allow an attacker to include (and execute) arbitrary local files and to enumerate valid user names. The 2.8.1 release contains the fixes.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel remains 2.6.31-rc3; no kernel prepatches have been released in the last week. Patches continue to flow into the mainline repository, though, and the -rc4 release may be out by the time you read this.
The current stable kernel is 2.6.30.2, released (along with 2.6.27.27) on July 19. Both updates contain a number of security-relevant fixes, including some inspired by the recent NULL-pointer exploit. Note that users have reported boot-time problems with 2.6.27.27; it seems that this kernel ran afoul of a GCC 4.2 bug which causes it to be miscompiled.
2.4.37.3, the first 2.4 update in some time, was released on July 19. This release, too, was motivated by the NULL pointer exploit; it also fixes some serious problems with the r8169 network driver.
Kernel development news
Quotes of the week
In brief
Hyper-V. Very few kernel submissions draw as much attention as Microsoft's contribution of its Hyper-V drivers to the staging tree. The drivers enable the virtualization of Linux under Windows, a feature which some find useful. In general, reactions included surprise and concern, and at least one prediction of immediate and utter doom. Much of the development community, though, treated it like just another patch submission. The quality of the code is not held to be great, but fixing such things up is what the staging tree is for.
For the curious, a little bit of history behind this submission can be found in this weblog posting by Stephen Hemminger.
VFAT. Andrew "Tridge" Tridgell is back with a new set of VFAT patches aimed at working around the patents being asserted against that filesystem. He has made progress in addressing the interoperability problems reported by testers, though a few small issues remain. As always, he's looking for testers who can identify any remaining problems with the patch.
Checkpoint/restart. Oren Laadan has posted a new set of checkpoint/restart patches which, he says, is "already suitable for many types of batch jobs." The patch adds a new clone_with_pids() system call allowing restored processes to be created with the same process ID they had at checkpoint time; it's not clear whether the security concerns with that capability have been addressed or not. There are still plenty of open issues with checkpoint/restart, including pending signals, FIFO devices, pseudo terminals, and more. It's a messy problem to try to solve, but this patch set seems to be getting closer. There are instructions in the patch for those who would like to experiment with it.
Flexible arrays. Kernel developers often find themselves needing to allocate multi-page chunks of contiguous memory. Typically such allocations are done with vmalloc(), but that solution is not ideal. The address space for vmalloc() allocations is restricted (on 32-bit systems, at least), and these allocations are rather less efficient than normal kernel memory allocations.
Responding to a request from Andrew Morton, Dave Hansen has proposed the addition of a flexible array API to the kernel. Flexible arrays would handle large allocations, but, under the hood, they use single-page chunks which can be allocated in a normal (and reliable) fashion. In brief, a flexible array is created with:
struct flex_array *flex_array_alloc(int element_size, int total, gfp_t flags);
Once the array is created, data can be moved into and out of it with:
int flex_array_put(struct flex_array *fa, int element_nr, void *src, gfp_t flags);
void *flex_array_get(struct flex_array *fa, int element_nr);
There are a number of other functions for freeing parts of an array, preallocating memory, etc.; see the patch posting for the full API.
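Going only by the prototypes shown above, usage might look something like the following sketch; the error-return conventions are assumptions about the proposed API rather than settled facts:

struct foo {
        int a, b;
};

static int stash_foo(struct foo *item, int nr)
{
        struct flex_array *fa;
        struct foo *copy;

        /* 1000 elements, each sizeof(struct foo) bytes; the array itself
         * is backed by individually-allocated single pages. */
        fa = flex_array_alloc(sizeof(struct foo), 1000, GFP_KERNEL);
        if (!fa)
                return -ENOMEM;

        /* Copies *item into element nr, allocating the backing page for
         * that part of the array if need be. */
        if (flex_array_put(fa, nr, item, GFP_KERNEL))
                return -ENOMEM;

        copy = flex_array_get(fa, nr);
        return copy ? 0 : -EINVAL;
}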
Coarse clocks. Some applications want to get access to the system time as quickly as possible, but they are not concerned about obtaining absolute accuracy. To fill this need, John Stultz has proposed a couple of new clock types: CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE. In essence, these clocks work by returning the system's latest idea of the current time without actually asking any hardware. The idea was reasonably well received, with one concern: developers would hate to see this feature become one more obstacle to removing the periodic clock tick (and jiffies) in the future. This removal is far from imminent - there's a lot of work to be done first - but it remains desirable for a number of reasons.
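Assuming the new clock IDs are exposed through the normal clock_gettime() interface (a reasonable guess, but still a guess at this stage), an application would use them like this:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* CLOCK_REALTIME_COARSE returns the kernel's cached notion of the
     * current time; cheaper than CLOCK_REALTIME, but only as accurate
     * as the last timer tick. */
    if (clock_gettime(CLOCK_REALTIME_COARSE, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("coarse time: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}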
Fun with NULL pointers, part 2
Fun with NULL pointers, part 1 took a detailed look at the long chain of failures which allowed the kernel to be compromised by way of a NULL pointer dereference. Eliminating that particular bug was a straightforward fix; it was, in fact, fixed before the nature of the vulnerability was widely understood. The importance of this particular problem is, in one sense, relatively small; there are very few distributions which shipped vulnerable versions of the kernel. But this exploit suggests that there could be a whole class of related problems in the kernel; there is a definite chance that similar vulnerabilities could be discovered - if, indeed, they have not already been found.
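For reference, the fix amounts to nothing more than doing the check before the dereference; a minimal sketch of the corrected beginning of tun_chr_poll() (not necessarily the exact patch that was merged) looks like this:

        struct tun_file *tfile = file->private_data;
        struct tun_struct *tun = __tun_get(tfile);
        struct sock *sk;
        unsigned int mask = 0;

        if (!tun)
                return POLLERR;
        sk = tun->sk;           /* dereference only after the check */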
One obvious problem is that, when the security module mechanism is configured into the kernel, security modules are allowed to override the administrator-specified limit (mmap_min_addr) on the lowest valid user-space address. This behavior is a violation of the understanding by which security modules operate: they are supposed to be able to restrict privileges, but never increase them. In this case, the mere presence of SELinux increased privilege, and the policy enforced by most SELinux deployments failed to close that hole (comments in the exploit code suggest that AppArmor fared no better).
Additionally, with security modules configured out entirely, mmap_min_addr was not enforced at all. The mainline now has a patch which causes the mmap_min_addr sysctl knob to always be in effect; this patch has also been put into the 2.6.27.27 and 2.6.30.2 updates (as have many of the others described here).
Things are also being fixed at the SELinux level. Future versions of Red Hat's SELinux policy will no longer allow unconfined (but otherwise unprivileged) processes to map pages into the bottom of the address space. There are still some open problems, though, especially when programs like WINE are thrown into the mix. It's not yet clear how the system can securely support a small number of programs needing the ability to map the zero page. Ideas like running WINE with root privilege - thus, perhaps, carrying Windows-like behavior a little too far - have garnered little enthusiasm.
There is another way around mmap_min_addr which also must be addressed: a privileged process which is run under the SVR4 personality will, at exec() time, have a read-only page mapped at the zero address. Evidently some old SVR4 programs expect that page to be there, but its presence helps to make null-pointer exploits possible. So another patch merged into mainline and the stable updates resets the SVR4 personality (or, at least, the part that maps the zero page) whenever a setuid program is run. This patch is enough to defeat the pulseaudio-based trick which was used to gain access to a zero-mapped page.
This change is not enough for some users, who have requested the ability to turn off the personality feature altogether. The ability to run binaries from 386-based Unix systems just lacks the importance it had in, say, 1995, so some question whether the personality feature makes any sense given its costs. Linus answered that the feature still has legitimate uses.
In particular, it seems that the ability to disable address-space randomization (which is a personality feature) is useful in a number of situations. So personality() is likely to stay, but its zero-page mapping feature might go away.
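The address-space-randomization case looks roughly like the following sketch: a small wrapper that turns off randomization for a child process (the sort of thing debuggers and the setarch utility do), with no zero-page mapping involved:

#include <sys/personality.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    /* Keep the current personality, but add the flag which disables
     * address-space layout randomization for this process and its
     * children. */
    personality(personality(0xffffffff) | ADDR_NO_RANDOMIZE);

    execvp(argv[1], argv + 1);
    perror("execvp");
    return 1;
}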
Yet another link in the chain of failure is the removal of the null-pointer check by the compiler. This check would have stopped the attack, but GCC optimized it out on the theory that the pointer could not (by virtue of already having been dereferenced) be NULL. GCC (naturally) has a flag which disables that particular optimization; so, from now on, kernels will, by default, be compiled with the -fno-delete-null-pointer-checks flag. Given that NULL might truly be a valid pointer value in the kernel, it probably makes sense to disable this particular optimization indefinitely.
One could well argue, though, that while all of the above changes are good, they also partly miss the point: a quality kernel would not be dereferencing NULL pointers in the first place. It's those dereferences which are the real bug, so they should really be the place where the problem is fixed. There is some interesting history here, though, in that kernel developers have often been advised to omit checks for NULL pointers. In particular, code like:
        BUG_ON(some_pointer == NULL);
        /* dereference some_pointer */
has often seen the BUG_ON() line removed with a comment to the effect that the check is unnecessary, since dereferencing the pointer will oops anyway.
This reasoning is based on the idea that dereferencing a NULL pointer will cause a kernel oops. On its face, it makes sense: if the hardware will detect a NULL-pointer dereference, there is little point in adding the overhead of a software check too. But that reasoning is demonstrably faulty, as shown by this exploit. There are even legitimate reasons for mapping page zero, so it will never be true that a NULL pointer is necessarily invalid. One assumes that the relevant developers understand this now, but there may be a lot of places in the kernel where necessary pointer checks were removed from the code.
Most of the NULL pointer problems in the kernel are probably just oversights, though. Most of those, in turn, are not exploitable; if there is no way to cause the kernel to actually encounter a NULL pointer in the relevant code, the lack of a check does not change anything. Still, it would be nice to fix all of those up.
One way of finding these problems may be the Smatch static analysis tool. Smatch went quiet for some years, but it appears that Dan Carpenter is working on it again; he recently posted a NULL pointer bug that Smatch found for him. If Smatch could be turned into a general-purpose tool that could find this sort of problem, the result should be a more secure kernel. It is unfortunate that checkers like this do not seem to attract very many interested developers; free software is very much behind the state of the art in this area and it hurts us.
Another approach is being taken by Julia Lawall, who has put together a Coccinelle "semantic patch" to find and fix check-after-dereference bugs like the one found in the TUN driver. A series of patches (example) has been posted to fix a number of these bugs. Cases where a pointer is checked after the first dereference are probably a small subset of all the NULL pointer problems in the kernel, but each one indicates a situation where the programmer thought that a NULL pointer was possible and problematic. So they are all certainly worth fixing.
All told, the posting of this exploit has served as a sort of wakeup call for the kernel community; it will, with luck, result in the cleaning up of a lot of code and the closing of a number of security problems. Brad Spengler, the author of the exploit, is clearly hoping for a little more, though: he has often expressed concerns that serious kernel security bugs are silently fixed or dismissed as being denial-of-service problems at worst. Whether that will change remains to be seen; in the kernel environment, many bugs can have security implications which are not immediately obvious when the bug is fixed. So we may not see more bugs explicitly advertised as security issues, but, with luck, we will see more bugs fixed.
A short history of btrfs
You probably have heard of the cool new kid on the file system block, btrfs (pronounced "butter-eff-ess") - after all, Linus Torvalds is using it as his root file system on one of his laptops. But you might not know much about it beyond a few high-level keywords - copy-on-write, checksums, writable snapshots - and a few sensational rumors and stories - the Phoronix benchmarks, btrfs is a ZFS ripoff, btrfs is a secret plan for Oracle domination of Linux, etc. When it comes to file systems, it's hard to tell truth from rumor from vile slander: the code is so complex, the personalities are so exaggerated, and the users are so angry when they lose their data. You can't even settle things with a battle of the benchmarks: file system workloads vary so wildly that you can make a plausible argument for why any benchmark is either totally irrelevant or crucially important.
In this article, we'll take a behind-the-scenes look at the design and development of btrfs on many levels - technical, political, personal - and trace it from its origins at a workshop to its current position as Linus's root file system. Knowing the background and motivation for each step will help you understand why btrfs was started, how it works, and where it's going in the future. By the end, you should be able to hand-wave your way through a description of btrfs's on-disk format.
Disclaimer: I have two huge disclaimers to make: One, I worked on ZFS for several years while at Sun. Two, I have already been subpoenaed and deposed for the various Sun/NetApp patent lawsuits and I'd like to avoid giving them any excuse to subpoena me again. I'll do my best to be fair, honest, and scrupulously correct.
btrfs: Pre-history
Imagine you are a Linux file system developer. It's 2007, and you are at the Linux Storage and File systems workshop. Things are looking dim for Linux file systems: Reiserfs, plagued with quality issues and an unsustainable funding model, has just lost all credibility with the arrest of Hans Reiser a few months ago. ext4 is still in development; in fact, it isn't even called ext4 yet. Fundamentally, ext4 is just a straightforward extension of a 30-year-old format and is light-years behind the competition in terms of features. At the same time, companies are clamping down on funding for Linux development; IBM's Linux division is coming to the end of its grace period and needs to show profitability now. Other companies are catching wind of an upcoming recession and are cutting research across the board. They want projects with time to results measured in months, not years.
Ever hopeful, the file systems developers are meeting anyway. Since the workshop is co-located with USENIX FAST '07, several researchers from academia and industry are presenting their ideas to the workshop. One of them is Ohad Rodeh. He's invented a kind of btree that is copy-on-write (COW) friendly [PDF]. To start with, btrees in their native form are wildly incompatible with COW. The leaves of the tree are linked together, so when the location of one leaf changes (via a write - which implies a copy to a new block), the link in the adjacent leaf changes, which triggers another copy-on-write and location change, which changes the link in the next leaf... The result is that the entire btree, from top to bottom, has to be rewritten every time one leaf is changed.
Rodeh's btrees are different: first, he got rid of the links between leaves of the tree - which also "throws out a lot of the existing b-tree literature", as he says in his slides [PDF] - but keeps enough btree traits to be useful. (This is a fairly standard form of btrees in file systems, sometimes called "B+trees".) He added some algorithms for traversing the btree that take advantage of reference counts to limit the amount of the tree that has to be traversed when deleting a snapshot, as well as a few other things, like proactive split and merge of interior nodes so that inserts and deletes don't require any backtracking. The result is a simple, robust, generic data structure which very efficiently tracks extents (groups of contiguous data blocks) in a COW file system. Rodeh successfully prototyped the system some years ago, but he's done with that area of research and just wants someone to take his COW-friendly btrees and put them to good use.
btrfs: The beginning
Chris Mason took these COW-friendly btrees and ran with them. Back in the day, Chris worked on Reiserfs, where he learned a lot about what to do and what not to do in a file system. Reiserfs had some cool features - small file packing, btrees for fast lookup, flexible layout - but the implementation tended to be haphazard and ad hoc. Code paths proliferated wildly, and along with them potential bugs.
Chris had an insight: What if everything in the file system - inodes, file data, directory entries, bitmaps, the works - was an item in a copy-on-write btree? All reads and writes to storage would go through the same code path, one that packed the items into btree nodes and leaves without knowing or caring about the item type. Then you only have to write the code once and you get checksums, reference counting (for snapshots), compression, fragmentation, etc., for anything in the file system.
Chris came up with the following basic structure for btrfs ("btrfs" comes from "btree file system"). Btrfs consists of three types of on-disk structures: block headers, keys, and items, currently defined as follows:
    struct btrfs_header {
        u8 csum[32];
        u8 fsid[16];
        __le64 blocknr;
        __le64 flags;
        u8 chunk_tree_uid[16];
        __le64 generation;
        __le64 owner;
        __le32 nritems;
        u8 level;
    }

    struct btrfs_disk_key {
        __le64 objectid;
        u8 type;
        __le64 offset;
    }

    struct btrfs_item {
        struct btrfs_disk_key key;
        __le32 offset;
        __le32 size;
    }
Inside the btree (that is, the "branches" of the tree, as opposed to the leaves at the bottom of the tree), nodes consist only of keys and block headers. The keys tell you where to go looking for the item you want, and the block headers tell you where the next node or leaf in the btree is located on disk.
The leaves of the btree contain items, which are a combination of keys and data. Similarly to reiserfs, the items and data are packed in an extremely space-efficient way: the item headers (that is, the item structure described above) are packed together starting at the beginning of the block, and the data associated with each item is packed together starting at the end of the block. So item headers and data grow towards each other, as shown in the diagram to the right.
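As a rough sketch of that layout - the structure and field names here are simplified stand-ins, not the actual btrfs accessors - locating an item's data within a leaf might look something like this:

    #include <stdint.h>

    /* Hypothetical, simplified leaf layout: item headers are packed from
     * the front of the block, and each header's 'offset' field points at
     * that item's data, which is packed from the back of the block.  Free
     * space is whatever remains in the middle. */
    struct disk_key { uint64_t objectid; uint8_t type; uint64_t offset; };
    struct item     { struct disk_key key; uint32_t offset; uint32_t size; };

    struct leaf {
        uint32_t nritems;          /* number of items stored in this leaf */
        unsigned char data[4096];  /* headers grow up, item data grows down */
    };

    /* Return a pointer to the data belonging to item number 'nr'. */
    static void *item_data(struct leaf *leaf, uint32_t nr)
    {
        const struct item *items = (const struct item *)leaf->data;
        return leaf->data + items[nr].offset;
    }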
Besides being code efficient, this scheme is space and time efficient as well. Normally, file systems put only one kind of data - bitmaps, or inodes, or directory entries - in any given file system block. This wastes disk space, since unused space in one kind of block can't be used for any other purpose, and it wastes time, since getting to one particular piece of file data requires reading several different kinds of metadata, all located in different blocks in the file system. In btrfs, items are packed together (or pushed out to leaves) in arrangements that optimize both access time and disk space. You can see the difference in these (very schematic, very simplified) diagrams. Old-school filesystems tend to organize data like this:
Btrfs, instead, creates a disk layout which looks more like:
In both diagrams, red blocks denote wasted disk space and red arrows denote seeks.
Each kind of metadata and data in the file system - a directory entry, an inode, an extended attribute, file data itself - is stored as a particular type of item. If we go back to the definition of an item, we see that its first element is a key:
    struct btrfs_disk_key {
        __le64 objectid;
        u8 type;
        __le64 offset;
    }
Let's start with the objectid field. Each object in the file system - generally an inode - has a unique objectid. This is fairly standard practice - it's the equivalent of inode numbers. What makes btrfs interesting is that the objectid makes up the most significant bits of the item key - what we use to look up an item in the btree - and the lower bits are different kinds of items related to that objectid. This results in grouping together all the information associated with a particular objectid. If you allocate adjacent objectids, then all the items from those objectids are also allocated close together. The <objectid, type> pair automatically groups related data close to each other regardless of the actual content of the data, as opposed to the classical file system approach, which writes separate optimized allocators for each kind of file system data.
The type field tells you what kind of data is stored in the item. Is it the inode? Is it a directory entry? Is it an extent telling you where the file data is on disk? Is it the file data itself? With the combination of objectid and type, you can look up any file system data you need in the btree.
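To make that ordering concrete, here is a minimal sketch - not the kernel's actual comparison routine - of how <objectid, type, offset> keys might be compared when searching the btree, with the objectid as the most significant part:

    #include <stdint.h>

    struct disk_key { uint64_t objectid; uint8_t type; uint64_t offset; };

    /* Hypothetical comparison: objectid first, then type, then offset, so
     * all of the items belonging to one object sort together, grouped by
     * item type within that object. */
    static int compare_keys(const struct disk_key *a, const struct disk_key *b)
    {
        if (a->objectid != b->objectid)
            return a->objectid < b->objectid ? -1 : 1;
        if (a->type != b->type)
            return a->type < b->type ? -1 : 1;
        if (a->offset != b->offset)
            return a->offset < b->offset ? -1 : 1;
        return 0;
    }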
We should take a quick look at the structure of the btree nodes and leaves themselves. Each node and leaf is an extent in the btree - nodes are extents full of <key, block header> pairs, and leaves contain items. Large file data is stored outside of the btree leaves, with the item describing the extent kept in the leaf itself. (What constitutes a "large" file is tunable based on the workload.) Each extent describing part of the btree has a checksum and a reference count, which permits writable snapshots. Each extent also includes an explicit back reference to each of the extents that refer to it.
Back references give btrfs a major advantage over every other file system in its class because now we can quickly and efficiently migrate data, incrementally check and repair the file system, and check the correctness of reference counts during normal operation. The proof is that btrfs already supports fast, efficient device removal and shrinking of the available storage for a file system. Many other file systems list "shrink file system" as a feature, but it usually ends up implemented inefficiently and slowly and several years late - or not at all. For example, ext3/4 can shrink a file system - by traversing the entire file system searching for data located in the area of the device being removed. It's a slow, fraught, bug-prone process. ZFS still can't shrink a file system.
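As a purely illustrative sketch - these are invented names, not the btrfs on-disk extent format - an extent record of the kind described above might carry something like the following:

    #include <stdint.h>

    /* Hypothetical extent record: a checksum and a reference count for
     * the extent, plus explicit back references naming everything that
     * points at it.  The back references are what make incremental
     * checking, repair, and data migration cheap. */
    struct back_ref {
        uint64_t referrer;    /* objectid of the tree or file using this extent */
        uint64_t offset;      /* where within the referrer it is used */
    };

    struct extent_record {
        uint64_t start;             /* first block of the extent */
        uint64_t length;            /* number of contiguous blocks */
        uint32_t refs;              /* reference count; snapshots share extents */
        uint8_t  csum[32];          /* checksum over the extent's contents */
        uint32_t nr_backrefs;
        struct back_ref *backrefs;  /* one entry per referring extent */
    };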
The result is beautifully generic and elegant: everything on disk is a btree containing reference-counted, checksummed extents of items, organized by <objectid, type> keys. A great deal of the btrfs code doesn't care at all what is stored in the items; it just knows how to add or remove them from the btree. Optimizing disk layout is simple: allocate things with similar keys close together.
btrfs: The politics
At the same time that Chris was figuring out the technical design of btrfs, he was also figuring out how to fund the development of btrfs in both the short and the long term. Chris had recently moved from SUSE to a special Linux group at Oracle, one that employs several high-level Linux storage developers, including Martin K. Petersen, Zach Brown, and Jens Axboe. Oracle funds a lot of Linux development, some of it obviously connected to the Oracle database (OCFS2, DIF/DIX), and some of it less so (generic block layer work, syslets). Here's how Chris put it in a recent interview with Amanda McPherson from the Linux Foundation:
Amanda: Why did you start this project? Why is Oracle supporting this project so prominently?
Chris: I started Btrfs soon after joining Oracle. I had a unique opportunity to take a detailed look at the features missing from Linux, and felt that Btrfs was the best way to solve them.
Linux is a very important platform for Oracle. We use it heavily for our internal operations, and it has a broad customer base for us. We want to keep Linux strong as a data center operating system, and innovating in storage is a natural way for Oracle to contribute.
In other words, Oracle likes having Linux as a platform, and is willing to invest development effort in it even if it's not directly related to Oracle database performance. Look at it this way: how many operating systems are written and funded in large part by your competitors? While it is tempting to have an operating system entirely under your control - like Solaris - it also means that you have to pay for most of the development on that platform. In the end, Oracle believes it is in its own interest to use its in-house expertise to help keep Linux strong.
After a few months of hacking and design discussions with Zach Brown and many others, Chris posted btrfs for review. From there on out, you can trace the history of btrfs like any other open source project through the mailing lists and source code history. Btrfs is now in the mainline kernel and developers from Red Hat, SUSE, Intel, IBM, HP, Fujitsu, etc. are all working on it. Btrfs is a true open source project - not just in the license, but also in the community.
btrfs: A brief comparison with ZFS
People often ask about the relationship between btrfs and ZFS. From one point of view, the two file systems are very similar: they are copy-on-write checksummed file systems with multi-device support and writable snapshots. From other points of view, they are wildly different: file system architecture, development model, maturity, license, and host operating system, among other things. Rather than answer individual questions, I'll give a short history of ZFS development and compare and contrast btrfs and ZFS on a few key items.
When ZFS first got started, the outlook for file systems in Solaris was rather dim as well. Logging UFS was already nearing the end of its rope in terms of file system size and performance. UFS was so far behind that many Solaris customers paid substantial sums of money to Veritas to run VxFS instead. Solaris needed a new file system, and it needed it soon.
Jeff Bonwick decided to solve the problem and started the ZFS project inside Sun. His organizing metaphor was that of the virtual memory subsystem - why can't disk be as easy to administer and use as memory? The central on-disk data structure was the slab - a chunk of disk divided up into the same size blocks, like that in the SLAB kernel memory allocator, which he also created. Instead of extents, ZFS would use one block pointer per block, but each object would use a different block size - e.g., 512 bytes, or 128KB - depending on the size of the object. Block addresses would be translated through a virtual-memory-like mechanism, so that blocks could be relocated without the knowledge of upper layers. All file system data and metadata would be kept in objects. And all changes to the file system would be described in terms of changes to objects, which would be written in a copy-on-write fashion.
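As a very rough illustration of that design - these structures are invented for this description, not the real ZFS on-disk format - the idea is one block pointer per block, with the block size chosen per object:

    #include <stdint.h>

    /* Illustrative only: each object picks a single block size (from 512
     * bytes up to 128KB) and keeps one block pointer per block.  The
     * addresses are "virtual" in the sense that a translation layer can
     * relocate blocks without the upper layers noticing. */
    struct block_ptr {
        uint64_t virtual_addr;   /* translated to a device offset elsewhere */
        uint64_t checksum;
    };

    struct object {
        uint32_t block_size;     /* per-object block size, e.g. 512 or 131072 */
        uint64_t nblocks;        /* number of blocks, and of block pointers */
        struct block_ptr *bp;    /* one pointer per block */
    };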
In summary, btrfs organizes everything on disk into a btree of extents containing items and data. ZFS organizes everything on disk into a tree of block pointers, with different block sizes depending on the object size. Btrfs checksums and reference-counts extents, while ZFS checksums and reference-counts variable-sized blocks. Both file systems write out changes to disk using copy-on-write - extents or blocks in use are never overwritten in place; they are always copied somewhere else first.
So, while the feature list of the two file systems looks quite similar, the implementations are completely different. It's a bit like convergent evolution between marsupials and placental mammals - a marsupial mouse and a placental mouse look nearly identical on the outside, but their internal implementations are quite a bit different!
In my opinion, the basic architecture of btrfs is more suitable to storage than that of ZFS. One of the major problems with the ZFS approach - "slabs" of blocks of a particular size - is fragmentation. Each object can contain blocks of only one size, and each slab can only contain blocks of one size. You can easily end up with, for example, a file of 64K blocks that needs to grow one more block, but no 64K blocks are available, even if the file system is full of nearly empty slabs of 512-byte blocks, 4K blocks, 128K blocks, etc. To solve this problem, we (the ZFS developers) invented ways to create big blocks out of little blocks ("gang blocks") and other unpleasant workarounds. In our defense, at the time btrees and extents seemed fundamentally incompatible with copy-on-write, and the virtual memory metaphor served us well in many other respects.
In contrast, the items-in-a-btree approach is extremely space efficient and flexible. Defragmentation is an ongoing process - repacking the items efficiently is part of the normal code path preparing extents to be written to disk. Doing checksums, reference counting, and other assorted metadata busy-work on a per-extent basis reduces overhead and makes new features (such as fast reverse mapping from an extent to everything that references it) possible.
Now for some personal predictions (based purely on public information - I don't have any insider knowledge). Btrfs will be the default file system on Linux within two years. Btrfs as a project won't (and can't, at this point) be canceled by Oracle. If all the intellectual property issues are worked out (a big if), ZFS will be ported to Linux, but it will have less than a few percent of the installed base of btrfs. Check back in two years and see if I got any of these predictions right!
Btrfs: What's next?
Btrfs is heading for 1.0, a little more than two years since the first announcement. This is much faster than many file system veterans - including myself - expected, especially given that during most of that time, btrfs had only one full-time developer. Btrfs is not ready for production use - that is, storing and serving data you would be upset about losing - but it is ready for widespread testing - e.g., on your backed-up-nightly laptop, or your experimental netbook that you reinstall every few weeks anyway.
Be aware that there was a recent flag day in the btrfs on-disk format: a commit shortly after the 2.6.30 release changed the on-disk format in a way that isn't compatible with older kernels. If you create your btrfs file system using a 2.6.30 or earlier kernel and tools, and then boot into a newer kernel with the new format, you won't be able to use your file system with a 2.6.30 or older kernel any longer. Linus Torvalds found this out the hard way. But if this does happen to you, don't panic - you can find rescue images and other helpful information on the btrfs wiki.
A kernel.org update
Your editor made a brief visit to the 2009 Linux Symposium, held in Montreal for the first time. One of the talks which could be seen during that short time was an update on kernel.org, presented by John Hawley. It was an interesting look into a bit of infrastructure that many of us rely upon, but which we tend to take for granted.
The "state of the server" address started off with the traditional display of bizarre email sent to kernel.org. Suffice to say, the kernel.org administrators get a lot of strange mail. They also have no qualms about displaying that mail (lightly sanitized) for amusement value.
The board of kernel.org is currently made up of five people: H. Peter Anvin, Jeff Uphoff, Chris Wright, Kees Cook, and Linus Torvalds. Linus, it is said, never attends the board meetings; John assumes that he's busy doing something related to the kernel. Peter continues to serve as the president of the organization, doing the work required to keep it as a nonprofit corporation in good standing. Much of the rest of the work is done by John, who was hired in September, 2008, to be the first full-time system administrator for kernel.org. He is employed by the Linux Foundation to do this job.
Over the last year, kernel.org has handled the mirroring of a number of major distribution releases. They have added two new distributions (Gentoo and Moblin) to the mirror network, and Slackware is being added into the mix now. A number of new wiki instances have been added to wiki.kernel.org. John says that wikis are easy to create; he encourages relevant projects to ask for a kernel.org wiki if it would be helpful.
Internally, kernel.org runs on ten "disgustingly nice" machines donated by HP. John was strong in his praise of HP and ISC (which provides the bulk of the considerable bandwidth used by kernel.org); without them, kernel.org would not function the way it does. Beyond ISC, there are a couple of machines hosted at the OSU open source lab and one at Umeå University in Sweden. A lengthy process has finally gotten all of these machines upgraded to Fedora 9 - just in time, John noted wryly, for Fedora to end support for that distribution. So another round of upgrades is in the works for the near future.
Another significant change over the last year is the adoption of GeoDNS for the kernel.org domains. GeoDNS enables the DNS server to take the location of the requesting system into account and return the addresses of an appropriate set of servers. So kernel.org users now use local kernel.org mirrors, even if they do not explicitly ask for one using a country-specific host name.
One upcoming initiative is archive.kernel.org. This site is intended to be a permanent archive for older distribution updates. Should somebody feel the urge to, say, install Red Hat Linux 5 on a system, it can be satisfied by a visit to archive.kernel.org. Filling in the archive is a work in progress; a number of older distribution releases seem to have fallen off the net. But, experience shows, many of the older releases will be located over time.
Another work in progress is "boot.kernel.org". This site is intended to be a repository of network-bootable distributions. The distributor can create a tiny boot image which does little more than set up the network and download the next stage from boot.kernel.org. The idea here is that it will become easy to boot rescue or live CD distributions from the net. Distributions which support network installation can also be hosted on boot.kernel.org. This feature should be ready for a public launch sometime in the near future.
John closed with more amusing email. But, silliness aside, it seems clear that kernel.org is on a solid foundation. It is supporting our community in areas going well beyond the kernel itself, and it looks well set to continue doing so for some time.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Toorox
I was very excited when I read the release announcement for a new Linux distribution based on Gentoo Linux. I used Gentoo for several years and admire Sabayon Linux, so I was quite eager to test another derivative. After all the testing, I was pleased to find that Toorox 06.2009 delivers a true Gentoo experience but is much easier to install and use.
Introduction
Toorox Linux ships as a live DVD featuring KDE 4.2.4, Linux 2.6.28, and lots of useful applications. It uses KNOPPIX hardware detection, but what makes Toorox stand out is its extra tools and utilities.
One interesting feature set is the system installation tools. From the live DVD environment one can choose to install to a hard drive or USB stick. These tools aren't especially elaborate, but they get the job done. The best feature, for me, was the ability to install to a partition numbered higher than 16. One can select one of the listed partitions or input the desired number. Beyond the install partition, the simple installer asks only for a user account, root password, and whether to install GRUB. What makes these installers outstanding is the fact that they work, which is a goal that the Gentoo developers themselves continue to struggle with.
The Systemconfig browser contains links to KDE configuration tools such as the System Settings, desktop configuration module, Display options, Effects, and System Information. However, there's much more. One of the handiest additions is the "Driver, Multimedia" folder. This leads users to scripts that install proprietary graphics drivers, files and codecs for multimedia enjoyment, as well as the Adobe Flash browser plugin. So, while Toorox may not ship with these proprietary and closed-source files, it does provide an easy way to obtain them.
Portage

Portage is the Gentoo package manager that downloads, compiles, and installs source packages onto users' systems. Portage was the main feature of Gentoo that propelled it to popularity in the past, but massive emerge failures probably contributed to Gentoo's decline after the departure of Daniel Robbins. However, the development team has persevered and the stable software tree functions well today.
The best thing found in the Systemconfig browser is probably Porthole. Porthole is a graphical front-end for Portage. I've tested several graphical Portage front-ends over the years, but most are disappointing. Porthole seems to be the most consistently stable and reliable front-end I've used. It allows for the configuration of advanced compiler options, similar to what one might set in the /etc/make.conf file. It functions very much like Synaptic does for APT. It can list packages by category, allows searching by package name, and all that's required to install is right-clicking on the package name and selecting "emerge" or "pretend emerge". The latter doesn't install the package, but will provide alerts for various problems that may be encountered when installing it for real.
Unlike Sabayon, Toorox uses the Gentoo package tree and is fully compatible with Gentoo. This also means any updates will come straight from Gentoo. Portage in Toorox is set up to install from the unstable branch of software, probably to accommodate KDE 4 and its dependencies. The unstable branch includes the newest versions of software, sometimes beta, that may not compile or function properly.
Used as configured, Porthole had no problems installing individual packages without dependencies or desired packages with just a few dependencies, but new users may wish to avoid "update world" (synonymous with APT dist-upgrade).
Hardware Observations
Toorox ships with the main 2.6.28 Gentoo kernel. Gentoo has several kernel choices available such as a hardened kernel with extra security patches and higher security settings, a multimedia-oriented kernel, Xen, vanilla, TuxOnIce, and OpenVZ. All are highly patched except the vanilla sources and some are very specialized, such as the Xbox kernels.
Hardware support is thus up to par and sometimes surpasses that found with the vanilla kernel. Toorox uses the KNOPPIX hardware detection scripts and does an adequate job. Most hardware is autodetected and autoconfigured. Although battery monitoring was automatic, I needed to manually configure power-saving features.
On machines with at least 1 GB of RAM, Toorox performance was acceptable. However, on machines with, for example, 512 MB, Toorox was a bit slow. Internet connections with supported hardware and using the DHCP protocol were configured automatically. Screen resolution is chosen by the user at the start of the live DVD and passed on to the hard drive install, but a machine with dual monitors required manual configuration. In fact, I had to disconnect my secondary monitor in order to fully boot the live DVD or fresh install.
Interface and Software

Toorox has done a nice job of trying to make the desktop accommodating to new users. On the desktop is a widget that lists all the partitions that can be mounted and browsed in Dolphin with a single click. Another desktop widget features some helpful, if redundant, applications such as Starter, Systemconfig, File Manager (Dolphin), and the Terminal. One other contains connection information. The panels again house widgets and launchers for much the same: Starter, Systemconfig, Terminal, and Trash, with a few additions such as Iceweasel and Device Notifier.
Starter is a link to an application browser containing many of the applications Toorox developers think may be of particular interest to users. Toorox even comes with two menus. One is the newer KickOff menu popularized by openSUSE and now default in KDE, and the other is a traditional category/list menu. No one should have any difficulties finding applications to use.
The Toorox image is 1.7 GB and contains lots of applications. The standards are there, such as OpenOffice.org, the GIMP, KDE essentials, and lots of games. But there are lots of extras as well, such as Wine, VLC, XChat, XSane, Epiphany, Debian's Iceweasel, and Qt Designer. There are thousands more in the Portage package tree.
Conclusion
All told, Toorox does a nice job of packaging up Gentoo, KDE 4, and a working installer to give an interesting and off-the-beaten-path Linux experience. There will always be niggles with Gentoo and so Toorox has its share - especially since it is using the unstable branch of software. But overall the system is stable, responsive, and just plain fun to use.
The default language for Toorox is German, but English is fully supported and easily chosen when booting the live DVD. Toorox has forums for user support, but they, too, are in German. Fortunately, for any issues I needed to resolve, the regular Gentoo forum was the answer. The best part about the Gentoo forum is that most questions have already been answered and are a mere search away.
Most of my recurring issues with Toorox are actually because of KDE 4. This is the best implementation of KDE 4 that I've tried, but, due to the way I work, I still had issues. Toorox developers have tried to make the KDE 4 desktop as usable as possible while making sure folks can find the modules needed to customize to their liking. For those without 50,000 email messages or 1,500 news feeds, and who don't accumulate too many open Konqueror windows, Toorox's KDE 4 could probably fill the bill. Under heavy loads KDE 4 performed rather poorly, even becoming completely unresponsive at times.
Toorox is ideal for someone who'd like to be introduced to Gentoo without the complications and time investment. It'd be great for those who like to use something different from the many cookie-cutter derivatives offered today. Unlike Sabayon, whose developers compile their own packages and use their own repositories, Toorox is very close to being a true Gentoo desktop, with the advantages of an easy installer, simple software management, some great tools, good looks, and a ready-to-use KDE 4 desktop.
New Releases
The POSSE Education Fedora Remix
The Fedora Education SIG has announced the release of the POSSE Education Fedora Remix, a version of the Fedora distribution meant to help educators contribute to free educational software. "It contains development environments, tools, documentation, and getting-started resources for contributing to a number of projects including Fedora, Mozilla, Sugar Labs and KDE Education and can be used by individuals or by teachers, students, and classrooms that want to contribute to open source projects as part of their course effort."
Ubuntu 8.04.3 LTS released
The Ubuntu team has announced the release of Ubuntu 8.04.3 LTS, the third maintenance update to Ubuntu's 8.04 LTS release. This release includes updated server, desktop, and alternate installation CDs for the i386 and amd64 architectures.
Distribution News
Fedora
Fedora Board Recap 2009-07-16
Click below for a brief recap of the July 16, 2009 meeting of the Fedora Advisory Board. Topics include Fedora Spin Prioritization, Russian Fedora, Use of fedoraproject.org email Addresses, CSI - Security Policy, and Extended Life Cycle.
Gentoo Linux
Gentoo Celebrates 10 Years
The Gentoo Project is celebrating its tenth anniversary on IRC on Sunday, October 4, 2009. "Gentoo is turning 10 years old. For the last ten years, Gentoo has been committed to bringing the cutting edge source based distro to users that need more flexibility than binary packages can give them. With a vibrant community and over 300 developers, much has been accomplished since the beginning, Gentoo remains true to its origin." There's also a screenshot contest open to developers and users.
Ubuntu family
Watch Your Back(ground) (Linux Journal)
Linux Journal covers a call for submissions from the Ubuntu Artwork Team Lead, Kenneth Wimer. "Rules regarding submitted artwork are fairly simple: submissions should avoid using the Ubuntu logo, at least in a prominent fashion, as "It appears in enough places already." They should avoid text, which does not scale well and presents a significant translation hurdle, as well as avoid version numbers, as the backgrounds should be usable and relevant for previous and future releases. Consideration of the overall theme is important, and restraint is encouraged with regard to tone and contrast in color, so as not to overpower the rest of the theme's elements. Small patterns require special care, as they present scaling challenges. Submissions must not include artwork that is not freely licensed (that is, that allows editing and redistribution) unless explicit permission is granted for such use."
Other distributions
The OSWatershed.org project
Scott Shawcroft has announced the OSWatershed.org project. "OpenSourceWatershed is a project aimed at understanding the relationship between distributions (downstream) and the individual software components (upstream). It is the basis for a larger study of distributions and their evolution." He concludes that Arch Linux tends to be the least "obsolete," in that only 45% of its packages are behind the leading edge. Debian and openSUSE, instead, are said to be 95% obsolete. The slides from his OSCON talk [PDF] are also available.
Distribution Newsletters
CentOS Pulse #0903
The CentOS Pulse for July 16, 2009 is out. "This issue of Pulse contains some general and security [related] news, an interview with CentOS developer Ralph Angenendt and information about 'The Definitive Guide to CentOS' which is a book available in both print and e-book format."
DistroWatch Weekly, Issue 312
The DistroWatch Weekly for July 20, 2009 is out. "Leading the news this past week is Mandriva, who has released several new projects including updated 2009 Spring USB and MLO Live CD editions, as well as Enterprise Server 5. We also take a look at the issues and difficulties involved in making CentOS 5.3 run on a netbook. Elsewhere this past week, Moblin benefits with contributions from HyperSpace, while version 4 of ULTILEX is released - a new distro which ships several other distros on a single live CD or USB stick. We also include interviews with Richard Stallman and Mark Shuttleworth, and finally a case study which looks at the relationship between distributions and upstream projects. Have a great Monday and the rest of the week!"
Fedora Weekly News 185
The Fedora Weekly News for July 19, 2009 is out. "Highlights from this week's issue include an overview of feature details for Fedora 12 (Constantine) in our Announcements beat, followed by news from all over the Fedora Planet, including instructions on how to install Chromium (the open source version of Google's Chrome browser) on Fedora, thoughts on the Association for Competitive Technology's recent accusations against the European Commission "of having a bias in favor of open source", and a review of Hannah Montana Linux, along with much more. This week's Ambassadors beat features an event report from Tripura, India and highlights the worldwide Fedora Ambassador map -- find your closest Ambassadors! The Quality Assurance beat features details on the second upcoming Fit and Finish Test Day, to focus on power management and suspend/resume in Fedora with opportunities to participate in the testing. Also a review of this past week's meetings, Fedora 12 bug blocker review and Fedora 11 bug triage. The Art beat this week features details on the Fedora 12 design schedule and also more detail on wallpaper development that FWN has reported on in recent weeks. This week's issue rounds out with much Fedora virtualization news goodness, including details on transition from the Enterprise Management Tools lists, some very helpful Fedora virtual machine disk setup tips, and details of new versions of libguestfs and virt-v2v. This is but a sampling of this week's content and we hope you enjoy this week's issue!"
OpenSUSE Weekly News/80
This issue of the OpenSUSE Weekly News has the following: Register for the openSUSE Conference!, Lydia Pintscher: The Way to Amarok 2.2, Sebastian Schöbinger: Adding a music profile with the KDE Energy Manager Plasmoid, Michael Andres: libzypp-6.10.4: Tune automatically created solver testcases (zypper dup), Interviews from the LinuxTag @ Radiotux, and more.
Ubuntu Weekly Newsletter #151
The Ubuntu Weekly Newsletter for July 18, 2009 is out. "In this issue we cover: Ubuntu 8.04.3 released, Kubuntu Council, Kubuntu Wiki, Technical Board: Nominations, Karmic Translations are now Open, New Ubuntu Members, Ubuntu Zimbabwe, Empathy is now in Karmic, AppArmor now available in Karmic: Testing Needed, Ubuntu IRC Council News, OpenJDK 6 Certification for Ubuntu 9.04, Ubuntu Podcast Quickie #9, Ubuntu-based distro touted for power management, and much, much more!"
Page editor: Rebecca Sobol
Development
VLC media player 1.0.0 debuts
The VideoLAN project has announced the release of version 1.0.0 of the VLC media player, its main software effort. The VideoLAN project description states:
![[VideoLAN Logo]](https://static.lwn.net/images/ns/VideoLANlogo.png)
The VLC media player description states:
VLC media player is an all-encompassing application and the feature list is quite extensive. The What can vlc do? document gives an overview of VLC's capabilities and the VideoLAN Wiki has a large collection of documentation about the software. Some of the more notable features include cross-platform operation, support for a wide variety of audio and video formats, the ability to play from many input sources, and to send output to many destinations. In addition to local media sources, a number of network-based streaming formats are supported. All of the audio and video CODECs are built-in. VLC can also perform transcoding and live audio and video filtering. In addition, the software supports metadata operations such as adding subtitles and decoding tags. Finally, VLC is also able to perform unicast and multicast streaming. See the streaming feature list for more information on that capability.
VLC 1.0.0 is a milestone release, from the announcement:
![[VLC Screenshot]](https://static.lwn.net/images/ns/VLCscreen1-sm.png)
Your author installed VLC 1.0 on an Ubuntu 9.04 "Jaunty Jackalope" system by following the Ubuntu installation instructions. Installation was fairly straightforward and the software was run using the vlc command.
The test system's rather ancient Athlon 1700 processor was able to use VLC to play an assortment of audio files (.wav, .flac and .mp3) with no problems. VLC was able to play audio CDs from the local CDROM drive by selecting Media->Open Disc then choosing Audio Disc. VLC has the ability to browse various media sources and playlists can be assembled from those sources. A typical assortment of audio visualization features such as an oscilloscope and a spectrum analyzer are available. Audio effects include a graphic equalizer and a programmable audio spatializer effect that can be used to enhance the stereo separation of the audio.
When a .mov file created on a Nikon S10 digital camera was played, both the audio and video playback stopped and restarted at regular intervals. Perhaps the processor speed is insufficient for the task. It should be noted that video files from this camera have had similar problems playing back on other video software such as MPlayer and Cinelerra. While VLC provides the normal assortment of start/stop and rewind buttons, it lacks the ability to step through individual video frames.
The video source was switched to a local USB webcam by clicking the Media->Open Capture Device menus and adding /dev/video1 as the source. Some of the video effects were tried, and everything worked as advertised. There was a substantial time delay (around 2 seconds) in copying the video image to the screen. For comparison, the video application Cheese was run on the same system and it was able to display the webcam image with very little delay.
The playing of streaming network media was also tested. The Media->Services Directory menu was activated and the Shoutcast TV Listings item was selected. View->Playlist was selected and Shoutcast TV listings was chosen. A large collection of media sources showed up in the window. Double-clicking on them connected to the various audio and video sources and the broadcasts played without any problems.
At first glance, VLC appears to be a fairly simple media player but after poking around, the software reveals a huge breadth and depth of capabilities. In most cases, the software performed quite well on limited hardware. The inclusion of a wide selection of CODECs makes VLC easy to install and use. If you need a single application to access local and networked media, VLC is an excellent choice.
System Applications
Audio Projects
JACK 1.9.3 released
Version 1.9.3 of the JACK Audio Connection Kit has been announced. "Future JACK2 will be based on C++ jackdmp code base. Jack 1.9.3 is the "renaming" of jackdmp and the result of a lot of developments started after LAC 2008."
CORBA
omniORBpy 3.4 released
Version 3.4 of omniORBpy has been announced. "I am pleased to announce that omniORB 4.1.4 and omniORBpy 3.4 are now available. omniORB is a robust, high performance CORBA implementation for C++. omniORBpy is a version for Python. They are freely available under the terms of the GNU LGPL (and GPL for stand-alone tools). These are mainly bug fix releases, with a number of minor new features"
Database Software
PostgreSQL Weekly News
The July 19, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
Networking Tools
conntrack-tools 0.9.13 released
Version 0.9.13 of conntrack-tools has been announced. "The netfilter project presents another development release of the conntrack-tools that includes support for all the protocol helpers available in 2.6.30 that were missing so far (SCTP, UDPlite, DCCP and GRE). The daemon updates includes a fix for a memory leak that can be triggered under heavy load and if you set a hashtable in user-space that is smaller than the one in the kernel. Moreover, it adds initial support for DCCP and SCTP state-synchronization."
libnetfilter_conntrack 0.0.100 released
Version 0.0.100 of libnetfilter_conntrack has been announced. "libnetfilter_conntrack is a userspace library providing a programming interface (API) to the in-kernel connection tracking state table. This library requires a linux kernel >= 2.6.18. T[h]is release includes a couple of minor fixes."
PacketFence 1.8.4 released
Version 1.8.4 of PacketFence has been announced. "PacketFence is a fully supported, Free and Open Source network access control (NAC) system that runs on Linux. It can be used to effectively secure networks - from small to very large heterogeneous networks. PacketFence has been deployed in production environments where thousands of users are involved - on wired and wireless networks."
Security
Nmap 5.00 released
The 5.00 release of the Nmap security scanner is out. "Considering all the changes, we consider this the most important Nmap release since 1997, and we recommend that all current users upgrade." Those changes include the new ncat and ndiff tools, improved performance, and a new scripting engine.
Web Site Development
ikaaro 0.60.3 released
Version 0.60.3 of ikaaro has been announced. "This is a Content Management System built on Python & itools, among other features ikaaro provides: - content and document management (index&search, metadata, etc.) - multilingual user interfaces and content - high level modules: wiki, forum, tracker, etc. The new script icms-forget.py reduces the history depth of the database. This has been implemented to address scalability issues found with the current usage of Git."
Rails 2.3.3: Touching, faster JSON, bug fixes
Version 2.3.3 of the Rails web development platform has been announced. "This release fixes a lot of bugs and introduces a handful of new features."
Desktop Applications
Audio Applications
Ardour 2.8.2 released
Version 2.8.2 of the Ardour multi-track audio workstation has been announced. "Ardour 2.8.2 contains a fix for a another critical bug on OS X that causes ardour to crash when deleting plugins with Carbon-based user interfaces. There are also two other small fixes - logarithmic plugin parameters can now be modified sensibly, and when importing files using the "copy files to session" option, an existing BWF timestamp is no longer lost. All OS X users are recommended to upgrade immediately - for those who have paid for any previous 2.8-series version, it is a free upgrade. Linux users may choose to wait for 2.8.3 in a couple of weeks which will contain a dozen or so other bug fixes."
Audacity 1.3.8 beta released
Beta version 1.3.8 of the Audacity audio editor has been announced. "It contains a number of significant improvements, plus some bug fixes." See the New in Audacity 1.3.8 document for more information.
Sonic Visualiser v1.6 now available
Version 1.6 of Sonic Visualiser has been announced, it includes several bug fixes. "Sonic Visualiser contains advanced waveform and spectrogram viewers, as well as editors for many sorts of audio annotations. Besides visualisation, it can make and play selections based on the locations of automatically detected features, seamlessly loop playback of single or multiple noncontiguous regions, synthesise annotations for playback, slow down playback while retaining display synchronisation, and show the ongoing alignment in time between multiple recordings of a piece with different timings."
Desktop Environments
New module decisions for GNOME 2.28
The latest GNOME 2.28 module decisions have been posted. "In: gnome-bluetooth (desktop), gnome-disk-utility (desktop), libgdata (external dependency), libseed (bindings), DeviceKit-disks (external dependency), WebKit/GTK+ (external dependency), libchamplain (external dependency), Out: krb5-auth-dialog, icontool".
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Clutter 0.9.8/1.0.0rc3 (new features and bug fixes)
- GLib 2.21.4 (bug fixes and translation work)
- Gnome Subtitles 0.9.1 (new features, bug fixes and translation work)
- gob2 2.0.16 (new features and bug fixes)
- GTK+ 2.16.5 (bug fixes and translation work)
- GTK+ 2.17.5 (new features, bug fixes and translation work)
- gtkglarea 2.0.1 (new features, bug fixes and code cleanup)
- Java ATK Wrapper 0.27.4 (bug fixes and code cleanup)
- libgdata 0.4.0 (new features, bug fixes and translation work)
- Libgee 0.1.6 (build and bug fixes)
- Tegaki 0.2 (new features and bug fixes)
KDE 4.3 RC3 released
Version 4.3 RC3 of KDE has been announced. "Even in the hot phase up to KDE 4.3.0, there have been quite a bunch of fixes to KDE's 4.3 branch. The KDE Release Team has decided to err on the safe side and do another release candidate before KDE 4.3.0 comes out."
KDE Software Announcements
The following new KDE software has been announced this week:
- extract_rpm 0.1.4 (new feature)
- FlashQard 0.13.0 (new features, bug fixes and translation work)
- glnemo2 preview.2009-Jul-16 (new features, bug fixes and performance improvement)
- Kall 0.8 (unspecified)
- kmj 0.3 (new features and bug fixes)
- KMuddy 1.0 (new features, bug fixes and KDE4 port)
- KToshiba 0.4.0 Alpha (KDE4 port)
- LavaPE 0.9.1 (code rewrite)
- QEVEN 0.3.0 (unspecified)
- QTGZManager Beta5 (new features, bug fixes and code cleanup)
- RSIBreak 0.10-beta2 (bug fixes and code cleanup)
- RSIBreak 0.10 (new feature)
- subdms 0.3.4 (unspecified)
- VariCAD 2009 1.06 (new features)
- Waheela 0.2 (unspecified)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- inputproto 1.9.99.15 (code cleanup)
- libxcb 1.4 (new features and bug fixes)
- libXext 1.0.99.2 (bug fixes, code cleanup and documentation work)
- libXext 1.0.99.3 (bug fix)
- libXext 1.0.99.4 (code cleanup)
- libXi 1.2.99.3 (new features and bug fixes)
- libXtst 1.0.99.1 (new features, bug fixes and documentation work)
- pixman 0.15.18 beta (new features and bug fixes)
- xextproto 7.0.99.1 (bug fixes and code cleanup)
- xextproto 7.0.99.2 (bug fix)
- xextproto 7.0.99.3 (bug fix)
- xf86-video-intel 2.8.0 (new features, bug fixes and documentation work)
Encryption Software
Monkeysphere 0.25 released
Version 0.25 of Monkeysphere has been announced; it includes new features, bug fixes and code cleanup. "The Monkeysphere project's goal is to extend OpenPGP's web of trust to new areas of the Internet to help us securely identify each other while we work online. Specifically, monkeysphere currently offers a framework to leverage the OpenPGP web of trust for OpenSSH authentication. In other words, it allows you to use secure shell as you normally do, but to identify yourself and the servers you administer or connect to with your OpenPGP keys."
Interoperability
Wine 1.1.26 announced
Version 1.1.26 of Wine has been announced. Changes include: "- Still more translation updates. - Faster bitmap stretching using XRender. - Proxy support in WinHTTP. - Many more JScript functions. - Various bug fixes."
Mail Clients
Sylpheed 2.7.0 released
Stable version 2.7.0 of Sylpheed, an email client, has been announced. "2.7.0 includes experimental implementation of plug-in system, update check feature, reliability improvement, improvements of Windows installer, and bugfixes."
Music Applications
MMA 1.5 released
Version 1.5 of Musical MIDI Accompaniment (MMA) has been announced. "Included in this release: - MIDINOTE command set for SMF includes, - -B/b command line options for partial compilations and playback, - Enhanced groove HTML documentation, - Debian package added to download section, - Path and filename enhancements to make running on Windows platforms easier, - lots of bug fixes and library additions. Read the complete change log in the distro: CHANGES-1.4."
QM Vamp Plugins v1.6 now available
Version 1.6 of QM Vamp Plugins has been announced. "Plugins included are note onset detector, beat and barline tracker, tempo estimator, key estimator, tonal change detector, structural segmenter, timbral and rhythmic similarity, wavelet scaleogram, adaptive spectrogram, note transcription, chromagram, constant-Q spectrogram, and MFCC calculation. This is a major feature release which adds four new plugins (adaptive spectrogram, polyphonic transcription, wavelet scalogram, and bar-and-beat tracker) and a new method for the beat tracker."
Office Applications
Leo 4.6 final released
Version 4.6 final of Leo has been announced; it includes new features and bug fixes. "Leo is a text editor, data organizer, project manager and much more."
SyncEvolution 0.9 beta 3 released
Version 0.9 beta 3 of SyncEvolution, a PIM data synchronization tool, has been announced. "The end is near - SyncEvolution 0.9 is almost done. For the first time in the 0.9 series, precompiled binaries are made available again together with the new 0.9 beta 3 source snapshot. Users are encouraged to upgrade now and give feedback before the final 0.9 release."
Web Browsers
Firefox 3.0.12 released
Firefox 3.0.12 is out. This is another security update, fixing more than the usual number of scary bugs in the browser.
Languages and Tools
C
GCC 4.4.1 release candidate available
A release candidate of GCC 4.4.1 has been announced. "I have so far bootstrapped and tested the release candidate on x86_64-linux and i686-linux. Please test it and report any issues to bugzilla. The branch is now frozen and all checkins until after the final release of GCC 4.4.1 require explicit RM approval."
GCC 4.4.1 Status Report
The GCC 4.4.1 status report July 15, 2009 has been published. "GCC 4.4.1 Release Candidate 1 has been released, the branch is now frozen until GCC 4.4.1 is released, all check-ins require explicit approval from one of the RMs. Please report any 4.4.1 blockers as soon as possible. If all goes well, 4.4.1 will be released next week."
GCC 4.4.1 Status Report
The July 22, 2009 edition of the GCC 4.4.1 Status Report has been published. "GCC 4.4.1 has been released, I'll announce it once uploaded to ftp.gnu.org and mirrors get a chance to mirror it. The 4.4 branch is again open under the usual release branch rules."
GCC 4.5 Status Report
The GCC 4.5 status report for July 15, 2009 has been published. "The trunk is in Stage 1. We expect that Stage 1 will last through at least July and August. There are still large pending merges we are aware of, specifically the VTA, LTO and Graphite branches will be considered when deciding when to go to Stage 3."
Perl
Parrot 1.4.0 released
Version 1.4.0 of Parrot has been announced; it adds some new capabilities and bug fixes. "On behalf of the Parrot team, I'm proud to announce Parrot 1.4.0 "Mundo Cani." Parrot is a virtual machine aimed at running all dynamic languages."
Major update to perldoc.perl.org (use Perl)
use Perl covers the latest changes to perldoc.perl.org. "The main change is a complete new visual design, bringing a fresh, modern look to the site. Additionally there are a number of new features to aid navigation and usability - a floating page index window, recently read pages list, improved Pod rendering, and many more."
Python
gnupg 0.2.0 released
Version 0.2.0 of gnupg has been announced. "A new version of the Python module which wraps GnuPG has been released. The module was refactored slightly to support Python 3.0."
itools 0.60.3 released
Version 0.60.3 of itools has been announced; it includes bug fixes and code rework. "itools is a Python library, it groups a number of packages into a single meta-package for easier development and deployment".
psyco V2 announced
Version 2 of psyco has been announced. "Psyco V2 is a continuation of the well-known psyco project, which was called finished and was dis-continued by its author Armin Rigo in 2005, in favor of the PyPy project. This is a new project, using Psyco's code base with permission of Armin."
SfePy 2009.3 released
Version 2009.3 of SfePy has been announced; it includes a number of new capabilities. "SfePy (simple finite elements in Python) is a software, distributed under the BSD license, for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages."
Shed Skin 0.2 released
Version 0.2 of Shed Skin has been announced. "I have just released version 0.2 of Shed Skin, an experimental (restricted) Python-to-C++ compiler. It comes with 7 new example programs (for a total of 40 example programs, at over 12,000 lines) and several important improvements/bug fixes."
Python-URL! - weekly Python news and links
The July 22, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
TkDocs: new tutorial and resources for TkInter/ttk
A new release of TkDocs is available. "TkDocs is a language-neutral resource for developers who are interested in using Tk as their GUI. The highlight is an extensive tutorial that illustrates how to use the newest generation of Tk features and best practices to create modern and attractive user interfaces. I'm pleased to announce that the tutorial and other parts of the site has been updated with the latest Python-oriented Tk material, corresponding to tkinter and ttk from Python 3.1. You'll now find all the examples and code snippets available in Python (and also Tcl, Ruby and Perl for those so inclined)."
Tcl-URL! - weekly Tcl news and links
The July 15, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Version Control
bzr 1.17 released
Version 1.17 of the bzr adaptive version control system has been announced. "Bazaar continues to blaze a straight and shining path to the 2.0 release and the elevation of the ``2a`` beta format to the full glory of "supported and stable". Highlights in this release include greatly reduced memory consumption during commits, faster ``ls``, faster ``annotate``, faster network operations if you're specifying a revision number and the final destruction of those annoying progress bar artifacts."
Miscellaneous
Launchpad source released
Canonical has announced the long-awaited release of the Launchpad source. "Projects that are hosted on Launchpad are immediately connected to every other project hosted there in a way that makes it easy to collaborate on code, translations, bug fixes and feature design across project boundaries. Rather than hosting individual projects, we host a massive and connected community that collaborates together across many projects. Making Launchpad itself open source gives users the ability to improve the service they use every day." More information can be found on the development wiki.
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Code Red: How software companies could screw up Obama's health care reform (Washington Monthly)
Worth a read: this Washington Monthly article comparing the record of proprietary and open source medical information systems. "But another big part of the problem is that proprietary systems have earned a bad reputation in the medical community for the simple reason that they often don't work very well. The programs are written by software developers who are far removed from the realities of practicing medicine. The result is systems which tend to create, rather than prevent, medical errors once they're in the hands of harried health care professionals. The Joint Commission, which accredits hospitals for safety, recently issued an unprecedented warning that computer technology is now implicated in an incredible 25 percent of all reported medication errors. Perversely, license agreements usually bar users of proprietary health IT systems from reporting dangerous bugs to other health care facilities. In open-source systems, users learn from each other's mistakes; in proprietary ones, they're not even allowed to mention them."
Trade Shows and Conferences
The Business Of Free (KDEDot)
Nikolaj Hald Nielsen covers a talk about Amarok and business at the Gran Canaria Desktop Summit. "One of the things that was touched upon was the recent release of the Palm Pre smartphone which relies on Apple's iTunes software for synchronising music with a computer. An interesting question asked was what would happen if Apple decided to block the Pre from using iTunes. Now, just over a week later, this is exactly what happened. Apple has indeed blocked the Pre from using iTunes with its latest update."
Companies
OpenMoko Layoffs Lead to New Open Hardware Venture (Linux.com)
Linux.com looks at Qi Hardware, a new venture run by former OpenMoko VP Stephen Mosher. "Qi Hardware's first product will be the NanoNote, a Linux-run mini computer the size of a cellphone with a screen, processor, keyboard, USB port and headphones but no radio frequency, Mosher said. To be launched this fall, the NanoNote's potential uses could include a nano-sized laptop, video or music player, photos or specialized portable personal or business uses."
Legal
Linux Vendor Settles With Microsoft (InformationWeek)
InformationWeek reports that the Melco Group has reached a settlement with Microsoft involving indemnification against an unspecified patent. "A manufacturer of Linux-based networking devices has agreed to pay an undisclosed sum to Microsoft in order to settle a patent claim, Microsoft disclosed Wednesday. Under the agreement, Melco Group will pay the sum to Microsoft in exchange for indemnity coverage for its Buffalo brand Network Attached Storage devices and routers. The patent indemnification covers Melco and its customers."
Interviews
Akademy 2009 Technical Papers Published: Research And Innovation In The KDE Community (KDEDot)
KDE.News has an interview with Celeste Lyn Paul and Laura Dragan. "We conducted a short interview with Celeste, member of the KDE e.V. board, usability specialist within KDE and Senior Interaction Architect at User-Centered Design, Inc. We also interviewed Laura Dragan, researcher at the Digital Enterprise Research Institute and the National University of Ireland, Galway and writer of a technical paper for Akademy 2009. They explained to us what the Technical Papers [presented at Akademy] are about."
Reviews
Are You Afraid? You Will Be... (Blog of helios)
The Blog of Helios takes a look at the Linux port of the Frictional Games trilogy, Penumbra. "Understand, these are not games where you have an arsenal of weapons to blow bloody chunks off of Sauerbraten monsters. This is a world where you exist or perish by your own natural wit, awareness and reflexes. Think quickly and correctly or become part of the shadowworld that awaits its next victim. You physically build your own survival. You actually hand-make the barricades, the weapons, the ladders and escape routes that you will need to survive...and you do it with movements and manipulations just like in the physical world."
Miscellaneous
Open-source firmware vuln exposes wireless routers (Register)
The Register reports on a DD-WRT vulnerability that would appear to justify an update. "The bug resides in DD-WRT's hyper text transfer protocol daemon, which runs as root. Because the httpd doesn't sanitize user-supplied input, it's vulnerable to remote command injection. While the httpd doesn't listen on the outbound interface, attackers can easily access it using CSRF (cross-site request forgery) techniques."
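The underlying mistake is a generic one: handing attacker-controlled strings to a shell. The sketch below is hypothetical Python (the DD-WRT httpd is written in C, and this is not its code); it merely contrasts the vulnerable pattern with a safer one.

```python
# Hypothetical illustration of the command-injection class of bug described
# above; this is not DD-WRT code, just the generic vulnerable/safe patterns.
import re
import subprocess

def ping_vulnerable(host):
    # Vulnerable: the user-supplied string is interpolated into a shell
    # command, so input like "8.8.8.8; reboot" runs an extra command.
    subprocess.call("ping -c 1 %s" % host, shell=True)

def ping_safer(host):
    # Safer: validate the input and pass it as a discrete argument,
    # never through a shell.
    if not re.match(r'^[A-Za-z0-9.\-]+$', host):
        raise ValueError("invalid host")
    subprocess.call(["ping", "-c", "1", host])
```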
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
The FSF warns (again) against Mono
The Free Software Foundation has put out a release stating that Microsoft's "Community Promise" is not sufficient, and that free software developers should still avoid mono. "The Community Promise only extends to claims in Microsoft patents that are *necessary* to implement the covered specifications. Judging just by the size of its patent portfolio, it's likely that Microsoft holds patents which a complete standard implementation probably infringes even if it's not strictly necessary--maybe the patent covers a straightforward speed optimization, or some common way of performing some task. The Community Promise doesn't say anything about these patents, and so Microsoft can still use them to threaten standard implementations."
KDE repository reaches 1,000,000 commits (KDEDot)
KDE.News covers a new repository commit milestone. "KDE announced today that the one millionth commit has been made to its Subversion-based revision control system. "This is a wonderful milestone for KDE," said Cornelius Schumacher, President of the KDE e.V. Board of Directors. "It is the result of years of hard work by a large, diverse, and talented team that has come together from all over the globe to develop one of the largest and most comprehensive software products in the world.""
Open Source for America launches
A group called Open Source for America has announced its existence. "The mission of Open Source for America is to serve as a centralized advocate and to encourage broader U.S. Federal Government support of and participation in free and open source software. Specifically, Open Source for America will: help effect change in policies and practices to allow the Federal Government to better utilize these technologies; help coordinate these communities to collaborate with the federal government on technology requirements; and raise awareness and create understanding among federal government leaders about the values and implications of open source software." In other words, we finally have a lobbying organization in the US. There's a fairly high-profile board of advisors (Ghosh, Moglen, O'Reilly, Peters, Phipps, Shuttleworth, Tiemann, Zemlin, ...), some case studies, and, inevitably, a Twitter feed.
Commercial announcements
Palm releases Mojo SDK for developing WebOS apps
Palm has announced the release of the Mojo Software Development Kit. "After a successful early access program, Palm's Mojo Software Development Kit is available to all interested app developers. The SDK can be downloaded from a new developer portal -- Palm webOSdev -- at developer.palm.com. Any interested developer with a valid email address can access the SDK, its associated documentation, and new Mojo developer forums."
New Books
Learning PHP, MySQL, and JavaScript--New from O'Reilly
O'Reilly has published the book Learning PHP, MySQL, and JavaScript by Robin Nixon.
Python Essential Reference, 4th Edition - Now Available
David Beazley has announced the publication of his book Python Essential Reference, 4th Edition.
Testing, Debugging, and Optimizing - Python in a Nutshell (O'ReillyNet)
O'Reilly has published "Testing, Debugging, and Optimizing," an excerpt from the book Python in a Nutshell by Alex Martelli.
Resources
EFF: A practical guide to Internet technology for political activists in repressive regimes
The Electronic Frontier Foundation has announced the publication of a new guide: Surveillance Self-Defense International. "Recent political protests in Iran, China, and elsewhere have demonstrated the enormous power of the Internet for organizing protests and reporting events to the world. But governments have also used the Internet to track, harass, and undermine. SSDI urges activists to consider the risks in using various technologies and outlines strategies that can allow protestors to continue to use the Internet safely."
Open Source Database Magazine, issue 1
The first issue of Open Source Database Magazine is available as a 26-page PDF file. It includes articles on XtraBackup, PostgreSQL 8.4, and more. "Welcome to the inaugural issue of Open Source Database Magazine. It is my goal that this magazine provides a place for people to learn about open source databases of any stripe - be they Postgres, SQLite, MySQL, Drizzle, CouchDB, Hadoop or something else."
Calls for Presentations
linux.conf.au 2010 CFP closing
We just got a reminder that the call for papers for linux.conf.au 2010 (January 18 to 23, Wellington, New Zealand) will close on Friday, July 24. It's time for all the procrastinators out there to start pulling together their thoughts and put in a proposal. "The LCA2010 Papers Committee is looking for a broad range of papers spanning everything from programming and software to desktop and userspace to community, government and education."
O'Reilly Tools of Change for Publishing 2010 Conference Opens Call for Participation
A call for participation has gone out for the O'Reilly Tools of Change for Publishing Conference; the submission deadline is September 1. "The O'Reilly Tools of Change for Publishing Conference (TOC) will explore the critical trends emerging around the business of digital publishing February 22-24, 2010, at the Marriot Marquis in New York City. From authoring, editing, and layout to distribution and consumption, new technologies are changing all aspects of publishing. As digitalization and globalization continue to accelerate the rate of change, publishers face the urgent necessity of building a solid business on the shifting foundation of paid vs. free content, format and device innovations, conflicting standards and royalties. TOC offers publishers the blueprints for success."
Upcoming Events
DjangoCon '09 registration
Registration is open for DjangoCon. "DjangoCon '09 will be in Portland, Oregon at the DoubleTree Green Hotel between 8th and 12th September. The first 3 days are conference days and the last 2 days are sprint days. Keynotes will be: Ian Bicking, Ted Leung and Avi Bryant."
FRHACK list of talks and speakers released
The list of talks and speakers has been released for FRHACK 01. "FRHACK 01 September 7-8, 2009, at the Great Kursaal Hall of Besançon, France."
MAKE ART 2009
MAKE ART 2009 has been announced. "make art is an international festival dedicated to the integration of Free/Libre/Open Source Software (FLOSS) in digital art. The fourth edition of make art - What The Fork?! distributed and open practices in FLOSS art - will take place in Poitiers (FR), from the 7th to the 13th of December 2009. make art offers performances, presentations, workshops and an exhibition, focused on the encounter between digital art and free software."
Register today for the openSUSE conference
Registration has opened for the openSUSE conference; the event will take place on September 17-20, 2009 in Nürnberg, Germany. "The openSUSE Conference schedule is up and registration is open! Attending the openSUSE Conference is free, but registration is required. Lunch will be provided, so please be sure to sign up early so we can get an accurate headcount."
SciPy 2009 conference schedule posted
The conference schedule for SciPy 2009 has been published; the event takes place on August 18-23 in Pasadena, CA. "This year's program is very rich. In order to limit the number of interesting talks that we had to turn down, we decided to reduce the length of talks. Although this results in many short talks, we hope that it will foster discussions, and give new ideas. Many subjects are covered, both varying technical subject in the scientific computing spectrum, and covering a lot of different research areas."
Events: July 30, 2009 to September 28, 2009
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 24 - July 30 | DebConf 2009 | Cáceres, Extremadura, Spain |
July 25 - July 30 | Black Hat Briefings and Training | Las Vegas, NV, USA |
July 31 - August 2 | FOSS in Healthcare unconference | Houston, TX, USA |
August 3 - August 5 | YAPC::EU::2009 | Lisbon, Portugal |
August 7 - August 9 | UKUUG Summer 2009 Conference | Birmingham, UK |
August 7 | August Penguin 2009 | Weizmann Institute, Israel |
August 10 - August 14 | USENIX Security Symposium | Montreal, Quebec, Canada |
August 11 - August 13 | Flash Memory Summit | Santa Clara, CA, USA |
August 11 | FOSS Dev Camp - Open Source World | San Francisco, CA, USA |
August 12 - August 13 | OpenSource World Conference and Expo | San Francisco, CA, USA |
August 12 - August 13 | Military Open Source Software | Atlanta, Georgia, USA |
August 13 - August 16 | Hacking At Random 2009 | Vierhouten, The Netherlands |
August 18 - August 23 | 2009 Python in Science Conference | Pasadena, CA, USA |
August 22 - August 23 | Free and Open Source Conference (FrOSCon) | St. Augustin, Germany |
August 22 - August 23 | OpenSQL Camp | St. Augustin, Germany |
August 31 - September 4 | Ubuntu Developer Week | Internet |
September 1 - September 4 | JBoss World Chicago | Chicago, IL, USA |
September 1 - September 4 | Red Hat Summit Chicago | Chicago, IL, USA |
September 1 - September 5 | DrupalCon | Paris, France |
September 4 - September 5 | PyCon 2009 Argentina | Buenos Aires, Argentina |
September 7 - September 11 | XtreemOS summer school | Oxford, UK |
September 7 - September 8 | FRHACK.ORG IT Security Conference | Besançon, France |
September 8 - September 12 | DjangoCon '09 | Portland, OR, USA |
September 10 - September 11 | Fedora Developer Conference 2009 | Brno, Czech Republic |
September 12 | Evil Robot Conference (Free Conference, Free Software) | Raleigh, NC, USA |
September 14 - September 18 | Django Bootcamp at the Big Nerd Ranch | Atlanta, Georgia, USA |
September 15 - September 17 | International Conference on IT Security Incident Management and IT Forensics | Stuttgart, Germany |
September 17 - September 18 | Internet Security Operations and Intelligence 7 | San Diego, CA, USA |
September 17 - September 20 | openSUSE Conference | Nuremberg, Germany |
September 18 - September 19 | BruCON | Brussels, Belgium |
September 18 - September 20 | EuroBSDCon 2009 | Cambridge, UK |
September 19 | Atlanta Linux Fest 2009 | Atlanta, Georgia, USA |
September 19 | Beijing Perl Workshop | Beijing, China |
September 19 | Software Freedom Day | Worldwide |
September 20 | SELinux Developer Summit 2009 @ LinuxCon | Portland, Oregon, USA |
September 21 - September 23 | LinuxCon 2009 | Portland, OR, USA |
September 21 - September 25 | Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, USA |
September 23 - September 25 | Linux Plumbers Conference | Portland, Oregon, USA |
September 23 - September 25 | Recent Advances in Intrusion Detection | Saint-Malo, Brittany, France |
September 23 - September 25 | OpenSolaris Developer Conference 2009 | Hamburg, Germany |
September 23 | Bacula Conference 2009 | Cologne, Germany |
September 24 - September 26 | Joomla! and Virtue Mart Day Germany | Bad Nauheim, Germany |
September 25 - September 27 | International Conference on Open Source | Taipei, Taiwan |
September 25 - September 27 | Ohio LinuxFest | Columbus, Ohio, USA |
September 26 - September 27 | PyCon India 2009 | Bengaluru, India |
September 26 | Open Source Conference 2009 Okinawa | Ginowan City, Okinawa, Japan |
September 26 - September 27 | Mini-DebConf at ICOS | Taipei, Taiwan |
If your event does not appear here, please tell us about it.
Event Reports
O'Reilly Velocity Conference report
O'Reilly has published a report from the recent Velocity Conference. "The second year of the O'Reilly Velocity Web Performance and Operations Conference drew more than 700 web developers and experts, a larger group than attended last year's, to San Jose June 22-24, 2009. They came to Velocity to pose their toughest questions to the people doing the best performance and operations work in the world."
Audio and Video programs
Podcast with Chris DiBona on the (computational) value of sharing
Eric Steuer talks with Chris DiBona in a new podcast. "Eric Steuer is the creative director of Creative Commons, a nonprofit organization that works to make it easier for creators to share their work with the rest of the world. It also provides tools to make it easier for people to find creative work that's been made available to them-and the rest of the world-to use, share, reuse etc., freely and legally. What follows is the first in a series of interviews called "We like to share," in which Eric talked to people who work across a variety of fields who use sharing as an approach to benefit the work that they do. The latest interview is with Chris Dibona, the Open Source Programs Manager for Google."
Page editor: Forrest Cook