Weekly Edition for January 29, 2009

By Jonathan Corbet
January 27, 2009
linux.conf.au 2009 was held in Hobart, on the island of Tasmania. The setting for LCA - typically on a university campus - is always nice, but it is hard to imagine a more beautiful place to meet than Hobart. As an added bonus, the mild temperatures offered a nice complement to both (1) the brutally high temperatures being felt on the Australian mainland, and (2) the rather severe winter conditions awaiting your editor on his return. A number of talks from LCA 2009 have been covered in separate articles; here your editor will summarize a few other things worth mentioning.

Prior to the event, your editor heard a few people express disappointment over the choice of keynote speakers this time around. As it happens, at least some of that disappointment was premature. It is true that things got off to a bit of a slow start on the first day, when Thomas Limoncelli delivered a hand-waving talk about "scarcity and abundance." Thomas is a good and entertaining speaker, but he seemed to think that he was addressing a gathering of system administrators, so his talk missed the mark. Unfortunately, your editor got waylaid and missed Angela Beesley's keynote on the second day.

The speaker for the final day was Sun's Simon Phipps. Your editor entered this talk with low expectations, but was pleasantly surprised. Simon is an engaging speaker, and he would appear to understand our community well. As might be expected, he glossed over some of Sun's more difficult community interactions, choosing instead to focus on more positive things and the interaction between the community and companies in general.

Simon's thesis is that we're heading into a "third wave" of free software. The first wave started, perhaps, before Richard Stallman wrote the GNU Manifesto; Simon notes that IBM's unbundling of the software for its nascent PC offering (in response to antitrust problems) played a huge role in defining the software market of the 1980's. But the Free Software Foundation brought a lot of things into focus and started the ball rolling for real. The second wave came about roughly with the founding of the Apache Software Foundation; that was when the world came to understand that free software developers can produce high-quality code. He gave Ubuntu as an example, and noted that even the Gartner Group has come to see some value in free software.

The third wave is coming as businesses really figure out how to work with free software. From his point of view, the right way is to do everything possible to drive adoption of the software; again, Canonical was held up as an example of how to do it right. One should only sell licenses, he says, to businesses which haven't figured out the true value of free software. Why, he asks, should a company which understands things buy RHEL or SLES licenses? (It's worth noting that a Red Hat representative took issue with that comment, not without reason.)

"Third wave" businesses should work with something like a subscription model, selling support services as needed. Things like defect resolution, preferably done by people who have commit privileges with the project involved. Businesses can make upgrades easier, provide production support tools, or, if really needed, sell indemnity guarantees.

Some concerns were raised, the first of which was licenses. While noting wryly that his company "has done lots of experimentation" with software licenses, Simon identified license proliferation as a big problem. In the future, he thinks, the problems associated with proliferation will tend to drive projects toward a single license - most likely the GPL.

Another problem is, of course, software patents. Simon says we shouldn't worry too much about patent trolls, though; there is not much we can do about them in any case. A much bigger concern is companies (unnamed) which are working as members of the community but which are, simultaneously, filing a stream of "parallel patents" covering the work they do. Should one of these companies turn against the community, it could create all kinds of problems. For this reason, Simon is a big fan of licenses like GPLv3 or the Apache license which include patent covenants. Every company which engages the community under the terms of such a license gives up some of its patent weaponry in the process. The more companies we can bring into this sort of "patent peace," the better off we will be.

Even so, he says, the day may come when the community needs a strong patron to defend it against a determined patent attack.

Simon then asked the audience to consider what it is that makes a company a true friend of free software. Is it just a matter of strapping on a penguin beak, as the Tasmanian devil has done to become the LCA2009 mascot? The real measure of friendship is contributions to the community; Sun, he pointed out, has done a lot in that regard. In closing, Simon's message to "third wave" businesses was to keep freedom in mind. There is a place, he says, for both pragmatism and radical idealism. The biggest enemy of freedom is a happy slave; he held up his Apple notebook as an example.

In response to questions, Simon noted that the license problems with the sunrpc code will hopefully be fixed soon. The problem is that this code is 25 years old and there's nobody around who worked on it at the time, so determining its origins is hard. He also said that "pressure is mounting" to release the ZFS filesystem under a GPL-compatible license. And he suggested that, eventually, Red Hat will have to start selling support services for Fedora, since that is the distribution that people are adopting.

Freedom was also at the top of the agenda during Rob Savoye's talk. He discussed the launch of the Open Media Now! Foundation, which has been formed to address the problem of codec patents head-on. As Rob puts it, we all create content, and we should be able to give copies of our own content to anyone. In addition, the data we create never goes away, but our ability to read that data just might. Plus, he's simply fed up with hearing complaints that gnash does not work with YouTube videos; it works just fine, but the project cannot distribute gnash with the requisite codecs.

To deal with this problem, the Foundation is starting a determined effort to gather prior art which can apply to existing codec patents. With any luck, some of the worst of them can be invalidated. But just as much effort is going into figuring out ways to work around codec patents. Most patents are tightly written; it's often possible to find a way to code an algorithm which falls outside of a given patent's claims. When a proper workaround is found (and determining "proper" is a job for a lawyer), the relevant patent can thereafter be ignored. It is a far easier, more certain, and more cost-effective way of dealing with software patents, so Rob thinks the community should be putting much more effort into finding workarounds. He hopes that people will join up with Open Media Now and help to make that happen.

Matthew Wilcox managed to fill a room with a standing-room-only crowd (and not much standing room, at that) despite being scheduled at the same time as Andrew Tridgell. His topic - solid-state drives - is clearly interesting to a lot of people. Matthew discussed some of the issues with these drives, many of which have been covered here in the past. Those problems are being slowly resolved by the manufacturers, but a different class of problems is now coming to the fore. There are certain kinds of kernel overhead which one doesn't notice when an I/O operation takes milliseconds to complete. When that operation completes in microseconds, though, that kernel overhead can become a bottleneck. So he has been working on finding these problems and fixing them, but it is going to take a while. He made the interesting observation that, at SSD speeds, block I/O starts to look more like network traffic, and the kernel needs to adopt some of the same techniques to be able to keep up with the hardware.

The Penguin Dinner auction was back this year, after having been dropped from the schedule in 2008. The auction is always an interesting event, often involving people deciding to spend a few thousand dollars on a T-shirt after having consumed enough alcohol to make any such decision especially unwise. This year's auction beneficiary was the Save The Tasmanian Devil organization, which came away from the event somewhat richer than it had hoped. After a long series of bids, matching offers, and simple passing-the-hat in the crowd, a large consortium of bidders managed to get a total of nearly AU$40,000 pledged to this cause. There was one condition, though: Bdale Garbee not only had to lose his beard, but it had to be done at the hands of Linus Torvalds.

The "free as in beard" event happened on the last day of the conference. As was noted in the live Twitter feed being projected in the room, it was most surreal to sit in a room of 500 people all quietly watching a man shave. Bdale's wife, who took the picture which was nominally the object being auctioned, has made it clear that he will not be allowed to attend LCA unaccompanied again.


In 2010, linux.conf.au will, for the second time ever, not be held in Australia. The winning bid for next year came from Wellington, New Zealand - a setting which rivals Hobart in beauty. Mark your calendars for January 18 to 23; it should be a good time.


LCA: Catching up with X

By Jonathan Corbet
January 23, 2009
For years, linux.conf.au has been one of the best places to go to catch up with the state of the X Window System; the 2009 event was no exception. There was a big difference this time around, though. X talks have typically been all about the great changes which are coming in the near future. This time, the X developers had a different story: most of those great changes are done and will soon be heading toward a distribution near you.

Keith Packard's talk started with that theme. When he spoke at LCA2008, there were a few missing features in X. Small things like composited three-dimensional graphics, monitor hotplugging, shared graphical objects, kernel-based mode setting, and kernel-based two-dimensional drawing. One of the main things holding all of that work back was the lack of a memory manager which could work with the graphics processor (GPU). It was, Keith said, much like programming everything in early Fortran; doing things with memory was painful.

That problem is history; X now has a kernel-based memory management system. It can be used to allocate persistent objects which are shared between the CPU and the GPU. Since graphical objects are persistent, applications no longer need to make backup copies of everything; these objects will not disappear. Objects have globally-visible names, which, among other things, allows them to be shared between applications. They can even be shared between different APIs, with objects being transformed between various types (image, texture, etc.) as needed. It looks, in fact, an awful lot like a filesystem; there may eventually be a virtual filesystem interface to these objects.
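The name-based sharing model can be illustrated with a toy sketch. This is only an analogy: the real GEM interface is a set of ioctls on the DRM device, not a Python class, and all names below are invented for illustration.

```python
# Toy model of GEM-style global object names (illustrative analogy only;
# the real interface is kernel ioctls, not a userspace class).

class ObjectStore:
    """Allocates persistent buffer objects and hands out global names."""

    def __init__(self):
        self._next_name = 1
        self._objects = {}          # global name -> buffer contents

    def create(self, data):
        """Allocate a persistent object; return its global name."""
        name = self._next_name
        self._next_name += 1
        self._objects[name] = bytearray(data)
        return name

    def open(self, name):
        """Any client that knows the name maps the same object."""
        return self._objects[name]

store = ObjectStore()

# "Application A" creates a texture and publishes its name.
name = store.create(b"texture-data")

# "Application B" opens the same object by name; no copy is made,
# so modifications are visible to every client holding the name.
shared = store.open(name)
shared[:7] = b"TEXTURE"

print(store.open(name).decode())   # prints "TEXTURE-data"
```

The point of the analogy is the last step: because the object is persistent and globally named, both "applications" observe the same contents without either one keeping a backup copy, which is exactly the annoyance GEM removes.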

This memory manager is, of course, the graphics execution manager, or GEM. It is new code; the developers first started talking about the need to start over with a new memory manager in March, 2008. The first implementation was posted in April, and the code was merged for the 2.6.28 kernel, released in December. In the process, the GEM developers dropped a lot of generality; they essentially abandoned the task of supporting BSD systems, for example ("sorry about that," says Keith). They also limit support to some Intel hardware at this point. After seeing attempts at large, general solutions fail, the GEM developers decided to focus on getting one thing working, and to generalize thereafter. There is work in progress to get GEM working with ATI chipsets, but that project will not be done for a little while yet.

GEM is built around the shmfs filesystem code; much of the fundamental object allocation is done there. That part is easy; the biggest hassle turns out to be in the area of cache management. Even on Intel hardware, which is alleged to be fully cache-coherent, there are caching issues which arise when dealing with the GPU. Moving data between caches is very expensive, so caching must be managed with great care. This is a task they had assumed would be hard. "Unfortunately," says Keith, "we were right."

One fundamental design feature of GEM is the use of global names for graphical objects. Unlike previous APIs, GEM does not deal with physical addresses of objects in its API. That allows the kernel to move things around as needed; as a result, every application can work with the assumption that it has access to the full GPU memory aperture. Graphical objects, in turn, are referenced by "batch buffers," which contain sequences of operations for the GPU. The batch buffer is the fundamental scheduling unit used by GEM; by allowing multiple applications to schedule batch buffers for execution, the GEM developers hope to be able to take advantage of the parallelism of the GPU.

GEM replaces the "balkanized" memory management found in earlier APIs. Persistent objects eliminate a number of annoyances, such as the dumping of textures at every task switch. What is also gone is the allocation of the entire memory aperture at startup time; memory is now allocated as needed. And lots of data copying has been taken out. All told, it is a much cleaner and better-performing solution than its predecessors.

Getting this code into the kernel was a classic example of working well with the community. The developers took pains to post their code early, then they listened to the comments which came back. In the process of responding to reviews, they were able to make some internal kernel API changes which made life easier. In general, they found, when you actively engage the kernel community, making changes is easy.

The next step was the new DRI2 X extension, intended to replace the (now legacy) DRI extension. It only has three requests, enabling connection to the hardware and buffer allocation. The DRI shared memory area (and its associated lock) have been removed, eliminating a whole class of problems. Buffer management is all done in the X server; that makes life a lot easier.

Then, there is the kernel mode-setting (KMS) API - the other big missing piece. KMS gets user-space applications out of the business of programming the adapter directly, putting the kernel in control. The KMS code (merged for 2.6.29) also implements the fbdev interface, meaning that graphics and the console now share the same driver. Among other things, that will let the kernel present a traceback when the system panics, even if X is running. Fast user switching is another nice feature which falls out of the KMS merge. KMS also eliminates the need for the X server to run with root privileges, which should help security-conscious Linux users sleep better at night. The X server is a huge body of code which, as a rule, has never been through a serious security audit. It's a lot better if that code can be run in an unprivileged mode.

Finally, KMS holds out the promise of someday supporting non-graphical uses of the GPU. See the GPGPU site for information on the kinds of things people try to do once they see the GPU as a more general-purpose coprocessor.

All is not yet perfect, naturally. Beyond its limited hardware support, the new code also does not yet solve the longstanding "tearing" problem. Tearing happens when an update is not coordinated with the monitor's vertical refresh, causing half-updated screens. It is hard to solve without stalling the GPU to wait for vertical refresh, an operation which kills performance. So the X developers are looking at ways to context-switch the GPU. Then buffer copies can be queued in the kernel and caused to happen after the vertical refresh interrupt. It's a somewhat hard problem, but, says Keith, it will be fixed soon.

There is reason to believe this promise. The X developers have managed to create and merge a great deal of code over the course of the last year. Keith's talk was a sort of a celebration; the multi-year process of bringing X out of years of stagnation and into the 21st century is coming to a close. That is certainly an achievement worth celebrating.

Postscript: Keith's talk concerned the video output aspect of the X Window System, but an output-only system is not particularly interesting. The other side of the equation - input - was addressed by Peter Hutterer in a separate session. Much of the talk was dedicated to describing the current state of affairs on the input side of X. Suffice to say that it is a complex collection of software modules which have been bolted on over the years; see the diagram in the background of the picture to the right.

What is more interesting is where things are going from here. A lot of work is being done in this area, though, according to Peter, only a couple of developers are doing it. Much of the classic configuration-file magic has been superseded by HAL-based autoconfiguration code. The complex sequence of events which follows the attachment of a keyboard is being simplified. Various limits - on the number of buttons on a device, for example - are being lifted. And, of course, the multi-pointer X work (discussed at LCA2008) is finding its way into the mainline X server and into distributions.

The problems in the input side of X have received less attention, but it is still an area which has been crying out for work for some time. Now that work, too, is heading toward completion. For users of X (and that is almost all of us), life is indeed getting better.


The new GCC runtime library exemption

By Jonathan Corbet
January 27, 2009
As described in Plugging into GCC last October, the runtime library code used by the GCC compiler (which implements much of the basic functionality that individual languages need for most programs) has long carried a license exemption allowing it to be combined with proprietary software. In response to the introduction of version 3 of the GPL and the desire to add a plugin infrastructure to GCC, the FSF has now announced that the licensing of the GCC runtime code has changed. The FSF wishes to modernize this bit of licensing while, simultaneously, using it as a defense against the distribution of proprietary GCC plugins.

Section 7 of GPLv3 explicitly allows copyright holders to exempt recipients of the software from specific terms of the license. Interestingly, people who redistribute the software have the option of removing those added permissions. The new GCC runtime library license is GPLv3, but with an additional permission as described in Section 7. That permission reads:

You have permission to propagate a work of Target Code formed by combining the Runtime Library with Independent Modules, even if such propagation would otherwise violate the terms of GPLv3, provided that all Target Code was generated by Eligible Compilation Processes. You may then convey such a combination under terms of your choice, consistent with the licensing of the Independent Modules.

Anybody who distributes a program which uses the GCC runtime, and which is not licensed under GPLv3, will depend on this exemption, so it is good to understand what it says. In short, it allows the runtime to be combined with code under any license as long as that code has been built with an "Eligible Compilation Process."

The license defines a "Compilation Process" as the series of steps which transforms high-level code into target code. It does not include anything which happens before the high-level code hits the compiler. So preprocessors and code generation systems are explicitly not a part of the compilation process. As for what makes an "Eligible Compilation Process," the license reads:

A Compilation Process is "Eligible" if it is done using GCC, alone or with other GPL-compatible software, or if it is done without using any work based on GCC. For example, using non-GPL-compatible Software to optimize the GCC intermediate representation would not qualify as an Eligible Compilation Process.

This is where the license bites users of proprietary GCC plugins. Since those plugins are not GPL-compatible, they render the compilation process "ineligible" and the resulting code cannot be distributed in combination with the GCC runtime libraries. This approach has some interesting implications:

  • "GPL-compatible" is defined as allowing combination with GCC. So a compilation process which employs a GPLv2-licensed module loses eligibility.

  • This must be the first free software license which discriminates on the basis of how other code was processed. Combining with proprietary code is just fine, but combining with free software that happens to have been run through a proprietary optimizing module is not allowed. It is an interesting extension of free software licensing conditions that could well prove to have unexpected results.

  • While the use of a proprietary GCC module removes the license exemption, using a 100% proprietary compiler does not. As long as the compiler is not derived from GCC somehow, linking to the GCC runtime library is allowed.
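The rule described above can be paraphrased in a few lines of code. The function and its parameters are invented here purely for illustration; only the license text itself governs.

```python
# Paraphrase of the "Eligible Compilation Process" rule as described
# above (illustrative sketch only -- not legal advice).

def eligible(uses_gcc, all_modules_gpl_compatible, based_on_gcc):
    """Return True if target code from this compilation process may be
    combined with the GCC runtime under the exemption.

    uses_gcc:                   GCC itself is part of the process
    all_modules_gpl_compatible: every other tool or plugin involved is
                                GPL-compatible (i.e. combinable with GCC)
    based_on_gcc:               the process uses some work derived from GCC
    """
    if uses_gcc or based_on_gcc:
        # GCC (or a derivative) is involved: everything else in the
        # process must be GPL-compatible.
        return all_modules_gpl_compatible
    # No work based on GCC at all: eligible, even if the compiler
    # is 100% proprietary.
    return True

# GCC plus a proprietary optimization plugin: not eligible.
assert eligible(True, False, True) is False
# A fully proprietary compiler, not derived from GCC: eligible.
assert eligible(False, False, False) is True
# Plain GCC with only GPL-compatible tooling: eligible.
assert eligible(True, True, True) is True
```

Note that, under this reading, a GPLv2-only module also fails the `all_modules_gpl_compatible` test, since "GPL-compatible" here means combinable with GCC's GPLv3 code.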

The explanatory material released with the license change includes this text:

However, the FSF decided long ago to allow developers to use GCC's libraries to compile any program, regardless of its license. Developing nonfree software is not good for society, and we have no obligation to make it easier. We decided to permit this because forbidding it seemed likely to backfire, and because using small libraries to limit the use of GCC seemed like the tail wagging the dog.

With this change, though, the FSF is doing exactly that: using its "small libraries" to control how the to-be-developed GCC plugin mechanism will be used. It will be interesting to see how well this works; if a vendor is truly determined to become a purveyor of proprietary GCC modules, the implementation of some replacement "small libraries" might not appear to be much of an obstacle. In that sense, this licensing truly could backfire: it could result in the distribution of binaries built with both proprietary GCC modules and a proprietary runtime library.

But, then, that depends on the existence of vendors wanting to distribute proprietary compiler plugins in the first place. It is not entirely clear that such vendors exist at this point. So it may well end up that the runtime exemption will not bring about any changes noticeable by users or developers, most of whom never thought about the runtime exemption in its previous form either.


KDE 4, distributors, and bleeding-edge software

By Jake Edge
January 28, 2009

Buried deep inside a recent interview with Linus Torvalds was the revelation that he had moved away from KDE and back to GNOME—which he famously abandoned in 2005. The cause of that switch was the problems he had with KDE 4.0, which seems to be a popular reaction to that release. Various media outlets, Slashdot in particular, elevated Torvalds's switch to the headline of the interview. That led, of course, to some loud complaints from the KDE community, but also a much more thoughtful response from KDE project lead Aaron Seigo. While it is somewhat interesting to know Torvalds's choice for his desktop, there are other, more important issues that stem from the controversy.

Never one to mince words, Torvalds is clear in his unhappiness: "I used to be a KDE user. I thought KDE 4.0 was such a disaster, I switched to GNOME." But, he does go on to acknowledge that he understands, perhaps even partially agrees with, the reasons behind it:

[...] but I think they did it badly. They did so [many] changes, it was a half-baked release. It may turn out to be the right decision in the end, and I will retry KDE, but I suspect I'm not the only person they lost.

There has been a regular stream of reports of unhappy KDE users, with many folks switching to GNOME due to KDE 4.0 not living up to their expectations—or even being usable at all. Part of the problem stems from Fedora's decision to move to KDE 4 in Fedora 9, but not give users a way to fall back to KDE 3.5. When Torvalds upgraded to Fedora 9, he got a desktop that "was not as functional", leading him to go back to GNOME—though, he hates "the fact that my right button doesn't do what I want it to do", which was one of the reasons he moved to KDE in the first place.

One facet of the problem, as Seigo points out, is the race between distributions to incorporate the most leading—perhaps bleeding—edge software versions. It is clear that KDE did not do enough to communicate what it thought 4.0 was: "KDE 4.0.0 is our 'will eat your children' release of KDE4, not the next release of KDE 3.5" is how Seigo described it when it was released. That message, along with the idea that KDE 4 would not be ready to replace 3.5 until 4.1 was released, didn't really propagate, though. It was hard for users, distributions, and the press to separate the KDE vision of the future from the actual reality of what was delivered.

There clearly were users, perhaps less vocal or with fewer requirements, who stuck with KDE through the transition. The author notes that he went through the same upgrade path in Fedora without suffering any major problems. Reduced functionality and some annoyances were certainly present, but it was not enough to cause a switch to a different desktop environment. It is impossible to get any real numbers for users who switched, had a distribution that allowed them to stick with 3.5, or just muddled through until KDE 4 became more usable. But, without a doubt, the handling of the KDE 4.0 release gave the project a rather nasty black eye.

Seigo also minces few words when assigning the distributions a large part of that blame:

I have to admit that it's really hard to stay positive about the efforts of downstreams when they wander around feeling they should be above reproach while simultaneously hurting our (theirs and ours) users in a rush to be more bad ass bleeding edge than any other cool dude distro in town. I hope this time instead of handing out spankings, the distros can sit back and think about things and try and figure out how they played an unfortunate part in the 4.0 fiasco.

There is no real substitute for distributions and projects like KDE working together to determine what should be packaged up in the next distribution release. It is unclear where exactly that process broke down for Fedora 9, but it certainly led to much of the outcry about KDE 4. But, if they had it to do all over again, how would KDE have handled things differently? Projects want to make their latest releases available to users, so that testing, bug reporting, and fixing can happen. That is the service that distributions provide. But users rightly expect a certain base level of functionality in the tools that get released.

To some extent, it is a classic chicken-and-egg problem. In his defense of the 4.0 release process, Seigo notes that releases, as opposed to alphas or betas, are the only way to get attention from users and testers:

Between the rc's and the tagging of 4.0.0 the number of reports from testing skyrocketed. This is great, and shows that when I assert "people don't test when it's alpha or even beta" I'm absolutely correct. This is not about tricking people either: people seem to forget that the open source method is based on participation not consumption. So testers look for a cue to start testing; that is their form of participation. "alpha" and even "beta" is often not enough of a cue, especially today when so many of our testing users are not nearly as technically skilled with the compiler, debuggers, etc as the typical Free software user was 10 years ago.

It would be easy to just fault KDE for releasing too early, but Seigo does have a point about "participation". Likely due to their exuberance at what they had accomplished for KDE 4, the developers were blinded to the inadequacies of the release for day-to-day use—at least for some users. The project needed to clearly get the message out that it might not be usable by all and it failed to do that. It's a fine line, but for something as integral as a desktop environment, it would have been better to find a way to release with more things working. The flip side, of course, is that it takes testing to figure out what isn't working—which is part of the service users provide back to the projects.

This is not the first time we have seen this kind of thing. Red Hat, and now Fedora, have always been rather—some would say overly—aggressive about including new software into releases. Some readers will likely remember the problems with the switch to glibc-2.0 in Red Hat 5. Others may fondly recall Red Hat 7, which shipped an unreleased GCC that didn't build the kernel correctly.

We may be seeing something similar play out with the recently announced plans to include btrfs in Fedora 11. While it has been merged into the mainline kernel for 2.6.29 (due in March), it is most definitely not in its final form. There are likely to be stability issues as well as possible changes to the user-space API. There is even the possibility of an on-disk format change, though Chris Mason and the btrfs developers are hoping to avoid it.

Much like with KDE 4, btrfs will likely benefit from more users, but there is the risk that some will either miss or ignore the warnings and lose critical data in a btrfs volume. Should that turn out to be some high-profile developer who declares the filesystem to be a "disaster", it could be a setback to the adoption of btrfs.

KDE 4.2 has just been released, and early reports would indicate that it is very functional. With the problems from the KDE 4.0 release—now a year old—fading in the memory of many, a rekindling of those flames is probably less than completely welcomed by the project. But the lessons they learned, even if solutions are not obvious, are important for KDE as well as other projects. Because free software is developed and released in the open, much can be learned from other projects' mistakes. It is yet another benefit that openness provides.


Page editor: Jonathan Corbet


Book Review: Hacking VoIP

January 28, 2009

This article was contributed by Nathan Willis

If you use any flavor of voice-over-IP (VoIP) technology, whether free software or proprietary, lone softphone or multi-line office Asterisk server, then you need to take a hard look at VoIP security. Himanshu Dwivedi's Hacking VoIP: protocols, attacks, and countermeasures from No Starch Press provides a thorough, but clear, examination of the landscape. It systematically examines the core VoIP protocols, server and connection infrastructure, and social engineering weaknesses in VoIP deployment. It also provides example attacks that the reader can reproduce on test machines, and details the effective safeguards.


The book covers security for all major breeds of voice-over-IP technology: Session Initiation Protocol (SIP)-based, H.323-based, and Inter-Asterisk Exchange (IAX)-based. SIP is found in most current-generation VoIP software, such as Twinkle, KPhone, and Telepathy. H.323 is an older protocol stack, but is still in widespread use, particularly through the Ekiga project. Both SIP and H.323 use Real-time Transport Protocol (RTP) to handle audio streams. IAX handles connection management and audio data in one protocol, and is used by the Asterisk telephony server, although Asterisk can handle SIP and H.323 as well.

Part I examines each protocol in turn: SIP, H.323, RTP, and IAX. The author provides an overview of the authentication, call set-up, session management, and audio transport of each protocol stack. He then explores weaknesses and potential attacks against each protocol in depth.

Part II looks at potential attacks on VoIP networks that exploit underlying Internet infrastructure that connects both clients and servers, such as Simple Network Management Protocol (SNMP) and DNS. It also examines non-technical security threats such as phishing and Spam Over IP Telephony (SPIT), and includes a brief roundup of the security status of various widely available public VoIP services.

Part III explores how to harden VoIP systems, using encryption and secure authentication based on technologies such as Transport Layer Security (TLS), Secure RTP (SRTP), and Phil Zimmermann's ZRTP. The book concludes with a step-by-step VoIP Security Audit Program created by the author.

Dwivedi's writing makes the subject matter accessible without sacrificing detail. His explanations of topics like SIP authentication handshakes are clear enough for a novice to understand. That clarity is critical for explaining more complicated issues like Man-in-the-Middle and eavesdropping attacks that consist of a precise sequence of events.

Better still, for each loophole exploited, he provides step-by-step instructions for executing the attack in a laboratory environment. The laboratory consists of an Asterisk server and one or more client applications connecting via SIP, H.323, or IAX. Some exploits — such as username sniffing — require only common network analysis tools like Wireshark. For those that require special capabilities, like injecting SIP packets, the author provides links to the appropriate applications.

If you are not already familiar with VoIP security, the outlook may frighten you. All three protocol stacks are assailable on a number of fronts, from identity spoofing to denial-of-service, and the chinks in the armor are part of the stacks themselves, not poor implementations.

For example, SIP and H.323 both use MD5 to hash authentication credentials, making them vulnerable to offline dictionary attacks. IAX supports stronger RSA authentication in addition to MD5, but it can be downgraded to plaintext authentication with a single spoofed packet. Denial of service attacks on all three protocols are as simple as flooding the network with registration rejection, call rejection, or call termination packets. RTP eavesdropping and audio insertion are possible because RTP assumes that the connection — established by SIP or H.323 — is secure.

The good news is that the strength of SIP, H.323, and IAX can be significantly improved. TLS can secure call set-up, SRTP can harden audio transport, and careful security auditing can close holes on gateway servers and proxies. But this takes active measures; as Dwivedi observes in the book, end users and administrators often make assumptions about the security of VoIP based on their past experience with the comparatively robust security of traditional phone systems and GSM networks.

Those assumptions are by and large wrong. Dwivedi devotes a chapter to scrutinizing the security of widespread VoIP products, from free services like Google Talk and Yahoo Messenger to commercial products like Vonage. Vonage uses neither TLS nor SRTP, making it vulnerable to every attack on SIP or RTP. Yahoo and Google gain some security by using TLS on their sign-on processes, but are still exposed to a long list of exploits.

In light of that chapter, I did a brief survey of the open source VoIP scene to see which projects supported TLS, SRTP, and ZRTP; the results are not much better. A few projects, such as minisip and Twinkle, make security a priority, but most do not. Notably, Asterisk and Ekiga have long planned to support TLS and SRTP, but have yet to release a working build.

Hacking VoIP is a must-read for anyone interested in Internet telephony, whether as a developer or an end-user. Dwivedi clears away the fog surrounding VoIP security, revealing it for what it is: attainable, but only through conscious effort.

Every day, I see more and more TV commercials advertising "magic" boxes that plug in to your telephone and your broadband, allowing you to make free or cheap telephone calls. These products are undoubtedly SIP-and-RTP-based devices with no security. VoIP is still in its infancy compared to email and the Web; making security commonplace is still possible. By spreading a good understanding of the seriousness of the issues and how to solve them, this book could go a long way towards making that a reality.

Comments (2 posted)

Brief items

Need help on possible PG 8.4 security features

PostgreSQL is considering adding some security features for version 8.4 and is looking for security folks to review the code. "The PostgreSQL community is considering including security enhancements in Postgres 8.4, e.g. row-level permissions and SE-Linux security. However, to evaluate the patch and its usefulness, we need security experts who want to use this capability or have used it in other databases." Click below for the full message. (Thanks to Alvaro Herrera).

Full Story (comments: 1)

New vulnerabilities

cups: insecure tmp file usage

Package(s):cups CVE #(s):CVE-2009-0032
Created:January 26, 2009 Updated:January 28, 2009

From the Mandriva advisory:

A vulnerability has been discovered in CUPS shipped with Mandriva Linux which allows local users to overwrite arbitrary files via a symlink attack on the /tmp/pdf.log temporary file (CVE-2009-0032)

Mandriva MDVSA-2009:029 cups 2009-01-24
Mandriva MDVSA-2009:028 cups 2009-01-24
Mandriva MDVSA-2009:027 cups 2009-01-24

Comments (none posted)

DevIL: off by one error

Package(s):DevIL CVE #(s):CVE-2008-5262
Created:January 22, 2009 Updated:March 9, 2009
Description: DevIL, the Developer's Image Library, has an off-by-one error. From the Red Hat Bug entry: Multiple stack-based buffer overflows in the iGetHdrHeader function in src-IL/src/il_hdr.c in DevIL 1.7.4 allow context-dependent attackers to execute arbitrary code via a crafted Radiance RGBE file.
Gentoo 200903-04 devil 2009-03-06
Debian DSA-1717 devil 2009-02-05
Fedora FEDORA-2009-0867 DevIL 2009-01-21
Fedora FEDORA-2009-0856 DevIL 2009-01-21

Comments (none posted)

dia: arbitrary code execution

Package(s):dia CVE #(s):
Created:January 27, 2009 Updated:January 28, 2009
Description: From the Fedora advisory: Filter out untrusted python modules search path to remove the possibility to run arbitrary code on the user's system if there is a python file in dia's working directory named the same as one that dia's python scripts try to import.
Fedora FEDORA-2009-1057 dia 2009-01-27
Fedora FEDORA-2009-0943 dia 2009-01-27

Comments (none posted)

ganglia-monitor-core: arbitrary code execution

Package(s):ganglia-monitor-core CVE #(s):CVE-2009-0241
Created:January 26, 2009 Updated:June 9, 2009

From the Debian advisory:

Spike Spiegel discovered a stack-based buffer overflow in gmetad, the meta-daemon for the ganglia cluster monitoring toolkit, which could be triggered via a request with long path names and might enable arbitrary code execution.

SuSE SUSE-SR:2009:011 java, realplayer, acroread, apache2-mod_security2, cyrus-sasl, wireshark, ganglia-monitor-core, ghostscript-devel, libwmf, libxine1, net-snmp, ntp, openssl 2009-06-09
Gentoo 200903-22 ganglia 2009-03-10
Debian DSA-1710-1 ganglia-monitor-core 2009-01-25

Comments (none posted)

kernel: several vulnerabilities

Package(s):kernel CVE #(s):CVE-2009-0029 CVE-2009-0065
Created:January 27, 2009 Updated:October 5, 2009
Description: From the Fedora advisory:

CVE-2009-0029 Linux Kernel insecure 64 bit system call argument passing

CVE-2009-0065 kernel: sctp: memory overflow when FWD-TSN chunk is received with bad stream ID.

Fedora FEDORA-2009-8647 kernel 2009-08-15
Fedora FEDORA-2009-8264 kernel 2009-08-04
Fedora FEDORA-2009-6883 kernel 2009-06-23
Fedora FEDORA-2009-6846 kernel 2009-06-23
Mandriva MDVSA-2009:135 kernel 2009-06-17
SuSE SUSE-SA:2009:031 kernel 2009-06-09
SuSE SUSE-SA:2009:030 kernel 2009-06-08
Fedora FEDORA-2009-10165 kernel 2009-10-03
Fedora FEDORA-2009-5383 kernel 2009-05-25
Fedora FEDORA-2009-5356 kernel 2009-05-25
Red Hat RHSA-2009:1055-02 kernel 2009-05-19
Debian DSA-1794-1 linux-2.6 2009-05-06
Debian DSA-1787-1 linux-2.6.24 2009-05-02
CentOS CESA-2009:0331 kernel 2009-04-20
Ubuntu USN-752-1 linux-source-2.6.15 2009-04-07
Ubuntu USN-751-1 linux, linux-source-2.6.22 2009-04-07
SuSE SUSE-SA:2009:017 kernel 2009-04-03
SuSE SUSE-SA:2009:015 kernel 2009-04-03
Debian DSA-1749-1 linux-2.6 2009-03-20
Red Hat RHSA-2009:0331-01 kernel 2009-03-12
SuSE SUSE-SA:2009:010 kernel 2009-02-26
Red Hat RHSA-2009:0264-01 kernel 2009-02-10
Red Hat RHSA-2009:0053-01 kernel 2009-02-04
Fedora FEDORA-2009-0816 kernel 2009-01-21
Fedora FEDORA-2009-0923 kernel 2009-01-24

Comments (none posted)

ktorrent: arbitrary uploads, code execution

Package(s):ktorrent CVE #(s):CVE-2008-5905 CVE-2008-5906
Created:January 27, 2009 Updated:February 24, 2009
Description: From the Ubuntu advisory:

It was discovered that KTorrent did not properly restrict access when using the web interface plugin. A remote attacker could use a crafted http request and upload arbitrary torrent files to trigger the start of downloads and seeding. (CVE-2008-5905)

It was discovered that KTorrent did not properly handle certain parameters when using the web interface plugin. A remote attacker could use crafted http requests to execute arbitrary PHP code. (CVE-2008-5906)

Gentoo 200902-05 ktorrent 2009-02-23
Ubuntu USN-711-1 ktorrent 2009-01-26

Comments (none posted)

moodle: insecure temp file

Package(s):moodle CVE #(s):CVE-2008-5153
Created:January 22, 2009 Updated:June 25, 2009
Description: moodle has an insecure temp file vulnerability. From the Red Hat Bug entry: spell-check-logic.cgi in Moodle 1.8.2 allows local users to overwrite arbitrary files via a symlink attack on the /tmp/spell-check-debug.log, /tmp/spell-check-before, or /tmp/spell-check-after temporary file.
Ubuntu USN-791-1 moodle 2009-06-24
Fedora FEDORA-2009-3280 moodle 2009-04-02
Fedora FEDORA-2009-3283 moodle 2009-04-02
Debian DSA-1724-1 moodle 2009-02-13
Fedora FEDORA-2009-0819 moodle 2009-01-21
Fedora FEDORA-2009-0814 moodle 2009-01-21

Comments (none posted)

mumbles: unsafe shell usage

Package(s):mumbles CVE #(s):
Created:January 22, 2009 Updated:January 28, 2009
Description: mumbles uses the shell in an unsafe manner. From the Red Hat Bug entry: The Firefox plugin uses os.system in an insecure fashion.
Fedora FEDORA-2009-0436 mumbles 2009-01-14

Comments (none posted)

nessus-core: signature verification flaw

Package(s):nessus-core CVE #(s):CVE-2009-0125
Created:January 26, 2009 Updated:October 13, 2009

From the CVE entry:

** DISPUTED ** NOTE: this issue has been disputed by the upstream vendor. nasl/nasl_crypto2.c in the Nessus Attack Scripting Language library (aka libnasl) 2.2.11 does not properly check the return value from the OpenSSL DSA_do_verify function, which allows remote attackers to bypass validation of the certificate chain via a malformed SSL/TLS signature, a similar vulnerability to CVE-2008-5077. NOTE: the upstream vendor has disputed this issue, stating "while we do misuse this function (this is a bug), it has absolutely no security ramification."

Mandriva MDVSA-2009:271 libnasl 2009-10-12
Fedora FEDORA-2009-0577 libnasl 2009-01-16
Fedora FEDORA-2009-0636 libnasl 2009-01-16
Fedora FEDORA-2009-0577 nessus-libraries 2009-01-16
Fedora FEDORA-2009-0636 nessus-libraries 2009-01-16
Fedora FEDORA-2009-0577 nessus-core 2009-01-16
Fedora FEDORA-2009-0636 nessus-core 2009-01-16
SuSE SUSE-SR:2009:003 boinc-client, xrdp, phpMyAdmin, libnasl, moodle, net-snmp, audiofile, xterm, amarok, libpng, sudo, avahi 2009-02-02

Comments (none posted)

php: information disclosure

Package(s):php CVE #(s):CVE-2008-5498
Created:January 22, 2009 Updated:January 6, 2010
Description: php has an information disclosure vulnerability. From the Mandriva alert: An array index error in the imageRotate() function in PHP allowed context-dependent attackers to read the contents of arbitrary memory locations via a crafted value of the third argument to the function for an indexed image (CVE-2008-5498).
Gentoo 201001-03 php 2010-01-05
Fedora FEDORA-2009-3848 php 2009-04-21
Fedora FEDORA-2009-3768 php 2009-04-21
Red Hat RHSA-2009:0350-01 php 2009-04-14
Slackware SSA:2009-098-02 php 2009-04-08
CentOS CESA-2009:0338 php 2009-04-07
CentOS CESA-2009:0337 php 2009-04-06
Red Hat RHSA-2009:0337-01 php 2009-04-06
Red Hat RHSA-2009:0338-01 php 2009-04-06
Mandriva MDVSA-2009:023 php 2009-01-21
Mandriva MDVSA-2009:022 php 2009-01-21
Mandriva MDVSA-2009:021 php 2009-01-21

Comments (none posted)

scilab: insecure temp file

Package(s):scilab CVE #(s):CVE-2008-4983
Created:January 22, 2009 Updated:January 28, 2009
Description: Scilab, a scientific software package for numerical computations, uses temporary files insecurely; this can be exploited via a symlink attack.
Gentoo 200901-14 scilab 2009-01-21

Comments (none posted)

tor: heap corruption

Package(s):tor CVE #(s):
Created:January 26, 2009 Updated:January 28, 2009

From the Tor release notes:

Fix a heap-corruption bug that may be remotely triggerable on some platforms. Reported by Ilja van Sprundel.

Fedora FEDORA-2009-0917 tor 2009-01-24
Fedora FEDORA-2009-0897 tor 2009-01-24

Comments (none posted)

typo3-src: multiple vulnerabilities

Package(s):typo3-src CVE #(s):CVE-2009-0255 CVE-2009-0256 CVE-2009-0257 CVE-2009-0258
Created:January 27, 2009 Updated:February 11, 2009
Description: From the Debian advisory:

Chris John Riley discovered that the TYPO3-wide used encryption key is generated with an insufficiently random seed resulting in low entropy which makes it easier for attackers to crack this key. (CVE-2009-0255)

Marcus Krause discovered that TYPO3 is not invalidating a supplied session on authentication which allows an attacker to take over a victim's session via a session fixation attack. (CVE-2009-0256)

Multiple cross-site scripting vulnerabilities allow remote attackers to inject arbitrary web script or HTML via various arguments and user-supplied strings used in the indexed search system extension, adodb extension test scripts or the workspace module. (CVE-2009-0257)

Mads Olesen discovered a remote command injection vulnerability in the indexed search system extension which allows attackers to execute arbitrary code via a crafted file name which is passed unescaped to various system tools that extract file content for the indexing. (CVE-2009-0258)

Debian DSA-1720-1 typo3-src 2009-02-10
Debian DSA-1711-1 typo3-src 2009-01-26

Comments (none posted)

vnc: arbitrary code execution

Package(s):vnc CVE #(s):CVE-2008-4770
Created:January 27, 2009 Updated:March 9, 2009
Description: From the CVE entry: The CMsgReader::readRect function in the VNC Viewer component in RealVNC VNC Free Edition 4.0 through 4.1.2, Enterprise Edition E4.0 through E4.4.2, and Personal Edition P4.0 through P4.4.2 allows remote VNC servers to execute arbitrary code via crafted RFB protocol data, related to "encoding type."
Gentoo 200903-17 vnc 2009-03-09
CentOS CESA-2009:0261 vnc 2009-02-11
Red Hat RHSA-2009:0261-01 vnc 2009-02-11
Fedora FEDORA-2009-0991 vnc 2009-01-27
Debian DSA-1716-1 vnc4 2009-01-31
Fedora FEDORA-2009-1001 vnc 2009-01-27

Comments (none posted)

xine-lib: multiple vulnerabilities

Package(s):xine-lib CVE #(s):CVE-2008-5233 CVE-2008-5241 CVE-2008-5245 CVE-2008-5246
Created:January 22, 2009 Updated:June 1, 2010
Description: xine-lib has multiple vulnerabilities. From the Mandriva alert:

Failure on manipulation of either MNG or Real or MOD files can lead remote attackers to cause a denial of service by using crafted files (CVE: CVE-2008-5233).

Integer underflow allows remote attackers to cause denial of service by using Quicktime media files (CVE-2008-5241).

Vulnerabilities of unknown impact - possibly buffer overflow - caused by a condition of video frame preallocation before ascertaining the required length in V4L video input plugin (CVE-2008-5245).

Heap-based overflow allows remote attackers to execute arbitrary code by using crafted media files. This vulnerability is in the manipulation of ID3 audio file data tagging mainly used in MP3 file formats (CVE-2008-5246).

Gentoo 201006-04 xine-lib 2010-06-01
Mandriva MDVSA-2009:319 xine-lib 2009-12-05
SuSE SUSE-SR:2009:004 apache, audacity, dovecot, libtiff-devel, libvirt, mediawiki, netatalk, novell-ipsec-tools,opensc, perl, phpPgAdmin, sbl, sblim-sfcb, squirrelmail, swfdec, tomcat5, virtualbox, websphere-as_ce, wine, xine-devel 2009-02-17
Mandriva MDVSA-2009:020 xine-lib 2009-01-21
Ubuntu USN-710-1 xine-lib 2009-01-26

Comments (none posted)

xine-lib: multiple vulnerabilities

Package(s):xine-lib CVE #(s):CVE-2008-5238 CVE-2008-5242 CVE-2008-5244 CVE-2008-5248
Created:January 27, 2009 Updated:June 1, 2010
Description: From the Ubuntu advisory:

It was discovered that the Matroska, MOD, Real, and Real Audio demuxers in xine-lib did not correctly handle malformed files, resulting in integer overflows. If a user or automated system were tricked into opening a specially crafted Matroska, MOD, Real, or Real Audio file, an attacker could execute arbitrary code as the user invoking the program. This issue only applied to Ubuntu 6.06 LTS, 7.10, and 8.04 LTS. (CVE-2008-5238)

It was discovered that the QT demuxer in xine-lib did not correctly handle an invalid metadata atom size, resulting in a heap-based buffer overflow. If a user or automated system were tricked into opening a specially crafted MOV file, an attacker could execute arbitrary code as the user invoking the program. (CVE-2008-5234, CVE-2008-5242)

It was discovered that xine-lib did not correctly handle certain malformed AAC files. If a user or automated system were tricked into opening a specially crafted AAC file, an attacker could cause xine-lib to crash, creating a denial of service. This issue only applied to Ubuntu 7.10 and 8.04 LTS. (CVE-2008-5244)

It was discovered that xine-lib did not correctly handle MP3 files with metadata consisting only of separators. If a user or automated system were tricked into opening a specially crafted MP3 file, an attacker could cause xine-lib to crash, creating a denial of service. This issue only applied to Ubuntu 6.06 LTS, 7.10, and 8.04 LTS. (CVE-2008-5248)

Gentoo 201006-04 xine-lib 2010-06-01
Mandriva MDVSA-2009:298 xine-lib 2009-11-13
SuSE SUSE-SR:2009:004 apache, audacity, dovecot, libtiff-devel, libvirt, mediawiki, netatalk, novell-ipsec-tools,opensc, perl, phpPgAdmin, sbl, sblim-sfcb, squirrelmail, swfdec, tomcat5, virtualbox, websphere-as_ce, wine, xine-devel 2009-02-17
Ubuntu USN-710-1 xine-lib 2009-01-26

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current 2.6 development kernel is 2.6.29-rc3, released on January 28. Some 430 changesets were merged since 2.6.29-rc2; most of these are fixes, but there's also a reorganization of the filesystem Kconfig files, a couple of drivers for the i.MX31 processor, a driver for TI OMAP High Speed Multimedia card interfaces, and a driver for Freescale QUICC Engine USB host controllers. The short-form changelog is in Linus's announcement; see the full changelog for lots of details.

The current stable 2.6 kernel was released on January 24; an update to the previous stable series was released at the same time. Both contain a fairly long list of fixes for a number of serious problems.

Comments (none posted)

Kernel development news

Quotes of the week

We've already demonstrated "look how much stuff we can merge" time and time again, but no-one ever seems to have a proposal for how we increase the amount of review code gets before it's merged.

There's lowering the barrier for entry, and there's not having a barrier at all. The latter is what I'm concerned that staging/ has become.

-- Dave Jones

In fact, i claim that doing "real review" on butt-ugly code is a waste of time and resources, and that it is also _harmful_. By doing real review on something that is not even right stylistically, you insert value into it. That way you _encourage_ that author of that ugly piece of code to contribute more code in the same fashion. You indirectly harm Linux that way because you encourage bad taste.

I strongly support the notion that high-level review is only warranted on code that is reviewable and looks tasteful, and that code which doesn't meet basic style should not be merged at all.

-- Ingo Molnar

This *looks* like the kind of naive question a newbie would ask. And a poor coder would simply patch in the increase. A reasonable coder would also make a comment about the potential bloat. A good coder would ask why you need more than fifty_five_characters_in_one_single_exported_identifier.

But you're operating on a completely different level!

You chose this example to demonstrate, by (if I may) expandio ad absurdum, that our current approach is flawed. Obviously you *knew* that it could be converted to a pointer, and equally obviously this would require us to process relocations before parsing version symbols. Clearly, you understood that this would mean we had to find another solution for struct module versioning, but you knew that that was always the first symbol version anyway.

You no-doubt knew that we could potentially save 7% on our module size using this approach. But obviously not wanting to criticize my code, you instead chose this oh-so-subtle intimation where I would believe the triumph to be mine alone!

I am humbled by your genius, and I only hope that my patch series approaches the Nirvanic perfection you foresaw.

-- Rusty Russell

Comments (2 posted)

LCA: A new approach to asynchronous I/O

By Jonathan Corbet
January 27, 2009
Asynchronous I/O has been a problematic issue for the Linux kernel for many years. The current implementation is difficult to use, incomplete in its coverage, and hard to support within the kernel. More recently, there has been an attempt to resolve the problem with the syslet concept, wherein kernel threads would be used to make almost any system call potentially asynchronous. Syslets have their own problems, though, not the least of which being that their use can cause a user-space process to change its process ID over time. Work on this area has slowed, with few updates being seen since mid-2007.

Zach Brown is still working on the asynchronous I/O problem, though; he used his talk to discuss his current approach. The new "acall" interface has the potential to resolve many of the problems which have been seen in this area, but it is early-stage work which is likely to evolve somewhat before it is seriously considered for mainline inclusion.

One of the big challenges with asynchronous kernel operations is that the kernel's idea of how to access task state is limited. For the most part, system calls expect the "current" variable to point to the relevant task structure. That proves to be a problem when things are running asynchronously, and, potentially, no longer have direct access to the originating process's state. The current AIO interface resolves this problem by splitting things into two phases: submission and execution. The submission phase has access to current and is able to block, but the execution phase is detached from all that. The end result is that AIO support requires a duplicate set of system call handlers and a separate I/O path. That, says Zach, is "why our AIO support still sucks after ten years of work."

The fibril or syslet idea replaces that approach with one which is conceptually different: system call handlers remain synchronous, and kernel threads are used to add asynchronous operation on top. This work has taken the form of some tricky scheduler hacks; if an operation which is meant to be asynchronous blocks, the scheduler quickly shifts over to another thread and returns to user space in that thread. That allows the preservation of the state built up to the blocking point and it avoids the cost of bringing in a new thread if the operation never has to block. But these benefits come at the cost of changing the calling process's ID - a change which is sure to cause confusion.

When Zach inherited this work, he decided to take a fresh look at it with the explicit short-term goal of making it easy to implement the POSIX AIO specification. Other features, such as syslets (which allow a process to load a simple program into the kernel for asynchronous execution) can come later if it seems like a good idea. The end result is the "acall" API; this code has not yet been posted to the lists for review, but it is available from Zach's web site.

With this interface, a user-space process specifies an asynchronous operation with a structure like this:

    struct acall_submission {
	u32 nr;
	u32 flags;
	u64 cookie;
	u64 completion_ring_pointer;
	u64 completion_pointer;
	u64 id_pointer;
	u64 args[6];
    };

In this structure, nr identifies which system call is to be invoked asynchronously, while args is the list of arguments to pass to that system call. The cookie field is a value used by the calling program to identify the operation; it should be non-zero if it is to be used. The flags and various _pointer fields will be described shortly.

To submit one or more asynchronous requests, the application will call:

    long acall_submit(struct acall_submission **submissions,
                      unsigned long nr);

submissions is a list of pointers to requests, and nr is the length of that list. The return value will be the number of operations actually submitted. If something goes wrong in the submission process, the current implementation will return a value less than nr, but the error code saying exactly what went wrong will be lost if any operations were submitted successfully.

By default, acall_submit() will create a new kernel thread for each submitted operation. If the flags field for any request contains ACALL_SUBMIT_THREAD_POOL, that request will, instead, be submitted to a pool of waiting threads. Those threads are specific to the calling process, and they will only sit idle for 200ms before exiting. So submission to the thread pool may make sense if the application is submitting a steady stream of asynchronous operations; otherwise the kernel will still end up creating individual threads for each operation. Threads in the pool do not update their task state before each request, so they might be behind the current state of the calling process.

If the id_pointer field is non-NULL, acall_submit() will treat it as a pointer to an acall_id structure:

    struct acall_id {
	unsigned char opaque[16];
    };

This is a special value used by the application to identify this operation to the kernel. Internally it looks like this:

    struct acall_kernel_id {
	u64 cpu;
	u64 counter;
    };

It is, essentially, a key used to look up the operation in a red-black tree.

The completion_pointer field, instead (if non-NULL), points to a structure like:

    struct acall_completion {
	u64 return_code;
	u64 cookie;
    };

The final status of the operation can be found in return_code, while cookie is the caller-supplied cookie value. Once that cookie has a non-zero value, the return code will be valid.

The application can wait for the completion of specific operations with a call to:

    long acall_comp_pwait(struct acall_id **uids,
			  unsigned long nr,
			  struct timespec  *utime,
			  const sigset_t *sigmask,
			  size_t sigsetsize);

The uids array contains pointers to acall_id structures identifying the operations of interest; nr is the length of that array. If utime is not NULL, it points to a timespec structure specifying how long acall_comp_pwait() should wait before giving up. A set of signals to be masked during the operation can be given with sigmask and sigsetsize. A return value of one indicates that at least one operation actually completed.

An application submitting vast numbers of asynchronous operations may want to avoid making another system call to get the status of completed operations. Such applications can set up one or more completion rings, into which the status of completed operations will be written. A completion ring looks like:

    struct acall_completion_ring {
	uint32_t head;
	uint32_t nr;
	struct acall_completion comps[0];
    };

Initially, head should be zero, and nr should be the real length of the comps array. When the kernel is ready to store the results of an operation, it will first increment head, then put the results into comps[head % nr]. So a specific entry in the ring is only valid once the cookie field becomes non-zero. The kernel makes no attempt to avoid overwriting completion entries which have not yet been consumed by the application; it is assumed that the application will not submit more operations than will fit into a ring.

The actual ring to use is indicated by the completion_ring_pointer value in the initial submission. Among other things, that means that different operations can go into different rings, or that the application can switch to a differently-sized ring at any time. In theory, it also means that multiple processes could use the same ring, though waiting for completion will not work properly in that case.

If the application needs to wait until the ring contains at least one valid entry, it can call:

    long acall_ring_pwait(struct acall_completion_ring *ring,
			  u32 tail, u32 min,
			  struct timespec  *utime,
			  const sigset_t *sigmask,
			  size_t sigsetsize);

This call will wait until the given ring contains at least min events since the one written at index tail. The utime, sigmask, and sigsetsize arguments have the same meaning as with acall_comp_pwait().

Finally, an outstanding operation can be canceled with:

    long acall_cancel(struct acall_id *uid);

Cancellation works by sending a KILL signal to the thread executing the operation. Depending on what was being done, that could result in partial execution of the request.

This API is probably subject to change in a number of ways. There is, for example, no limit to the size of the thread pool other than the general limit on the number of processes. Every request is assigned to a thread immediately, with threads created as needed; there is no way to queue a request until a thread becomes available in the future. The ability to load programs into the kernel for asynchronous execution ("syslets") could be added as well, though Zach gave the impression that he sees syslets as a relatively low-priority feature.

Beyond the new API, this asynchronous operation implementation differs from its predecessors in a couple of ways. Requests will always be handed off to threads for execution; there is no concept of executing synchronously until something blocks. That may increase the overhead in cases where the request could have been satisfied without blocking, though the use of the thread pool should minimize that cost. But the big benefit is that the calling process no longer changes its ID when things do block. That results in a more straightforward user-space API with minimal surprises - certainly a good thing to do.

Linus was at the presentation, and seemed to think that the proposed API was not completely unreasonable. So it may well be that, before too long, we'll see a version of the acall API proposed for the mainline. And that could lead to a proper solution to the asynchronous I/O problem at last.

Comments (29 posted)

Snet and the LSM API

By Jake Edge
January 28, 2009

A new security module, called snet (which is short for "security for network syscalls") was recently posted as an RFC on the linux-security-module mailing list. Its purpose is rather simple—much simpler than the two current mainline users of the LSM interface—intercept system calls for networking and call out to user space to determine if they are to be allowed. The idea is to be able to create Linux versions of the "personal firewall" that is popular on Windows machines. Reaction to snet was mixed, partially because of a disdain for that type of security tool, but also because it is implemented using LSM.

Snet, developed by Samir Bellabes, consists of a kernel piece which uses LSM to hook the "interesting" socket-related system calls (socket(), bind(), connect(), listen(), and accept()), as well as a user space library that can be used to accept or deny those calls. Communication between the kernel and user space is handled by a netlink socket using libnl. The decisions are then cached in the kernel to reduce the number of calls required to user space. That last part is important because personal firewalls typically pop up a request on the user's display asking them to decide whether to allow the system call. Timeouts can be established for the user-space calls, along with a default response if the timeout is reached.
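The cache-plus-timeout flow described above is easy to model in user space. The following Python sketch is only an illustration of the idea; the class and function names are invented, and snet's real kernel/user communication runs over netlink rather than a Python callback:

```python
class VerdictCache:
    """Toy model of snet's described flow: cache verdicts per
    (syscall, target) key, ask user space on a cache miss, and fall
    back to a default verdict when the user does not answer in time."""

    def __init__(self, ask_user, default=False):
        self.cache = {}
        self.ask_user = ask_user  # returns True/False, or None on timeout
        self.default = default

    def allow(self, syscall, target):
        key = (syscall, target)
        if key in self.cache:
            # Cached decision: no round trip to user space needed.
            return self.cache[key]
        verdict = self.ask_user(syscall, target)
        if verdict is None:
            # The user never answered: apply the configured default.
            verdict = self.default
        self.cache[key] = verdict
        return verdict

# A pretend user who allows connect() to port 80 and ignores all else.
def ask(syscall, target):
    return True if (syscall, target) == ("connect", 80) else None

cache = VerdictCache(ask, default=False)
print(cache.allow("connect", 80))   # True: the user said yes
print(cache.allow("listen", 8080))  # False: timeout, default deny
print(cache.allow("connect", 80))   # True again, served from the cache
```

The cache is what keeps a dialog from popping up on every system call; only previously unseen (syscall, target) pairs reach the user.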

This "user request" feature of personal firewalls is one thing that many find objectionable. As Paul Moore puts it: "my opinion is that it is a poor option for security and typically only results in training the user to click the 'allow' button when the pfwall [dialog] box pops up on his/her screen". Yet it is a "feature" of other operating systems and not completely unreasonable for Linux to support. From that perspective, snet seems like a reasonable starting point.

There are a few other problems, though, stemming from the decision to use the LSM API. Peter Dolding thinks this capability should be added to netfilter rather than built as a standalone solution. Others pointed out that netfilter operates at too low a level to have any context about the users or processes performing these operations. That could change, but it would take a concerted effort to rework the netfilter code, which doesn't seem likely in the near term, if ever.

A larger problem comes from the inability to stack LSM modules. Users interested in the kinds of protection that snet can provide must forgo any other LSM-based security solution (e.g. SELinux, Smack, AppArmor, or TOMOYO). A parallel discussion about LSM stacking is also occurring on linux-security-module, partially motivated by the needs of snet and other "smaller" security solutions. Those tools do not implement a full-scale security solution à la SELinux or Smack, but instead try to handle a smaller subset of the problem.

LSM stacking also came up at the LCA security panel, so it is certainly on the minds of Linux security developers. Casey Schaufler sums up the current state of affairs along with a look to a possible future:

Stacking of special purpose LSMs would be a great idea. One reason that we don't have special purpose LSMs is that you can't stack them, you have to provide the entire "solution" in the one LSM. Of course, complete solutions don't stack.

I would be very interested to see an LSM that does nothing but multiplex other LSMs. That would make multiple unrelated LSMs feasible without trying to create something that could deal with SELinux's and Smack's different notions of network access control model. You could revive the notion of loadable modules while you're at it. The LSM Multiplexer LSM could put any restrictions on the LSMs it is willing to support.

It seems likely that someone will try to build an LSM-multiplexer before too long. In addition to snet, the TuxGuardian project appears to be reawakening after a period of quiet. It is similar to snet, and also uses LSM to trap network accesses. Other projects are also mentioned in the threads on linux-security-module. In the end, it is just too limiting to require that all security modules implement a full-scale security solution, and since LSM is the only accepted way to implement some of these hooks, some middle ground will likely be found.
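Schaufler's multiplexer idea amounts to restrictive composition: consult every registered module and deny the operation if any single one denies it. A minimal Python simulation of that decision logic follows; all class and hook names are invented, and a real multiplexer LSM would have to cover the full set of LSM hooks, not just one:

```python
class MuxLSM:
    """Toy model of a multiplexer LSM: it registers several small
    modules and composes their decisions restrictively, so that any
    single deny wins."""

    def __init__(self):
        self.modules = []

    def register(self, module):
        # A real multiplexer could put restrictions on which
        # modules it is willing to support.
        self.modules.append(module)

    def socket_connect(self, process, addr):
        # Restrictive composition: every stacked module must allow.
        return all(m.socket_connect(process, addr) for m in self.modules)

class AllowAll:
    """A permissive module that never objects."""
    def socket_connect(self, process, addr):
        return True

class BlockPort:
    """A special-purpose module that denies one destination port."""
    def __init__(self, port):
        self.port = port
    def socket_connect(self, process, addr):
        host, port = addr
        return port != self.port

mux = MuxLSM()
mux.register(AllowAll())
mux.register(BlockPort(23))
print(mux.socket_connect("telnet", ("example.com", 23)))  # False: one deny wins
print(mux.socket_connect("wget", ("example.com", 80)))    # True: all allowed
```

This also shows why complete solutions don't stack: two modules with incompatible models (say, SELinux's and Smack's notions of network access control) cannot be meaningfully composed by a simple "all must allow" rule.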

In another related thread, Schaufler notes that a lot of what is being described for personal firewalls could be implemented using SELinux—at least as a starting point. One sticking point to that particular solution is the user interaction required. It is hard to see how an SELinux-derived solution could interact with the user for some decisions, but not others. It also is clearly outside of the scope of what SELinux is intended for.

While snet may implement "bad security" in some minds, the discussion about it, especially with regard to LSM stacking, has been very valuable. It may turn out that there is no way to stack arbitrary security modules that a) makes sense and b) doesn't drive all of the security developers insane. But there are some reasonable use cases for that capability, so an investigation of the possibilities seems warranted. With luck we will soon see where it leads.

Comments (3 posted)

A SystemTap update

January 21, 2009

This article was contributed by Mark Wielaard

SystemTap has been under active development for some years; more than 35 people have contributed enhancements in the last year alone. But newer developments, like the ability to dynamically trace user-space programs, were introduced rather quietly and have not always been noticed by users who are not yet using SystemTap extensively. So this article will take a look at what currently works out of the box, what that box should contain to make things work, the work in progress, and the challenges SystemTap faces to become more powerful and gain more widespread adoption.

SystemTap's goal is to provide full-system observability on production systems in a way that is safe, non-intrusive, and (near) zero-overhead, while allowing ubiquitous data collection for any interesting event that could happen across the whole system. To achieve this goal, SystemTap defines the stap language, in which the user describes probes, actions, and data acquisition. The SystemTap translator and runtime guarantee that probe points are only placed at safe locations and that probe functions cannot generate too much overhead when collecting data. For dynamic probes on addresses inside the kernel, SystemTap uses kprobes; for dynamic probes in user-space programs, it uses its cousin uprobes [PDF]. This provides a unified way of probing, and then collecting data for observing, the whole system. To dynamically find locations for probe points, the arguments of the probed functions, and the variables in scope at the probe point, SystemTap uses the standard debugging information (DWARF debuginfo) that the compiler generates.

So, to provide an ideal setting for SystemTap, GNU/Linux distributions should provide easy access to debuginfo for the kernel and user-space programs; almost all distributors do this. The kernel must support kprobes, which has been upstream for some years, and uprobes, which ships with (and is automatically loaded by) SystemTap but relies on the full utrace framework, which isn't yet in the mainline kernel. (The latest few releases of the Fedora family, including Red Hat Enterprise Linux and CentOS, do include full utrace support by default.) SystemTap works without debuginfo, but the range of probes and the amount of data you can collect is then very limited. It also works without utrace support, but then you won't be able to do deep user-space probing, only observe direct user/kernel space interactions.

There are various probe variants one can use with SystemTap, but the most interesting ones are the debuginfo-based probes for the kernel, kernel modules, and user space applications. These can use function, statement or return variants, and wildcards, such as:

  • kernel.function("rpc_new_task"): a named kernel function,
  • process("/bin/ls").function("*"): any function entry in a specific process,
  • module("usb*").function("*sync*").return: every return of a function containing the word sync, in any module starting with usb, or
  • kernel.statement("bio_init@fs/bio.c+3"): for a specific statement in a particular file.

Depending on the type of probe, one can access specifics of the probe point. For the debuginfo-based probes these are $var for in-scope variables or function arguments, $var->field for accessing structure fields, $var[N] for array elements, $return for the return value of a function in a return probe, and meta-variables like $$vars to get a string representation of all the in-scope variables at a particular probe point. All accesses to such constructs are safeguarded by the SystemTap runtime to make sure no illegal accesses can occur.

Given that one has the debuginfo of a program installed, one can easily get a simple call trace of a specific program, including all function parameters and return values with the following stap script:

  probe process("/bin/ls").function("*").call {
    printf("=>%s(%s)\n", probefunc(), $$parms)
  }

  probe process("/bin/ls").function("*").return {
    printf("<=%s:%s\n", probefunc(), $$return)
  }

The examples included with SystemTap come with much more powerful versions that show timed, per-thread call graphs, optionally showing only children of a particular function call.

While these probing and data extraction constructs are powerful, they do require some knowledge of the kernel or program code base. Since you are often interested in what is happening and not precisely how, SystemTap comes with "tapsets," which are utility functions and aliases for groups of interesting probes in a particular subsystem. Examples include system calls, NFS operations, signals, sockets, etc. Currently these tapsets are distributed with SystemTap itself, but ideally each program or subsystem would come with its own tapset of interesting events provided by the program or subsystem maintainer.

Just printing out events while they occur is not always ideal. First, you may be overwhelmed by volume of the output; second, you might only be interested in a specific subset of the same event (only certain parameters, only calls that take longer than a specific time, only from the process that does the most calls over a specific time frame, etc.). Finally, processing all the events on your production system might interfere with the thing you are trying to observe. Especially at the start of your investigations, when you might not yet be sure what the interesting events are, you may do some very wide probing to see what is going on.

For this reason the stap language supports variables that can be used as associative arrays, simple control structures and data aggregation functions to do simple statistics during probe time, with very low overhead and without having to call external programs that might interfere with the system being probed.

The following script might be how you would start investigating a problem involving a system which seems to do an excessive amount of reads. It uses the "vfs" tapset and an associative array to store the number of reads a particular executable with a specific process ID does:

  global totals

  probe vfs.read {
    totals[execname(), pid()]++
  }

  probe end {
    printf("== totals ==\n")
    foreach ([name,pid] in totals-)
      printf("%s (%d): %d\n", name, pid, totals[name,pid])
  }

This will give you a list of executables and their PIDs, sorted by the total number of vfs reads done while the script was running. These facilities in the stap language help greatly to minimize the overhead of the tracing framework. If you tried to do the same thing by just printing each vfs event and then post-processing the results with Perl, you might end up with Perl itself being the process doing the most vfs calls; or worse, by having to parse megabytes of trace data, Perl might start thrashing the system even more, making it harder to determine the root cause of the original problem.

SystemTap now also supports static markers in the kernel. This allows subsystem maintainers to mark specific events as interesting, providing a format string of the arguments to the event that can be easily parsed by tracing tools. The advantage of static markers over tapsets is that they are in-code and so might be easier to maintain, though you probably still want to have an associated tapset for utilities to nicely format the arguments or associate various markers with each other. Also, they can work without needing any DWARF debuginfo around, but you lose the ability to inspect local variables or function parameters not passed to the marker. You use them with a command like:

    probe kernel.mark("kernel_sched_wakeup")

The tapset can then access the arguments through $argN and get the argument format string of the marker with $format.

An alternate way of adding static markers to the kernel, tracepoints, is not yet directly supported in SystemTap. Tracepoints have the disadvantage that they require the DWARF debuginfo to be around because they don't currently specify the types of their arguments except through their function prototypes. So SystemTap can currently only use tracepoints via hand-written intermediary code that maps them to markers.

The development version of SystemTap recently got support for user-space static markers. SystemTap defines its own STAP_PROBE macros for applications that want to add static markers, but there is also an alternative tracing tool, DTrace, with its own way for programs to embed static markers. SystemTap supports the DTrace convention by providing an alternative include file and build preprocessor, so that programs using DTRACE_PROBE macros can be compiled as if for DTrace and have their static markers show up in SystemTap.

Luckily, various programs already have such markers defined. PostgreSQL, for example, has static markers to trace higher-level events like transactions and database locks. Currently one has to adapt the build process of such programs by hand, but the next version of SystemTap will come with scripts to automate that process.

While SystemTap works well on GNU/Linux distributions that support it, there are a couple of challenges to overcome to make it more ubiquitous and easier for more people to use out of the box. This goes beyond work on the SystemTap code base itself. Since the goal is to provide full system observability, from low-level kernel events to high-level application events, there is work to be done all across the GNU/Linux stack. Also needed is better integration into more distributions, providing default installation of SystemTap and tapsets, easy access to debuginfo for deep inspection, binaries compiled with marker support for high-level events, etc. The two main challenges to make SystemTap more powerful and easier to use on any distribution are debuginfo and better kernel support.

A lot of SystemTap's power comes from the fact that it can use DWARF debuginfo from the kernel and applications to do very detailed inspection. But this power comes at a price, since the debuginfo is often large; on Fedora, for example, the kernel debuginfo package is far larger than the kernel package itself. One easy win will be to split the debuginfo package into the DWARF files and the source files; the latter are needed for a debugger, but not directly for a tracer like SystemTap. Fedora plans to do this for its next release. The elfutils team is also working on a framework for DWARF transformation and compression that could be used as a post-processor on the output of the compiler.

SystemTap sometimes suffers from the same issues you might have with a debugger: the compiler has optimized the code, but forgot where it put a certain variable after the optimization. Of course this is always the variable you are most interested in. Alexandre Oliva is working on improving the local variable debug information in GCC. His variable tracking assignments [PDF] branch in GCC aims to improve debug information by annotating assignments early in the compilation process and carrying over such annotations throughout all optimization passes so that you can always accurately track variables, even in optimized code.

Finally, there is work being done on a SystemTap "client and server" arrangement for production systems where you might not want any tools or debuginfo installed at all. You set up a development client that has the same configuration as the production system, plus the SystemTap translator and all debuginfo, and create and test your scripts there. The final result can then be used on the bare-bones production server.

Most of the SystemTap runtime, like the kprobes support, is maintained in the upstream Linux kernel, but some pieces are still missing, which forces distributions to add patches to their kernels, especially to support user-space tracing. In particular, the utrace framework is still not upstream. Over the last few kernel releases, various parts have been merged, including the utrace user_regset framework, which creates an interface for code accessing the user-space view of any machine-specific state, and the tracehook work, which provides a framework for all user-process tracing. The actual utrace framework sits on top of these components; the ptrace() interface is implemented as a utrace client. Anything that changes the ptrace implementation is hairy stuff, so there is a large ptrace test suite to make sure that nothing breaks. One idea under consideration is to push utrace upstream in two installments: at first, using utrace or ptrace on a process would be mutually exclusive. That could pave the way to get pure utrace upstream first and then add proper ptrace cooperation in a second pass.

This approach would also provide a way for uprobes, which depends on the utrace framework, to be submitted upstream. Uprobes components such as breakpoint insertion and removal and the single-stepping infrastructure are also potentially useful for other user-space tracers and debuggers. As with utrace, one idea is to factor out these portions of uprobes so that they can be used by multiple clients as a shared user-space breakpoint support (ubs) layer. With multiple clients using the same layer, upstream acceptance might be easier.

One candidate for using both the utrace and uprobes layers besides SystemTap is Froggy, which provides an alternative debugger interface to ptrace. The GDB Archer project would like to serve as a testbed for Froggy, which they hope will also make GDB more robust when linked with libpython, which is being used for GDB scripting.

In the past, kernel maintainers were skeptical about tracing, which resulted in tracing frameworks like dprobes, LTT, and parts of the SystemTap runtime being maintained outside the main kernel tree. But now that there is no shortage of tracing options in the kernel, people like Ted Ts'o have been urging the SystemTap hackers to push as much as possible upstream. Ted also encourages the developers to treat kernel hackers as first-rate customers, rather than focusing exclusively on the whole-system experience for production setups. The SystemTap developers have been successful in making their module support "just work" with any kernel: it currently works with kernel versions from 2.6.9 to the latest, 2.6.28, and is regularly tested against the latest -rc kernels. But maybe they have been a little too successful; making this activity more visible on the Linux kernel mailing list would be good publicity. In response, there is now an active SystemTap bug called "Make upstream kernel developers happy" that calls for more frequent postings to the main kernel mailing list, improvements in the usage of debuginfo as described above, and pushing utrace and uprobes upstream as a priority.

There is still work to do, but over the last couple of years the GNU/Linux tracing and debugging experience has kept improving. Hopefully soon, all these parts will fall into place and provide hackers with a fairly nice environment for not only debugging on development systems, but also for unobtrusive tracing on production systems.

About the author: Mark Wielaard is a senior software engineer at Red Hat, working in the Engineering Tools group hacking on SystemTap.

Comments (18 posted)

Page editor: Jonathan Corbet


News and Editorials

Fedora looks to prevent upgrade disasters

January 28, 2009

This article was contributed by Bruce Byfield

The Fedora project is getting creative about ways to ensure that updates cause fewer problems for users. In the past six weeks, project members have floated over half a dozen ideas about how to achieve this goal on fedora-devel-list alone — and, no doubt, other, unrecorded ones on chat channels, in private emails, and at FUDCon, the project's user and developer conference, held in mid-January. Which of these ideas will be implemented is still undecided, but the discussion is a treasury of ideas, as well as a vivid glimpse into the considerations involved in running one of the largest GNU/Linux distributions.

The discussion began in early December 2008, after an update to D-Bus, a core package that carries messages between applications, broke numerous packages when applied to Fedora 10. Users were particularly concerned because installing the update left them unable to use PackageKit, Fedora's desktop tool for package upgrades. Fedora was quick to issue instructions for fixing the problem, but the project's developers appear to have been galvanized by it, and are determined to avoid similar problems in the future.

Very likely, the response was affected by Fedora's problems of the last six months, including the still-mysterious security crisis that lasted 26 days last August and September, and the need to adjust release schedules because of those security problems. With these events fresh in everybody's minds, Fedora members may well have felt pressure to prove themselves by responding effectively. In fact, the quickness of the responses might suggest that the Fedora community was still in crisis mode from those earlier events.

It was also worrying that, early in the response to the D-Bus crisis, Fedora developers openly admitted that they lacked a complete understanding of what was affected. "Does anyone have an understanding of exactly what is broken [and] what isn't?" developer Ian Amess asked, and, in the following discussion, it appeared that nobody did. At times, developers were reduced to anecdotal reporting, such as Arjan van de Ven's note that "I have a strong suspicion that the kerneloops applet is broken (based on a sharp drop of incoming reports since a few days)." Without thorough information, Fedora troubleshooters were unable to say whether the fastest repair was to issue an update or to revert to an earlier version of D-Bus.

In this situation, plans to avoid recurrences began to be suggested even before the immediate problem was solved. One of the first solutions on fedora-devel-list came from Kevin Kofler, who advocated reverting to the previous version of D-Bus and only changing its version with new Fedora releases. Similarly, a simultaneous thread discussed the possibility of creating a list of key packages that should receive priority in Fedora quality assurance, with Will Woods suggesting that the list should include yum, NetworkManager, GRUB, and the kernel, along with all of their dependencies.

Yet another discussion centered on the karma system, in which developers vote on the readiness of packages in quality assurance. As summarized by Michael Schwendt, the consensus in this discussion was that several communication problems existed: maintainers could choose the urgency of the notifications of bugs in their packages, responses to bugs were left to maintainers' judgment, and so were efforts to coordinate testing between maintainers whose packages shared dependencies. In other words, responses to problems are not uniform; no quality standards exist, nor any expectations of cooperation. Instead, the response is left to the conscientiousness of each maintainer.

In addition, submitters could vote on the packages they submitted themselves, potentially reducing the scrutiny of others. Nor did the Fedora system have any minimum level of karma to signal when a package was ready to be added to the stable repositories, leaving that, once again, to the standards of the maintainers. Further insight into Fedora quality assurance was given by Luke Macken on his blog, where he calculated that the majority of packages were released for general use in as little as six days, often simply at the maintainer's request, statistics that might suggest quality assurance is less rigorous than it could be.

As discussion continued over the weeks, other threads took up innovations that might prevent recurrences. Arthur Pemberton advocated what he called a "Fedora Com System" — a kind of hot line on the desktop that would allow Fedora leaders to communicate directly with users. However, others maintained that fedora-announce-list already provides a similar service, especially if users subscribe to it via an RSS feed.

Other comments raised additional possibilities. Steven Moix raised the possibility of creating an alias for yum, the basic command used by Fedora for package management, so that it would always use the --skip-broken option. In this way, problematic packages would not be installed or added as updates, and users would be left with intact systems. Others, though, rejected this idea because it could still leave users without the functionality they needed. Moreover, if broken packages were silently skipped, they might easily go unreported unless users paid close attention to the output of PackageKit or yum.

In much the same way, another contributor suggested that every second Fedora release or so be a stable version, so users could choose between a bleeding-edge operating system and a reliable one. This solution might help to compensate for Fedora's relatively short life cycle for each version, a choice that some users perceive as undesirable compared to the policies of other distributions. Others, though, shot down the idea as not only overly ambitious but unnecessary, on the grounds that the Red Hat Enterprise Linux and CentOS distributions already provide stable versions of the same code as Fedora.

As discussion continued over Christmas and into the New Year, one of the most interesting proposals was Jesse Keating's idea of appointing what he called "proven packagers." In Keating's view, proven packagers would be experienced, well-respected experts in package management — the kind whom "you would trust fully with any of the packages you either maintain or even just use." Proven packagers would have a roving brief, and be ready to mentor or intervene as needed, "always with a desire to improve the quality of Fedora." Expressing misgivings that the status might be too easy to attain, Robert Scheck emphasized that proven packagers should not be appointed by a single person, and "should be persons well known to the community and having some presence" in it so that they could operate more effectively.

This is only a summary of a dozen threads and hundreds of responses. Still, it gives some sense of how the Fedora community is analyzing itself in the aftermath of the D-Bus disaster. At least on the evidence found in fedora-devel-list, Fedora members might be criticized for not looking to other distributions for solutions, and for the fact that, so far, only the proven-packagers suggestion is visibly moving forward. All the same, the creative open-mindedness and general politeness of the discussions might still provide Fedora with the solutions it needs to weather its latest engineering and marketing disaster and prevent similar problems in the future.

Comments (10 posted)

New Releases

KNOPPIX 6.0 released

KNOPPIX 6.0 is out. There are a lot of changes, including a rewrite of the boot system, the LXDE desktop environment, a slimmed-down package set, and the ADRIANE audio menu environment.

Comments (5 posted)

Tin Hat 20090119 released

Tin Hat, a Linux distribution derived from hardened Gentoo, fixes bugs and several security issues in this version.

Full Story (comments: none)

Ubuntu 8.04.2 LTS released

Ubuntu 8.04.2 LTS has been announced. "The Ubuntu team is proud to announce the release of Ubuntu 8.04.2 LTS, the second maintenance update to Ubuntu's 8.04 LTS release. This release includes updated server, desktop, and alternate installation CDs for the i386 and amd64 architectures. In all, over 200 updates have been integrated, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 8.04 LTS."

Full Story (comments: 7)

Announcing Moblin v2 Core Alpha Release

The Moblin team has announced the availability of the Moblin v2 Core Alpha Release. See the release notes for more information.

Full Story (comments: 9)

Distribution News

Debian GNU/Linux

New archive key created

Debian has a new key for the archive, but it will not be used until the release of Lenny (5.0) r1 or the expiry of the current key on July 1, 2009, whichever comes first.

Full Story (comments: 2)


Ext4 to be standard for Fedora 11, Btrfs also included (heise online)

heise online notes plans to include Ext4 and Btrfs in Fedora 11. "According to current plans, version 11 of Fedora, which is expected to arrive in late May, will use Ext4 as its standard file system. That's what the Fedora Engineering Steering Committee (FESCo) recently decided, following a heated discussion in an IRC meeting. If however Ext3's successor encounters big problems with the pre-release versions of Fedora 11, the developers will dump that plan and revert to Ext3. So the Fedora Project is going one step beyond Ubuntu version 9.04 (Jaunty Jackalope), which as things currently stand will offer Ext4 as an install time option, though the installer will still use Ext3 as its default file system."

Comments (82 posted)

Fedora Board Recap 2009-01-20

Click below for a quick recap of the January 20th meeting of the Fedora Advisory Board. Topics include: Net Neutrality Follow-up, FUDCon F11 Follow-up, Finalizing trademark guidelines and What is Fedora?.

Full Story (comments: none)

Red Hat Enterprise Linux

Mandriva's loss is Red Hat's gain

Adam Williamson, former community manager of Mandriva, has been hired by Red Hat to work on a new community QA system. Vincent Danen, also an ex-Mandriva employee, will be a Senior Software Engineer at Red Hat working with the Red Hat Security Response Team.

Comments (none posted)

Ubuntu family

Process changes: Developer application processes

Ubuntu has finalized how future developer applications will work. This email focuses on the Ubuntu Developers (MOTU) application process; MOTU (Masters Of The Universe) are mostly volunteers packaging non-core applications.

Full Story (comments: 1)

Other distributions

ClarkConnect 5.0 Feature Overview

ClarkConnect is gearing up for its 5.0 release, expected in early April. Here is a feature overview of some of the highlights in this release, including complete LDAP Integration, Windows File Sharing / Samba, Network Management / Peer-to-Peer and mail quarantine.

Comments (none posted)

Distribution Newsletters

Debian miscellaneous developer news (#13)

This issue of Misc developer news covers: Security support for new testing (squeeze) delayed, New whohas tool displays other distributions that have your package, Documentation for python-apt, sbuild and wanna-build status update, and Kernel pseudo-package removed.

Full Story (comments: none)

DistroWatch Weekly, Issue 287

The DistroWatch Weekly for January 26, 2009 is out. "In this issue we share some highlights from linux.conf.au, one of the world's most popular open source conferences. In the news, the ext4 file system finds its way into Ubuntu and becomes the default for Fedora 11, Slackware Linux prepares for KDE 4.2, server distribution ClarkConnect releases feature list for its upcoming version 5.0, and two well-known ex-Mandriva developers join Red Hat, Inc. Also in this issue, links to two interviews with the developers of Fedora and Ubuntu, and an update on DistroWatch's package management cheatsheet."

Comments (none posted)

Fedora Weekly News #160

The Fedora Weekly News for January 25, 2009 is out. "Announcements notes upcoming events and deadlines for Fedora 11. PlanetFedora picks up on some communication problems in "General" and shares "How To" information on disabling the system bell. Developments rounds up some "Fedora 11 Release Activity" and synopsizes the debate around a "Minimalist Root Login to X?". Infrastructure is back with some essential information on "Fedora Security Policy". Artwork shares the "Fedora 11 Release Banner". SecurityAdvisories provides a handy list of essential updates. Virtualization explains "QEMU VM Channel Support". We are pleased to have an AskFedora Q&A covering the advisability of using the "Ext4 Filesystem on Solid State Disks". Keep sending your questions!"

Full Story (comments: none)

OpenSUSE Weekly News/56

This issue of the OpenSUSE Weekly News looks at FOSDEM 2009, Top 25 Most Dangerous Programming Errors, Novell's 2009 Technical Strategy and Process, NTFS-3g - writing to windows partition, Preview/Fix broken AVI files in openSUSE and more.

Comments (none posted)

Ubuntu Developer News Issue 1

The first edition of Ubuntu Developer News has been released. It consists of short blurbs of news about development projects or other activities of interest to developers, with links off to more information. This edition has entries on the Technical Board election, Testing Days, New D-Bus, Launchpadlib, and much more. Click below for the newsletter.

Full Story (comments: none)

Ubuntu Weekly Newsletter #126

The Ubuntu Weekly Newsletter for January 24, 2009 covers: Ubuntu 8.04.2 LTS released, Ubuntu Developer Week, Ubuntu Classroom upcoming sessions, Developer application process changes, Technical Board run-off results, Ubuntu Developer News: issue #1, Ubuntu on Italian TV, Japanese LoCo holds "Offline Meeting Tokyo," Nordic Ubuntu LoCo team working together, Ubuntu Podcast #18, Meeting summaries, and much more.

Full Story (comments: none)

Newsletters and articles of interest

CrunchBang is a Speedy, Dark-Themed Linux Desktop (LifeHacker)

LifeHacker has a screenshot tour of CrunchBang Linux. "CrunchBang seems to Just Work on the two systems I tested it on, and it looks like a great fit for an on-the-go desktop for your thumb drive, or replacement for a slow-moving Linux boot."

Comments (1 posted)


Q&A with Paul Frields at Red Hat (Neowin)

Neowin has a wide-ranging interview with Fedora project leader Paul Frields. The questions largely come from Neowin forum users and cover such topics as Fedora on netbooks, Fedora artwork, the relationship between the project and Red Hat, future releases, and much more. "I would like to see a stronger quality engineering effort built around Fedora. We had a couple instances in the last six months where end users had a broken package update experience for a few days at a time. We were able to repair that easily, but it shouldn't ever have happened in my opinion, and I've been talking with different people in Fedora about how we can improve on how we deliver bits to our users. The good news is that we are developing some better automated tools to help prevent a recurrence, and many of the Fedora community people are talking about quality as a concrete goal for Fedora 11 and 12."

Comments (none posted)

Page editor: Rebecca Sobol


The long road to a working Cheese

By Forrest Cook
January 28, 2009

Cheese is an interesting application that is designed to take still photos and movies using a webcam. In addition to its basic monitoring and recording abilities, Cheese can display and record real-time video effects similar to those from the EffecTV project. Cheese is based on the GStreamer multimedia framework. From the Cheese project description:

Cheese uses your webcam to take photos and videos, applies fancy special effects and lets you share the fun with others. It was written as part of Google's 2007 Summer of Code lead by Daniel G. Siegel and mentored by Raphaël Slinckx. Under the hood, Cheese uses GStreamer to apply fancy effects to photos and videos. With Cheese it is easy to take photos of you, your friends, pets or whatever you want and share them with others. After a success of the Summer of Code, the development continued and we still are looking for people with nice ideas and patches ;)

Cheese started out as a Google Summer of Code project entitled Photobooth-like application for the GNOME-Desktop. (See this GNOME Journal interview with Daniel Siegel.) Several additional GSoC projects have involved Cheese, including Cheese integration into GNOME (student Felix Kaser, mentor Daniel Siegel) and Extend Cheese with OpenGL effects (student Filippo Argiolas, mentor Daniel Siegel).

The main features of Cheese include:

  • Real-time video monitor window.
  • Selection among multiple video resolutions.
  • Still .jpg photos with optional video effects.
  • Countdown timer for taking still photos.
  • Click sound when a still photo is taken.
  • Recording of .ogv movies with sound and optional video effects.
  • Chaining of multiple video effects together.
  • Built-in thumbnail library showing recorded photos and movies.
  • Photo display with Eye of GNOME.
  • Movie playback with the Totem Movie Player.
  • Saving of images and movies to files, emailing, or export to F-Spot.

Your author installed version 2.24.2 of Cheese on an Ubuntu 8.10 system using the standard Ubuntu package. The CPU was an Athlon 64 2800 running the 32 bit version of Ubuntu. Initially, an ancient Kensington VideoCam Model 67015 was tried as the video capture device, but the camera would not work. This was likely a system issue since other video applications such as xawtv and EffecTV no longer saw the camera after the system was upgraded from Ubuntu 8.04. A new HP Deluxe Webcam model KQ246AA (USB) with a built-in microphone was purchased at the local big-box electronics store. Initially, the HP camera worked with xawtv, but not with Cheese (or EffecTV).

A bit of Googling found an Ubuntu bug report that indicated others were having similar issues with Cheese. Following the thread in the bug report, your author first tried the suggestion of installing a newer kernel from the Pre-released package updates. This did not fix the problem. Digging further into the bug report messages, the next attempt involved installing mercurial (hg), then cloning and installing the latest uvcvideo driver from the LinuxTV site. This finally produced a video capture device that worked with Cheese.

Operation of Cheese is quite straightforward: one can simply run the application and start clicking photos. A few user interface issues were encountered, however. The Edit->Preferences menu allows one to select the camera and its resolution, but offers no audio configuration choices; it was necessary to run the gstreamer-properties application to select the camera's built-in USB audio device. Sometimes, after a pull-down menu was selected, a gray rectangle was left on top of the moving video monitor where the menu used to reside. Sometimes the gray area would eventually disappear; other times it was necessary to move the main Cheese window to refresh the video display.

The Effects button is somewhat non-intuitive: when one clicks it, a set of effects is shown, but it took a bit of playing around to figure out that one needs to click Effects again to get back to the main video monitor window. A differently named "Monitor" button would be useful here. When making movies, using resolutions above 352x288 resulted in major losses of audio samples and jerky video. Both the USB camera's audio input and the sound card's auxiliary input were tried, with similar results. The built-in Cheese documentation recommends using gstreamer-properties to switch the default video output to X11/XShm/Xv; this was tried, but the higher-resolution video was still jerky. A CPU with more muscle would likely improve this situation.

Your author was left with the impression that Cheese and its ancillary applications could greatly benefit from a few extra features. It would be more fun to look at still photos if Eye of GNOME's slideshow capability could step through the stills at a timed interval. It should be noted that it is possible to export images to F-Spot, which can display a timed slide show. Similarly, Totem could use some more advanced features, such as a pause button with single-frame stepping. The documentation claims that it is possible to right-click the recorded image or video thumbnail and fire up a non-default viewer, but your author was unable to make this work. The video effects are very cool, but there are no audio effects; LV2 comes to mind here. Some of these ideas might make good 2009 Google Summer of Code projects.

Despite a number of bugs and user interface difficulties, Cheese is indeed a unique and useful application; it is the first application your author has found that can produce a working movie from a webcam. At this point, at least with this hardware configuration, Cheese is not quite ready for use by non-technical users; nonetheless, it is a great application that shows much promise.

Comments (6 posted)

System Applications

Database Software

Eventum 2.2 released

Version 2.2 of Eventum has been announced. "I am pleased to announce that Eventum 2.2, the latest version of the user-friendly and flexible issue tracking system from MySQL, is now available".

Full Story (comments: none)

Firebird: 2.0.5 released (SourceForge)

Version 2.0.5 of the Firebird DBMS has been announced. "The Firebird Project team is pleased to announce the release of Firebird 2.0.5. Kits for Linux (i686 and AMD-64), Win32 and MacOSX Intel and PowerPC should start to filter through to SourceForge over the next few hours, ready to download. This sub-release features a significant batch of bug fixes, many backported from v.2.1.x development."

Comments (none posted)

PostgreSQL Weekly News

The January 25, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

Filesystem Utilities

Clonezilla: live 1.2.1-37 (stable) released (SourceForge)

Version 1.2.1-37 of Clonezilla: live has been announced. "Clonezilla is a partition or disk clone software similar to Ghost. It saves and restores only used blocks in hard drive. Two types of Clonezilla are available, Clonezilla live and Clonezilla SE (Server Edition). We are happy to announce this new stable release. In this release, we have 2 more new languages, and some improvement about cloning MS windows."
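The "used blocks only" approach described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not Clonezilla's actual code: an allocation bitmap tells the imaging tool which blocks are worth copying, and restore zero-fills the rest.

```python
# Hypothetical sketch of used-block imaging (not Clonezilla's code):
# copy only the blocks that an allocation bitmap marks as in use.

BLOCK_SIZE = 4  # tiny blocks, purely for illustration

def save_used_blocks(disk: bytes, bitmap: list) -> dict:
    """Return {block_index: data} for blocks marked used in the bitmap."""
    image = {}
    for i, used in enumerate(bitmap):
        if used:
            image[i] = disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
    return image

def restore_image(image: dict, n_blocks: int) -> bytes:
    """Rebuild the disk; unused blocks come back zero-filled."""
    out = bytearray(n_blocks * BLOCK_SIZE)
    for i, data in image.items():
        out[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] = data
    return bytes(out)

disk = b"AAAA....BBBB...."
bitmap = [1, 0, 1, 0]          # blocks 0 and 2 are in use
image = save_used_blocks(disk, bitmap)
restored = restore_image(image, 4)
```

Only two of the four blocks end up in the image, which is where the time and space savings over a raw `dd`-style copy come from.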

Comments (none posted)


Samba 3.3.0 available for download

Version 3.3.0 of Samba has been announced, see the release notes for more information.

Comments (none posted)

Networking Tools

conntrack-tools 0.9.10 released

Version 0.9.10 of conntrack-tools has been announced. "The netfilter project presents another development release of the conntrack-tools. As usual, this release includes important fixes, improvements and new features".

Full Story (comments: none)

Web Site Development

nginx 0.6.35 released

Version 0.6.35 of nginx, a light weight web server, has been announced. See the Changes document for more information.

Comments (none posted)


Sector: 1.18 release (SourceForge)

Version 1.18 of Sector has been announced. "SECTOR: A Distributed Storage and Computing Infrastructure Sector version 1.18 contains several improvements on the file system, in particular the real time replication of data update."

Comments (none posted)

unattended-gui: release 1.908 (SourceForge)

Version 1.908 of unattended-gui has been announced. "Support unattended installation of several Linux and Windows. Also a collection of scripts for inventory, deinstallation and other add-ons like dhcp-ldap, php-ssh, samhain, syslog-ng, switch managment, ldap browser, pxe management. Fixed some bugs. Feature requests improvements."

Comments (none posted)

Desktop Applications

Audio Applications

Invada LADSPA plugins

Fraser has announced the Invada LADSPA plugins. "I've released some LADSPA plugins that are loosely based on my VST plugins I wrote years ago. I haven't produced any documentation yet but most plugins are fairly self explanatory."

Full Story (comments: none)

BitTorrent Applications

Azureus: Vuze released (SourceForge)

A new version of Azureus: Vuze has been announced; it includes new features and bug fixes. "Azureus: Vuze is a powerful, full-featured, cross-platform bittorrent client and open content platform."

Comments (none posted)

Business Applications

xTuple ERP 3.2 released

Version 3.2 of xTuple ERP has been announced, it includes a lot of new functionality. "We're very pleased to announce that the final version of xTuple ERP version 3.2.0 - PostBooks, Standard, and Manufacturing Editions - are now available for download. This is the eleventh release of the world's most advanced open source ERP from xTuple (formerly known as OpenMFG)."

Full Story (comments: none)

Desktop Environments

GNOME 2.25.5 released

Version 2.25.5 of the GNOME desktop has been announced. "This is the fifth development release towards our 2.26 release that will happen in March 2009. By now, development is well under way, and we've already made good progress on some of the goals that we've set ourselves for 2.26."

Full Story (comments: none)

New module decisions for GNOME 2.26

A list of new module decisions for GNOME 2.26 has been announced. "The release team met on Sunday to talk about the latest movies, the forthcoming Australian Open, etc. but also to make fun of Andreas N. (we won't reveal his last name publicly -- but he's Swedish and draws various things)."

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week. You can find more new GNOME software releases at the GNOME FTP site.

Comments (none posted)

KDE 4.1.4 and 4.2 RC are available (KDEDot)

KDE.News reports on the release of KDE 4.1.4 and KDE 4.2 RC. "The KDE community has made available two new releases of the KDE desktop and applications today. KDE 4.1.4 is the latest update for the KDE 4.1 series. It contains many bugfixes, mainly in the e-mail and PIM suite Kontact and the document viewer Okular. KDE 4.2 RC is the release candidate of KDE 4.2, also bringing new features and thousands of bug fixes to the KDE desktop and applications. KDE 4.1.4 is the last planned update to the KDE 4.1 series and stabilises the 4.1 platform further. It is a recommended update for everyone running KDE 4.1.3 or earlier."

Comments (none posted)

KDE 4.2.0 Release

KDE 4.2 has been released. The KDE 4.2 Visual Guide is also available. "KDE 4.2.0 is not the end, but another milestone along the road of KDE 4 development. This platform is designed and intended to keep on growing far into the future, and the KDE Team would like to invite you to join us in this fantastic journey. This visual guide highlights many of the improvements in KDE 4.2, and we hope that you will enjoy using this release."

Comments (15 posted)

KDE Software Announcements

The following new KDE software has been announced this week. You can find more new KDE software releases at the KDE FTP site.

Comments (none posted)

Xfce 4.6 Release Candidate 1 released

Release Candidate 1 of the Xfce 4.6 desktop environment has been announced. "Shortly after Beta 3, we are pleased to announce the first Release Candidate for Xfce 4.6. If no serious bugs are found, this is going to be the state of the final release (plus translation updates). This Release Candidate is the first 4.6 release that comes with graphical installers for the main components and goodies. The release comes with several fixes for critical bugs and crashes found in Beta 3."

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Desktop Publishing

Asymptote: 1.60 released (SourceForge)

Version 1.60 of Asymptote has been announced. "Asymptote is a powerful descriptive vector graphics language for technical drawing, inspired by MetaPost but with an improved C++-like syntax. Asymptote provides for figures the same high-quality level of typesetting that LaTeX does for scientific text. An optional bool3 condition was added to the graph functions, allowing one to skip points or segment a graph into distinct branches, based on a user-supplied test (see the example gamma.asy). A gettriple routine was added..."
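The bool3-style idea in the announcement, skipping points or splitting a curve into distinct branches based on a per-point test, is easy to see outside Asymptote. Here is a stdlib-only Python illustration (not Asymptote code, and not Asymptote's actual algorithm): points failing the test break the curve into separate branches, as one would want around a pole.

```python
# Sketch of branch splitting driven by a per-point test, in the spirit
# of Asymptote's bool3 graph conditions. Illustrative only.

def split_branches(points, keep):
    """Split a point list into contiguous branches where keep(p) is True."""
    branches, current = [], []
    for p in points:
        if keep(p):
            current.append(p)
        elif current:
            branches.append(current)  # test failed: close the branch
            current = []
    if current:
        branches.append(current)
    return branches

# A function with a pole at x = 0, sampled on both sides:
pts = [(x, 1.0 / x) for x in (-2, -1, 1, 2)]
pts.insert(2, (0, None))  # the undefined point
branches = split_branches(pts, lambda p: p[1] is not None)
```

A plotting routine given the two branches separately would draw the hyperbola without a spurious line through the pole.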

Comments (none posted)


MyHDL 0.6 released

Version 0.6 of MyHDL has been announced. "MyHDL is an open source Python package that lets you go from Python to silicon. With MyHDL, you can use Python as a hardware description and verification language. Furthermore, you can convert implementation-oriented MyHDL code to Verilog and VHDL automatically, and take it to a silicon implementation from there." See the What's New document for release details.

Comments (none posted)


UOX3: v0.98-4.0 Released (SourceForge)

Version 0.98-4.0 of UOX3 has been announced. "OpenUO is an Opensource community for the development of Ultima Online emulators, primarily focusing on Ultima Offline eXperiment 3, an OpenSource UO emulator allowing for single or online/LAN play of your own shard. After quite a long delay, a new UOX3 build has been released, with a slew of new changes."

Comments (none posted)


Elisa Media Center 0.5.25 released

Version 0.5.25 of Elisa Media Center has been announced. "The Elisa team is happy to announce the release of Elisa Media Center 0.5.25, code-named "The Angry Mob". Elisa is a cross-platform and open-source Media Center written in Python. It uses GStreamer for media playback and pigment to create an appealing and intuitive user interface. This is a bugfix release: among other issues solved, the Youtube plugin now works again."

Full Story (comments: none)

MediaInfo: 0.7.9 released (SourceForge)

Version 0.7.9 of MediaInfo, a utility for displaying video or audio tag file info, has been announced. "In this release: Better OGG support (Dirac, Speek, Kate format handling), SMV (WAV/ADPCM with JPG video) and DPG (Nintendo DS) format support, TimeCode tracks in MPEG-4 handling, Python binding improvement, new Mono binding, and a lot of bugs correction".

Comments (none posted)

Music Applications

Denemo 0.8.2 released

Version 0.8.2 of the Denemo musical notation editor has been announced. "A lot of bugs were fixed and several new features were added. We also prepared Denemo for further midi-interaction. But to make Denemo a full notation-midi-sequencer we need you! If you are a developer with interest in MIDI please help us! However, the focus this time was on (MIDI-)input and more scripting support. Now its possible to combine any input-method (Keyboard, Mouse, Midi) with any Denemo-function making it possible to have scripts like the "Angry Delete" feature: Don't take your hand off the midi-keyboard if you played a wrong note. Just hit the next note with all your might ("high velocity") and it will replace the last one instead of adding a new one."
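The "Angry Delete" behavior described above can be sketched in a few lines of Python. This is a hypothetical reconstruction of the idea from the announcement, not Denemo's code, and the velocity threshold is an assumption for illustration: a note struck hard enough replaces the previous (wrong) note instead of being appended.

```python
# Hypothetical sketch of Denemo's "Angry Delete" idea (not Denemo's
# code): a MIDI note above a velocity threshold replaces the previous
# note rather than adding a new one.

ANGRY_VELOCITY = 100  # assumed threshold, for illustration only

def handle_note(score: list, pitch: int, velocity: int) -> None:
    if score and velocity >= ANGRY_VELOCITY:
        score[-1] = pitch      # "angry" hit: correct the wrong note
    else:
        score.append(pitch)    # normal entry: append

score = []
handle_note(score, 60, 64)   # C4, normal velocity
handle_note(score, 62, 64)   # D4 -- oops, wrong note
handle_note(score, 64, 120)  # E4 played hard: replaces the D4
```

The appeal of Denemo's approach is that any such script can be bound to any input method (keyboard, mouse, or MIDI).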

Full Story (comments: none)

Filterclavier - a MIDI controllable filter piano

The initial release of Filterclavier is available. "Today I got the first usable version of Filterclavier running, a low/high/bandpass filter which cutoff and resonance (and gain) are set by MIDI input. “Portamento time” adds viscosity to the filter following the MIDI notes. In the moment it is monophonic, but in the future there may be several filters in parallel."

Comments (none posted)

Office Suites

KOffice 2.0 Beta 5 released (KDEDot)

KDE.News notes the availability of KOffice 2.0 Beta 5. "Moving towards the 2.0 release with almost monthly beta releases, the KOffice team has once more honoured its promise to bring out beta releases of KOffice until the time is right for a release candidate. So today we bring you this beta with many, many improvements across the board. Incremental as it is, this beta is an important step towards a final release."

Comments (none posted) 3.0.1 is available

Version 3.0.1 of has been announced. "This release fixes a number of minor issues reported with 3.0, released on October 13th last year. Although minor releases normally do not include new features, there are two points of interest: enhanced support for grammar checkers, and an increase in the number of words held in personal word lists to 30,000. A full list of all the issues fixed may be found in the developers' release notes at ".

Full Story (comments: none)


Pyevolve 0.5 released

Version 0.5 of Pyevolve, a Python-based genetic algorithms framework, has been announced. "Since the version 0.4, Pyevolve has changed too much, many new features was added and many bugs was fixed, this documentation describes those changes, the new API and new features."
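Genetic algorithms of the kind Pyevolve provides all follow the same loop: evaluate fitness, select parents, recombine, mutate. A minimal stdlib-only sketch of that loop (not Pyevolve's API), maximizing the number of 1-bits in a genome:

```python
# Minimal genetic-algorithm loop, stdlib only. Illustrates the general
# technique a framework like Pyevolve packages up; not Pyevolve code.
import random

random.seed(42)  # deterministic for the example

def fitness(genome):
    return sum(genome)  # maximize the number of 1-bits

def evolve(pop_size=20, length=16, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

A framework earns its keep by supplying the selection, crossover, and mutation operators as pluggable pieces; only the fitness function above is problem-specific.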

Comments (none posted)


BleachBit 0.3.0 announced

Version 0.3.0 of BleachBit has been announced. "BleachBit is an Internet history, locale, registry, privacy, and file cleaner for Linux on Python v2.4 - v2.6. Notable changes for 0.3.0: * Clean locales (also called localizations). * When deleting, optionally shred files to hide contents. * Erase the clipboard. * Add Bulgarian translation. * Improve the GUI. * Add a preferences dialog. * Fix several bugs including a serious bug that prevented some parts of Firefox from being cleaned."
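The "shred files to hide contents" feature is essentially overwrite-then-delete. A stdlib-only sketch of the idea (not BleachBit's implementation) looks like this; note the honest caveat in the docstring, since in-place overwriting makes no guarantees on journaling or copy-on-write filesystems.

```python
# Sketch of file shredding: overwrite with random data, then unlink.
# Not BleachBit's code; a simplified illustration of the technique.
import os

def shred(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, then delete it.

    Illustration only: on journaling or copy-on-write filesystems,
    overwriting in place gives no real guarantee of destruction."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to disk
    os.remove(path)

# usage
with open("secret.txt", "wb") as f:
    f.write(b"confidential")
shred("secret.txt")
```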

Full Story (comments: none)

Languages and Tools


GCC 4.3.3 released

Version 4.3.3 of GCC, the GNU Compiler Collection, has been announced. "(regression fixes and docs only)". See the Changes document for more information.

Comments (none posted)


Caml Weekly News

The January 27, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)


mpmath 0.11 released

Version 0.11 of mpmath has been announced. "Mpmath is a pure-Python library for arbitrary-precision floating-point arithmetic that implements an extensive set of mathematical functions. It can be used as a standalone library or via SymPy. This version adds speed improvements, many new mathematical functions (Bessel functions, polylogarithms, Fibonacci numbers, the Barnes G-function, generalized Stieltjes constants, inverse error function, generalized incomplete gamma function, etc), a high-precision ODE solver, improved algorithms for infinite sums and products, calculation of Taylor and Fourier series, and multidimensional rootfinding, besides many other improvements and bugfixes. The documentation has also been greatly extended."
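The core idea behind mpmath, computing to a user-chosen precision rather than to the fixed 53 bits of a machine double, can be seen with nothing but the standard library's decimal module; mpmath generalizes it to a full suite of mathematical functions. A small sketch:

```python
# Arbitrary-precision arithmetic with the stdlib decimal module -- the
# same idea mpmath builds on, shown here for sqrt(2) at 50 digits.
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits
root2 = Decimal(2).sqrt()

# Sanity check: squaring should reproduce 2 to working precision.
err = abs(root2 * root2 - 2)
```

Machine floats would cap this at roughly 16 significant digits; here the precision is just another parameter, which is exactly what high-precision special functions and ODE solvers need.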

Full Story (comments: none)

Python-URL! - weekly Python news and links

The January 27, 2009 edition of the Python-URL! is online with a new collection of Python article links.

Full Story (comments: none)


Tcl-URL! - weekly Tcl news and links

The January 22, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.

Full Story (comments: none)


pyftpdlib 0.5.1 released

Version 0.5.1 of pyftpdlib, the Python FTP Server library, has been announced. "This new version, aside from fixing some bugs, includes the following major enhancements: * A new script implementing FTPS (FTP over SSL/TLS) has been added in the demo directory. * File transfers are now 40% faster thanks to the re-dimensioned application buffer sizes. * ASCII data transfers on Windows are now 200% faster. * Preliminary support for SITE command has been added. * Two new callback methods to handle "file received" and "file sent" events have been added."
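The new "file received" and "file sent" callbacks follow a common pattern: the server core invokes overridable hook methods on a handler class when a transfer completes. A stdlib-only sketch of that pattern (method names modeled on the announcement; this is not pyftpdlib's exact API or internals):

```python
# Sketch of the callback-hook pattern described in the pyftpdlib
# release notes. Names modeled on the announcement; illustrative only.

class BaseHandler:
    def on_file_received(self, path: str) -> None:
        pass  # default: do nothing

    def on_file_sent(self, path: str) -> None:
        pass  # default: do nothing

    # The server core would call this when a transfer completes.
    def _transfer_done(self, path: str, incoming: bool) -> None:
        if incoming:
            self.on_file_received(path)
        else:
            self.on_file_sent(path)

class LoggingHandler(BaseHandler):
    """A user subclass that reacts to transfer events."""
    def __init__(self):
        self.log = []

    def on_file_received(self, path):
        self.log.append(("recv", path))

    def on_file_sent(self, path):
        self.log.append(("sent", path))

h = LoggingHandler()
h._transfer_done("/uploads/a.txt", incoming=True)
h._transfer_done("/pub/b.iso", incoming=False)
```

Subclassing with no-op defaults lets users hook only the events they care about without touching the server core.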

Full Story (comments: none)

Page editor: Forrest Cook

Linux in the news

Recommended Reading

Ctrl-Z: a return to the Supreme Court's software patent ban? (ars technica)

Ars technica has published a lengthy history of U.S. case law around software patents which reaches the conclusion that the Supreme Court would be likely to be hostile to the concept. "The majority may not have intended to authorize patents on software. But Justice Stevens, the only Diehr justice still sitting on the Supreme Court today, wrote a lengthy dissent warning that the decision would have that effect. Stevens's prophecy was fulfilled by the United States Court of Appeals for the Federal Circuit, which Congress created in 1983 and gave jurisdiction over patent appeals. Although the Supreme Court was still the ultimate authority on patent appeals, it rarely reviewed the Federal Circuit's decisions during its first two decades. As a consequence, the Federal Circuit became the de facto 'Supreme Court of patents.'"

Comments (none posted)

Location-aware software comes to the Linux platform (ars technica)

Here's a brief survey of location-oriented tools for Linux. "The GTK+ widgets provided by libchamplain have already been adopted by several GNOME applications. A new plugin for the GNOME image viewer, for example, will display a map with markers to show the location of images with geolocation metadata. The library is also going to be used in Empathy, the GNOME instant messaging client. Empathy's new location-aware functionality uses an XMPP extension that describes a wide range of location metadata. It is built on top of GeoClue and uses libchamplain to display a graphical user interface."

Comments (7 posted)


Sun Begins Carrying Out Planned Layoffs (eWeek)

eWeek reports on the layoffs at Sun Microsystems. "Sun Microsystems, which revealed on Nov. 14, 2008, that it planned to reduce its global work force by 5,000 to 6,000 employees—15 to 18 percent—began carrying out that dreadful duty Jan. 22. Sun confirmed that layoff notifications were sent to about 1,300 employees as part of that action. Reductions were made across all levels, including vice presidents and directors, the company said."

Comments (2 posted)

Sun to be eclipsed by Red Hat? (Channel Register)

Chris Mellor compares Red Hat and Sun in market capitalization terms. "In revenue terms the two companies are markedly different. Sun revenues were $13.8bn in 2008 while Red Hat's were less than four per cent of this at $0.52bn. The stock market seems to be thinking that Red Hat's shares will be more valuable than Sun's, that Sun's earnings per share will trend down and that Red Hat's will increase." (Thanks to Rahul Sundaram)

Comments (24 posted)

Linux Adoption

Open source question for schools (BBC)

The BBC takes a look at open source software in education, specifically as a cost-saving measure. "Steve Beswick, director of education for Microsoft UK, told the BBC that while open source software may, on face value, offer savings, there could be hidden costs, both financial and otherwise. [...] 'A lot of people are trained in Microsoft-based technologies, so there may be increased costs in re-training to learn how to use open source solutions,' he said."

Comments (27 posted)


Cloning Linux Systems With CloneZilla Server Edition (HowtoForge)

HowtoForge presents a tutorial on CloneZilla SE. "This tutorial shows how you can clone Linux systems with CloneZilla SE. This is useful for copying one Linux installation to multiple computers without losing much time, e.g. in a classroom, or also for creating an image-based backup of a system. I will install CloneZilla SE on a Debian Etch server in this tutorial. The systems that you want to clone can use whatever Linux distribution you prefer."

Comments (none posted)

Essential Java resources (developerWorks)

IBM developerWorks presents a list of Java resources. "Since its introduction to the programming community as a whole in 1995, the Java platform has evolved far beyond the "applets everywhere" vision that early Java pundits and evangelists imagined a Java world to be like. Instead, the Java world rose up to Swing, coalesced around servlets, rode that into J2EE, stumbled on EJB, sidestepped over to Spring and Hibernate, added generics and became more dynamic, then functionalized, and continues to grow in all sorts of interesting directions even as I write this."

Comments (7 posted)


Mozilla Looking to Tag Along (Linux Journal)

Linux Journal takes a look at Mozilla Test Pilot. "The plan, which is entirely opt-in and requires installing a plugin to participate, has been dubbed Test Pilot by Mozilla Labs, and hopes to provide volumes of useful information for Mozilla developers and outside researchers. Volunteers will initially be asked to provide a limited amount of information for demographic purposes, then will install Test Pilot and browse as usual. Additional "experiments and tests" will follow, and the participants will have the opportunity to choose whether or not to take part in a given exercise."

Comments (none posted)

Suse Studio: Linux customization for the masses (CNET)

Matt Asay takes a look at SUSE Studio. "Nat Friedman, Novell's chief technology and strategy officer for open source, has been working on Suse Studio for some time, but it was at VMworld in September that Novell first publicly demonstrated the product. Since then, Novell has not said much publicly about the alpha-stage product. That's too bad, as this may well be one of the industry's most exciting and transformational software releases in years."

Comments (12 posted)


An odd choice to help government with open source strategy (ars technica)

ars technica looks into the Obama administration's choice of former Sun CEO Scott McNealy as an advisor for its government open-source strategy. "Although Obama's interest in open source looks like a promising sign that the incoming government is serious about reforming federal IT procurement policies, the decision to call on Sun's eccentric cofounder is an incomprehensible twist. McNealy's long history of bizarre and contradictory positions on open source software make him a less than ideal candidate for helping to shape national policy on the subject. Asking Scott McNealy to write a paper about open source software is a bit like asking Dick Cheney to write a paper about government transparency."

Comments (16 posted)

Page editor: Forrest Cook


Non-Commercial announcements

Draft Wikipedia license change plan

The Wikimedia Foundation has posted a draft plan for changing its licensing away from the GNU Free Documentation License - a change which was enabled by the FDL 1.3 release. They hope to finalize this plan by the beginning of February, then hold an election to let contributors make the final decision.

Comments (none posted)

Commercial announcements

Deutsche Telekom spinoff launches cloud marketplace

Deutsche Telekom Laboratories has launched the Zimory Public Cloud. "The reality of cloud computing takes a major step forward this week when Zimory GmbH - a spinoff of Deutsche Telekom - unveils Zimory Public Cloud, an online marketplace that for the first time brings together buyers and sellers of computing resources. Zimory Public Cloud provides companies of all sizes instant, easy and flexible access to external computing power worldwide while also enabling businesses with excess server capacity to offer their resources to businesses around the world."

Full Story (comments: none)

VMware reports Fourth Quarter and Full Year 2008 results

VMware has announced its 2008 yearly and fourth quarter financial results. "Revenues for the fourth quarter were $515 million, an increase of 25% from the fourth quarter of 2007. -- GAAP operating income for the fourth quarter was $102 million, an increase of 34% from the fourth quarter of 2007. Non-GAAP operating income for the fourth quarter was $135 million, an increase of 25% from the fourth quarter of 2007. -- GAAP net income for the fourth quarter was $111 million, or $0.29 per diluted share, compared to $78 million, or $0.19 per diluted share, for the fourth quarter of 2007. Non-GAAP net income for the quarter was $142 million, or $0.36 per diluted share, compared to $103 million, or $0.26 per diluted share, for the fourth quarter of 2007."

Comments (none posted)

New Books

The Art of Lean Software Development - New from O'Reilly

O'Reilly has published the book The Art of Lean Software Development by Curt Hibbs, Steve Jewett and Mike Sullivan.

Full Story (comments: none)

The Art of Application Performance Testing - New from O'Reilly

O'Reilly has published the book The Art of Application Performance Testing by Ian Molyneaux.

Full Story (comments: none)

Beautiful Architecture - New from O'Reilly

O'Reilly has published the book Beautiful Architecture, edited by Diomidis Spinellis and Georgios Gousios.

Full Story (comments: none)

New Book: FLOSS+Art

OpenMute has published the book FLOSS+Art by a variety of authors. "What does Free and Open Source software (FLOSS) provide to artists and designers - beyond just free alternatives to established tools from Photoshop to Final Cut? "FLOSS+Art" is the first book to answer this question. It shows how the value of Free Software lies in its differences and creative challenges, as opposed to out-of-the-box and off-the-shelf solutions; how it allows to work and collaborate differently with computers, and therefore enable different kinds of art and design."

Full Story (comments: none)

Hello, Android--New from Pragmatic Bookshelf

Pragmatic Bookshelf has published the book Hello, Android by Ed Burnette.

Full Story (comments: none)

The Manga Guide to Databases--New from No Starch Press

No Starch Press has published the book The Manga Guide to Databases by Mana Takahashi, Shoko Azuma, and Trend-Pro Co., Ltd.

Full Story (comments: none)

Ubuntu Pocket Guide and Reference

MacFreda publishing has published the book Ubuntu Pocket Guide and Reference by Keir Thomas. A freely downloadable PDF version is available.

Full Story (comments: none)

Designing Web Interfaces - New from O'Reilly

O'Reilly has published the book Designing Web Interfaces by Bill Scott and Theresa Neil.

Full Story (comments: none)

Wikipedia: The Missing Manual Posted on Wikipedia--O'Reilly Media Alert

O'Reilly has announced the free availability of the book Wikipedia: The Missing Manual. "The Missing Manuals series, published by O'Reilly Media, today announced the migration of its book about Wikipedia to Wikipedia. As of today, the entire contents of "Wikipedia: The Missing Manual" (O'Reilly, $29.99) by John Broughton is available for free online for editing and updating just like any other Wikipedia entry."

Full Story (comments: none)


FTF releases legal infrastructure guide for Free Software projects

FSFE's Freedom Task Force (FTF) has announced the release of a guide to assist with establishing legal infrastructure for Free Software projects. "The guide gives tips on how Free Software projects can consolidate their legal position. It includes information about setting up legal entities, dealing with copyright issues, managing trademarks, and best practices for project management."

Full Story (comments: none)

Radeon R6xx/R7xx 3D register reference guide released

AMD has announced the release of the 3D register reference guide for ATI Radeon R6xx and R7xx chipsets. This is another important step toward enabling full (free) support for Radeon-based graphics adapters. It has taken a while, but it's clear that AMD was serious when it announced its plans to open up its hardware.

Full Story (comments: 7)

Contests and Awards

"We're Linux" video contest kicks off

The Linux Foundation has formally kicked off the "We're Linux" video contest. To better represent the community nature of Linux, the contest was renamed from "I'm Linux" after input from Linux community members. The press release also announces the six judges for the contest. "'While Microsoft spent large sums of money on advertising last year to attempt to reinvent itself, and Apple continued to use well executed yet traditional techniques for advertising its alternative, Linux will be best represented by using the same kind of collaborative model used to develop the operating system,' said Amanda McPherson, vice president of marketing and developer programs at the Linux Foundation." Click below for the full press release.

Full Story (comments: 15)

Education and Certification

Course dates 2009 - Python Academy

A number of Python Academy courses will be held in Leipzig, Germany through the first half of 2009.

Full Story (comments: none)

Calls for Presentations

EuroDjangoCon '09 announced

The 2009 EuroDjangoCon has been announced. "For those of you that don't know, EuroDjangoCon will be on 4th, 5th and 6th May 2009 in Prague." The call for talks is currently open.

Full Story (comments: none)

IMF 2009 Call for Papers

A call for papers has gone out for IMF 2009, the 5th International Conference on IT Security Incident Management & IT Forensics. The event takes place on September 15-17, 2009 in Stuttgart, Germany. Submissions are due by May 18.

Full Story (comments: none)

Linux Foundation Collaboration Summit call for participation

Registration is open, and a call for papers has gone out, for the Linux Foundation Collaboration Summit. "The Linux Foundation (LF), the nonprofit organization dedicated to accelerating the growth of Linux, today announced that registration is open for the Linux Foundation's Annual Collaboration Summit taking place April 8 - 10, 2009 in San Francisco. Also available today are further details, including the Call for Proposals (CFP), for both the Annual Collaboration Summit and LinuxCon 2009."

Full Story (comments: none)

PostgreSQL Conference, U.S. East 09 Call for Papers

A call for papers has gone out for the PostgreSQL Conference. "PostgreSQL Conference, U.S., East 09 will be held in Philadelphia at historic Drexel University from April 3rd through 5th. The call for papers is now out."

Full Story (comments: none)

Upcoming Events

Registration is open for Cloud Summit 2009

TechWeb has announced open registration for Cloud Summit 2009. The event takes place in Las Vegas, NV on May 18-19, 2009. "Enterprise Cloud Summit is the only industry event to tackle enterprise cloud adoption. This two-day program is packed with industry heavyweights and real-world demos that show you the promises - and pitfalls - of cloud technology."

Full Story (comments: none)

The Demonstrating Open-Source Healthcare Solutions Conference

The Demonstrating Open-Source Healthcare Solutions conference (DOHCS 2009) has been announced. "The leading Open Source healthcare companies-Medsphere Systems (OpenVista), ClearHealth, Akaza Research (OpenClinica) and WebReach (Mirth)-are joining forces to present the Third Annual Demonstrating Open Source Healthcare Solutions (DOHCS) conference on February 20th, 2009 at the Los Angeles Westin LAX. This event takes place in association with the Southern California Linux Exposition (SCALE), held at the same location on February 21 and 22."

Full Story (comments: none)

FOSS in Healthcare IT Conference (LinuxMedNews)

LinuxMedNews has announced the FOSS in Healthcare unconference. "The Houston crew (including the editor of this site) are joining together to host an entire conference devoted to FOSS in Healthcare in Houston, TX Summer of 09 (July 31st - Aug 2nd) We have invited several of the top FOSS projects, including Misys, ClearHealth, WorldVistA, OpenMRS, Tolven, DSS, Medsphere, and Mirth, just to mention a few!"

Comments (none posted)

Registration is open for the OpenClinica European Summit (LinuxMedNews)

LinuxMedNews has announced the opening of registration for the OpenClinica European Summit. "Akaza Research announces the first annual OpenClinica European Summit. The event will be held on April 14, 2009 in Brussels, Belgium and bring together users, developers, and leading service providers of the OpenClinica open source electronic data capture (EDC) software."

Comments (none posted)

RailsConf opens registration and spotlights Rails' next generation

Registration is open for the 2009 RailsConf. "Want to see Rails 3 roll out? O'Reilly Media and Ruby Central have opened registration for RailsConf 2009, happening May 4-7, 2009, at the Las Vegas Hilton. RailsConf, the official event for the Ruby on Rails community, will showcase the latest developments in the merger of Rails and Merb into Rails 3."

Full Story (comments: none)

Events: February 5, 2009 to April 6, 2009

The following event listing is taken from the Calendar.

February 4 - February 5: DC BSDCon 2009, Washington, D.C., USA
February 4 - February 6: Money:Tech 2009, New York, NY, USA
February 5 - February 9: German Perl Workshop, Frankfurt, Germany
February 7: Frozen Perl 2009, Minneapolis, MN, USA
February 7 - February 8: FOSDEM 2009, Brussels, Belgium
February 9 - February 11: O'Reilly Tools of Change for Publishing, New York, NY, USA
February 15: Free Software Awards 2009 Deadline, Soissons, France
February 16 - February 18: Open Source Singapore Pacific-Asia Conference, Singapore, Singapore
February 16 - February 19: Black Hat DC Briefings 2009, Washington, D.C., USA
February 20: Demonstrating Open-Source Health Care Solutions, Los Angeles, CA, USA
February 20 - February 22: Southern California Linux Expo, Los Angeles, CA, USA
February 24 - February 26: VMworld Europe 2009, Cannes, France
February 25 - February 27: German Perl Workshop, Frankfurt Main, Germany
February 27: PHP UK Conference, London, UK
February 28: Belgian Perl Workshop, Leuven, Belgium
February 28: uCon Security Conference, Recife, Brazil
March 1 - March 4: Global Ignite week, Online
March 3 - March 8: CeBIT 2009, Hanover, Germany
March 4 - March 7: DrupalCon DC 2009, Washington D.C., USA
March 6: Dutch Perl Workshop, Arnhem, The Netherlands
March 7: Ukrainian Perl Workshop 2009, Kiev, Ukraine
March 8 - March 11: Bossa Conference 2009, Recife, Brazil
March 9 - March 13: Advanced Ruby on Rails Bootcamp with Charles B. Quinn, Atlanta, GA, USA
March 9 - March 12: O'Reilly Emerging Technology Conference, San Jose, CA, USA
March 12 - March 15: Pingwinaria 2009 - Polish Linux User Group Conference, Spala, Poland
March 14: OpenNMS User Conference (Europe) 2009, Frankfurt Main, Germany
March 14 - March 15: Chemnitzer Linux Tage 2009, Chemnitz, Germany
March 16 - March 20: Android Bootcamp with Mark Murphy, Atlanta, USA
March 16 - March 20: CanSecWest Vancouver 2009, Vancouver, BC, Canada
March 18: Linuxwochen Österreich - Klagenfurt, Klagenfurt, Austria
March 21 - March 22: Libre Planet 2009, Cambridge, MA, USA
March 23 - March 27: iPhone Bootcamp, Atlanta, Georgia, USA
March 23 - April 3: Google Summer of Code '09 Student Application Period, online
March 23 - March 27: ApacheCon Europe 2009, Amsterdam, The Netherlands
March 24 - March 26: UKUUG Spring 2009 Conference, London, England
March 25 - March 29: PyCon 2009, Chicago, IL, USA
March 27 - March 29: Free Software and Beyond - The World of Peer Production, Manchester, UK
March 28: Open Knowledge Conference 2009, London, UK
March 31 - April 2: Solutions Linux France, Paris, France
March 31 - April 3: Web 2.0 Expo San Francisco, San Francisco, CA, USA
April 3 - April 5: PostgreSQL Conference: East 09, Philadelphia, PA, USA
April 3 - April 4: Flourish Conference, Chicago, IL, USA

If your event does not appear here, please tell us about it.

Web sites

New GNOME devel-announce-list

A new GNOME devel-announce-list feed has been announced. "In an effort to make it easier to follow the GNOME release process, I've added the feed of devel-announce-list. On it you can find information about: * various projects (like metacity) * Commit Digest (overview of last week's development) * GNOME Foundation announcements * Development announcements (GNOME releases, freezes, etc.)"

Comments (none posted)

A New Day, A New Dot (KDEDot)

KDE.News has announced a site make-over. "I would like to welcome you to the new and improved KDE Dot News. As I am sure many of you know, although the old KDE Dot News has served the KDE community very well, it was beginning to show its age. Because of that a number of people including myself took it upon ourselves to modernise the Dot. The new Dot also has a number of exciting new features including: * A comment moderation system similar to Slashdot * The ability to translate stories (and interface elements) into multiple languages * It streamlines story submission and the editing process * A sexy new look".

Comments (none posted)

Page editor: Forrest Cook

Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds