Weekly Edition for October 29, 2009

Community contributions and copyright assignment

By Jonathan Corbet
October 28, 2009
Over the course of the last month or so, your editor has been to six conferences on three continents. When engaged in that kind of travel, it is, of course, obligatory to determine which country has the best beer; normally, substantial amounts of research are required. It's also normal to hear what's on one's co-researchers' minds while carrying out this task. This time around, your editor heard grumbles from a surprising number of people, all about the same topic: copyright assignment policies.

In particular, developers are concerned and unhappy about the copyright assignment policy that Canonical has chosen for all of its projects. This agreement [PDF] is a relatively simple read; it fits on a single page. It applies to a long list of projects, including Bazaar, Launchpad, Quickly, Upstart, and Notify-osd; contributions to any of those projects must be made under the terms of this agreement.

So what do contributors agree to? The core term is this:

I hereby assign to Canonical with full title guarantee all copyright now or in the future subsisting in any part of the world in any Assigned Contributions.

So Canonical gets outright ownership of the code. In return, Canonical gives the original author rights to do almost anything with that code.

Assigning copyright to Canonical could well be an obstacle for potential contributors, but there are a couple of other terms which make things worse. One of them is this:

Canonical will ordinarily make the Assigned Contributions available to the public under a "Free Software Licence", according to the definition of that term published by the Free Software Foundation from time to time. Canonical may also, in its discretion, make the Assigned Contributions available to the public under other license terms.

There are many free software developers who might balk at giving their code away to somebody who "ordinarily" will make it available under a free license. And the final sentence is even worse; "other license terms" is, of course, euphemistic language for "proprietary terms."

Finally, there is the patent pledge:

I will not assert or enforce any patent against (a) Canonical (b) anyone who received the Software and/or the Assigned Contributions from Canonical or (c) anyone who received the Software and/or the Assigned Contributions under a Free Software Licence, where that patent is infringed by any of them exercising copyright rights in the Software and/or the Assigned Contributions

This language is likely to be just fine for many developers who have no intention of asserting patents against anybody anyway. But it's worth noting that (1) the patent grant is broad, including anything which might be added to the program (by others) in the future, and (2) there is no "self defense" exception allowing patents to be used to fight off litigation initiated by others. So, to a patent holder, this language is going to look like a unilateral disarmament pledge with unknown (and unknowable) scope. For many companies - even those which are opposed to software patents in general - that requirement may well be enough, on its own, to break the deal.

Contributor agreements abound, of course, though their terms vary widely. One might compare Canonical's agreement with the Free Software Foundation's language, which reads:

The Foundation promises that all distribution of the Work, or of any work "based on the Work", that takes place under the control of the Foundation or its assignees, shall be on terms that explicitly and perpetually permit anyone possessing a copy of the work to which the terms apply, and possessing accurate notice of these terms, to redistribute copies of the work to anyone on the same terms. These terms shall not restrict which members of the public copies may be distributed to. These terms shall not require a member of the public to pay any royalty to the Foundation or to anyone else for any permitted use of the work they apply to, or to communicate with the Foundation or its agents in any way either when redistribution is performed or on any other occasion.

Not all developers are big fans of the FSF, but most of them trust it to live up to those particular terms. The FSF agreement makes no mention of patents at all (though GPLv3 is certainly not silent on the subject).

What about other projects? The Apache Software Foundation has an agreement by which the ASF is granted a license (which it promises not to use "in a way that is contrary to the public benefit or inconsistent with its nonprofit status") but the author retains ownership of all code. Sun's contributor agreement [PDF], which now covers MySQL too, gives Sun the right to do anything with the code, but shares joint ownership with the author. An extreme example is the SugarCRM agreement, which appears to transfer not just the author's copyrights, but his or her patents (the actual patents, not a license) as well.

Agreements like Sun's and SugarCRM's are common when dealing with corporate-owned projects; they clearly prioritize control and the ability to take things proprietary over the creation of an independent development community. More community-oriented projects, instead, tend to take a different approach to contributor agreements. Canonical is being criticized in a way that SugarCRM is not, despite the fact that Canonical's agreement appears to be the friendlier of the two. A plausible reason for that difference is that Canonical presents itself as a community-oriented organization, but it is pushing a more corporate-style contributor agreement.

Canonical's policy is especially likely to worry other Linux distributors. They are often happy to contribute to a project controlled by a different distributor, but they do not normally do so under terms which allow the recipient to take the code proprietary. Licenses like the GPL ensure fair dealing between companies; contributor agreements which allow "other license terms" remove any assurance of fair dealing. It is not surprising that some people are uninterested in contributing code under such terms.

The real sticking point, at the moment, appears to be Upstart. Other distributors either have adopted it or are considering doing so; it does appear to be a substantial improvement over the old SYSV init scheme. In the course of adopting Upstart, these distributors are certain to fix problems and make improvements to suit their needs. But they are rather less certain to contribute those changes back under Canonical's terms. In his wanderings, your editor has heard developers talk about possibly forking Upstart. Another developer claimed to be working on a completely new alternative system for system initialization which would take lessons from Upstart, but which would be an independent development. Neither of these outcomes seems optimal.

Your editor sent in a query asking what prevents Canonical from adopting more contributor-friendly terms, but got no answer over the course of a couple of days. Groups requiring copyright assignment often claim that it's necessary for them to be able to take action against copyright infringers. But the projects which have had the most success in that area - the Linux kernel and Busybox, for example - have no copyright assignment policy. The other thing that copyright assignment allows, of course, is a relicensing of the code. The FSF has made use of this privilege to move its projects to GPLv3; companies like MySQL have, instead, used it to ship code under proprietary terms. One might assume that Canonical has no such intent, but the fact that Canonical has explicitly reserved the right to do so is unlikely to make people comfortable.

When developers contribute code to a project, they tend to get intangible rewards in return. So asking them to hand over ownership of the code as well might seem to be pushing things a little too far. Even so, many developers are willing to contribute under such terms. But there are limits, and allowing a competitor to take code proprietary may well be beyond those limits - as are overly-broad patent grants for contributors who are concerned about such things. Companies which demand such rights may find that their community projects are not as successful as they would like.

Comments (45 posted)

Mozilla refactors messaging with Raindrop

October 28, 2009

This article was contributed by Nathan Willis

Mozilla Labs recently pulled the covers off of Raindrop, a new project that attempts to rethink how messaging software presents information to users. In one sense, Raindrop is designed to function as a "grand unified inbox" aggregating email, instant messaging, and a wide range of site-specific message channels. These channels otherwise exist in complete isolation, which requires users to check multiple applications and web services to collect their incoming communication. But Raindrop also strives to better present the aggregated dispatches and notices, automatically sorting individual conversations from group discussions, and personal messages from automated announcements and updates.

Messages, messages everywhere....

Raindrop's web page says its mission is making it "enjoyable to participate in conversations from people you care about, whether the conversations are in email, on twitter, a friend's blog or as part of a social networking site." To that end, the application abstracts away from the user the work of retrieving messages, notifications, and replies from the various web and email accounts, and presents them together as a unified whole. On top of that, Raindrop attempts to figure out which messages and conversations are most likely to be important to the user, and filters them up to the top of the stack. One of the introduction videos gives the example of sorting personal email to the top of the stack, while putting automatically-generated alerts to the bottom.

Clearly, there are more than just two categories of message (personal and automatic); Raindrop filters list email as well, but pays more attention to threads in which the user is participating. On the microblogging front, Raindrop classifies direct messages and "@" replies above general status notices, and can thread back-and-forth exchanges just like email.
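This kind of triage can be sketched in a few lines of Python. This is a simplified illustration of the idea only; the function and category names here are hypothetical, not Raindrop's actual code:

```python
def classify(message, user_handle):
    """Toy triage in the spirit of Raindrop's sorting: direct messages
    and "@" replies outrank general updates and automated mail."""
    if message.get("direct"):                       # person-to-person
        return "personal"
    if "@" + user_handle in message.get("body", ""):
        return "mention"                            # an "@" reply to the user
    if message.get("automated"):                    # alerts, newsletters, etc.
        return "bulk"
    return "general"                                # everything else

inbox = [
    {"body": "lunch?", "direct": True},
    {"body": "@alice did you see this?"},
    {"body": "Your invoice is ready", "automated": True},
    {"body": "New blog post up"},
]

# Sort the most personal content toward the top, as Raindrop does.
rank = {"personal": 0, "mention": 1, "general": 2, "bulk": 3}
inbox.sort(key=lambda m: rank[classify(m, "alice")])
```

The real system naturally uses richer signals (thread participation, sender history, per-service metadata), but the ranking principle is the same.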


Raindrop's interface is undergoing constant study and redesign, but at present it features a "home" screen with a combined list of all messages, plus links to specialized views for content, including "Direct Messages," "Group Conversations," "Mail Folders," and "Mailing Lists." Raindrop's home screen sorts the newest conversations at the top, and all messages appear as conversation "bubbles" with a preview of their contents. Raindrop threads related messages together, and it flags each conversation with an icon to distinguish between what it believes are person-to-person conversations, group discussions, and announcements or other impersonal messages. At the bottom of the home screen is a summary block for content in which the user is not a direct participant — general Twitter updates, mailing list threads between other people, and so on.

The Raindrop team is making considerable efforts to solicit input and feedback from real-world users in order to adapt its design. The project's "guiding principles" emphasize its user-centric and participatory process. The project has not yet made a pre-packaged release, but has tagged a 0.1 milestone in its source code repository. In keeping with the participatory goal, the designers have issued two previews of interface changes, although neither is yet available to run.

Those interested can install and run the current code on Linux, Mac, and Windows machines, however. Raindrop is a web application, written primarily in Python and JavaScript and using CouchDB for storage, but it contains its own web server, so it can run on a normal desktop machine. Users can check out the Mercurial tree and install either the 0.1 milestone from October 22, or the less stable trunk. Installation instructions are provided on the Mozilla wiki; the notable dependencies include CouchDB version 0.10 or newer and Python 2.5 or 2.6.

After the code is checked out, the included script will check for specific Python packages (including Twisted, Paisley, and several support packages for dealing with specific services such as Twitter and RSS feeds) and check that CouchDB is configured and running. Once the script reports that everything is satisfied, users must manually create a ~/.raindrop file and populate it with account information for the services they wish to monitor. The current release includes IMAP email accounts, generic RSS feeds, and the popular commercial services Gmail, Twitter, and Skype.

Once Raindrop is configured, users start the service with the included script. The first time through, according to the installation guide, the script should be invoked as:

sync-messages --max-age=Ndays

which will fetch the previous N days' worth of incoming messages from each account. The initial import can take several minutes per account, so it is best to choose a small value for N. When the import finishes, users can access the Raindrop application from a web browser.

Functionality and ongoing development

At first glance, Raindrop's home screen shows what one would expect from any email client: message threads. More subtle than this automatic threading is Raindrop's attempt to combine all message sources into one "inflow" (as the project calls it). Each conversation category Raindrop presents combines threads from all of the configured accounts; "Group Conversations" contains email and @replies, "Sent" contains outgoing tweets and mail, and so on.

Lead designer Bryan Clark describes this automated sifting of content by message type as one of the key goals of the project. The first iteration of the user interface looks much like a webmail client, but the prototypes posted by the developers indicate that they plan to push the separation of different content types even further, perhaps clustering announcements and other non-personal messages into separate areas of the screen, giving more room to the "important" conversations with a "dashboard"-style layout.

As of today, it is difficult to get a solid feel for how this intelligent processing of messages will work in practice, because there are so few supported services. It is certainly handy to access all of the assorted account inboxes in a single location, but the actual value of merging message sources increases the more sources there are to consider. Users who interact with the same contacts via Twitter, Skype, and email can test the combined-message-threading more rigorously, but for many users, additional services may have to be added to make the user experience diverge significantly from a traditional, single-source web application — perhaps even a service from outside the web itself, such as an SMS gateway.

The Raindrop team solicits input from outside users and developers through user and developer mailing lists, and has posted documentation on the front- and back-end architecture. In addition, the design team working on the user interface and user experience maintains a blog chronicling its work, and posts its design ideas and mock-ups to Flickr. Users and developers are encouraged to send feedback and ideas to both groups.

One interesting feature of Raindrop that should help encourage its open development is that it has Mozilla Bespin's code-editing functionality built-in. At the bottom of each page is a link labeled "Extend" that opens a "Raindrop Extender" code editor window in a new tab. Raindrop is structured to permit easy addition of extensions written in JavaScript, HTML, and CSS.

Raindrop Extender includes two extensions that the user can activate and begin using on his or her local Raindrop installation immediately. One parses each message for URLs and appends a list of the URLs found in each message to the message's preview bubble for easy access. The second performs a similar task for Flickr URLs, but rather than providing a link, it fetches and displays a thumbnail image of the file in the preview.
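The job of that URL-listing extension can be sketched in Python. This illustrates only the idea; actual Raindrop extensions are written in JavaScript against an extension API not shown here:

```python
import re

# A deliberately simple pattern; a real extension would be more careful
# about trailing punctuation, bare domains, and so on.
URL_RE = re.compile(r'https?://[^\s<>"]+')

def extract_urls(message_body):
    """Return the URLs found in a message, suitable for listing
    in the message's preview bubble."""
    return URL_RE.findall(message_body)

body = "Slides at http://example.com/talk.pdf and photos at https://flickr.com/p/42"
```

The Flickr variant would do the same scan, then fetch a thumbnail for each matching URL instead of merely linking it.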

Rethinking messaging

Several early blog reactions to Raindrop compared it to Google's Wave, but beyond the aggregation of multiple content sources, the two projects have little in common. Wave centers around real-time and collaborative content editing, while Raindrop focuses on filtering messages in a user-centric way. Other projects have attempted to "rethink the inbox" over the years — much of the Getting Things Done (GTD) craze took aim at message processing, for example, and, although it has never attracted critical mass, a big part of the Chandler project's goal was to merge calendar, email, and to-do into a unified stream. A horde of Firefox and Thunderbird extensions exist to try and combine the multitude of single-site message streams into a single application.

Raindrop has certainly found a problem in need of a solution; even with open standards and open protocols, online communication today has splintered into more and more messaging services that are blindly ignorant of each other — consider how many ostensibly "VoIP services" also provide instant messaging functionality as if users were in need of another IM account. But above all else, the Raindrop design team seems to understand that to a user, an incoming message is important or unimportant based on who sent it and what it says — it does not matter from which site or protocol the message originated. Its "grand unified inbox" does not stop at un-splintering all of the incoming content, it actually tries to make useful sense out of it.

The current 0.1 milestone of Raindrop is clearly just the first drop in the bucket, but it deserves kudos for tackling this complicated issue — and for doing it in a completely open way. Clark and the other Raindrop developers at Mozilla Messaging are the team that developed Thunderbird; whether Raindrop's concepts remain limited to a web application, become integrated into Thunderbird and other stand-alone clients, or some combination of both remains to be seen. Wherever it goes, though, Raindrop will be an interesting experiment to watch.

Comments (3 posted)

A report from JLS

By Jonathan Corbet
October 26, 2009
Like a number of Asian countries, Japan has, in the past, had a reputation for being a great consumer of Linux: Japanese companies have been happy to make use of it when it suited them, but contributions back to Linux have been relatively scarce. The situation has changed over the years, and Japanese developers are now a significant part of our community. We get a lot of code from Japan, and, increasingly, ideas and leadership as well. Japan is pulling its weight, and, possibly, more than that.

Given this context, it makes sense that the 2009 Kernel Summit went to Tokyo. Japan (and the Linux Foundation) did a great job of hosting this high-profile event; some developers were heard to suggest that the summit should be held there every year. But one also should not overlook the significance of the first Japan Linux Symposium which followed the Summit. JLS 2009 is the beginning of what is intended to be an annual, world-class Linux gathering. Your editor's impression is that this event has gotten off to a good start.

The JLS program featured a long list of developers from Asia and beyond. Your editor will summarize a few of the talks here; others will be covered separately.


[Photo frenzy] Arguably, one important prerequisite to the creation of a thriving development community is the existence of local rock-star programmers who can serve as an inspiration to others. Japan certainly has one of those in the form of Yukihiro Matsumoto, best known as the creator of the Ruby language. He is known in Japan as an inspirational speaker, though, your editor fears, some of that inspiration was lost as the simultaneous translators worked flat-out to keep up with his fast-paced talk. Certainly the audience was clearly thrilled to have an opportunity to hear him speak.

His talk, held during the first-day keynote block, was aimed at a non-technical audience; it thus offered relatively little that would be new to LWN readers. "Matz" talked about the Unix philosophy and how it suits his way of working - "simplicity," "extensibility," and "programmability" were the keywords here. Open source was a good thing for him as well; it allowed him to play with (and learn from) a wide variety of software and set the stage for the development of Ruby. The posting of Ruby itself was a big surprise - he had bug reports and patches within hours of the creation of the mailing list. Without the open source community, Ruby would never have reached its current level of functionality or adoption.

Amusingly, Matsumoto-san noted that his objective at the outset was to create an object-oriented Perl. He did not know about Python at the time; had he stumbled across that language earlier, things might have gone much differently.


Security modules are among the most difficult types of code to merge into the kernel. Pathname-based access control techniques are a hard sell even by the standards of security code in general; one need only look at the fate of AppArmor to see how difficult it can be. So a first-time contributor who merges a security module using pathname-based techniques has accomplished something notable. That contributor is Toshiharu Harada, who saw TOMOYO Linux merged into 2.6.30, two years after its initial posting. Harada-san talked about his experience in a session at JLS.

[Toshiharu Harada] Getting started with kernel development is hard, despite the existence of a lot of good documents on how to go about it. We still make mistakes. The biggest problems are simple human nature and the fact that we don't like reading documentation; these, he said, are difficult issues to patch. There is too much stuff under the kernel's documentation directory, and we would much rather go and code something than read. But there are things we should look at; he suggested HOWTO, SubmitChecklist, and CodingStyle. He also liked Linus's ManagementStyle document, which contains such un-Japanese advice as:

Most people are idiots, and being a manager means you'll have to deal with it, and perhaps more importantly, that _they_ have to deal with _you_.

Linux kernel documentation, Harada-san noted, is tremendously practical.

His advice - derived from the many mistakes made in the process of getting TOMOYO Linux merged - was equally practical. Send patches, not just URLs. Stick to the coding style. Keep your patch series bisectable. Use existing data structures and APIs in the kernel. Be sure to send copies to the right people. Don't ask others to make changes for you - just make them. Try not to waste reviewers' time. And so on.

There are, he noted, lots of kernel developers who are willing to help those trying to figure out the system. Arguably the real lesson from the talk - never explicitly stated - was related to that: Harada-san was able to overcome obstacles and get his code into the kernel because he listened to the people who were trying to help him. If more developers would adopt that approach, we would have fewer failed attempts to engage with the development process.

On Japanese participation

Satoru Ueda is one of the strongest proponents of the use of - and contributions to - Linux within Sony. His efforts once led to a Sony vice-CEO asking him whether he was actually working for Panasonic, which seemed to be the beneficiary of his efforts. Ueda-san used his JLS talk to examine why Japanese developers often hesitate to work with the development community.

Is Japanese non-participation, he asked, a cultural problem? In part it might be. In general, he says, Japanese people tend to respond to strangers with fear, worrying about what unknown people might do to them. Westerners, instead, tend to be much more aware that strangers, while potentially dangerous, can also bring good things. That makes them more open to things like working in development communities.

That said, Japanese attitudes in general - and toward the open source community in particular - are changing. Japanese hesitation in this area is not really a cultural issue, set in stone; instead, getting past it is just a matter of adaptation.

Economics is also an important issue. Japanese executives are starting to see the economic advantages of open source software, and that is making them fairly excited about being a part of it. Mid-level managers are decidedly less enthusiastic; they fear that community participation could erode their power and influence within the company. They also see the company as stronger than the community, and feel a need to keep core development competence within the company. Developers, too, are hesitant. The high visibility afforded by community participation is relatively unhelpful in Japan, where labor mobility is quite low. They fear that managers may not understand what they do, they worry about working in an unfamiliar language, and they fear being flamed in public.

Again, things seem to be getting better. Labor mobility is on the rise in Japan, and some managers are beginning to figure things out. And there are a lot of open-source developers in Japan. So, in the end, Ueda-san is optimistic about the future of Japanese participation in the development community.

[Tokyo by night]

Looking at how the Japan Linux Symposium went, your editor would be inclined to agree with that optimism. The event was well attended by highly-engaged developers from Japan and beyond. Questions during the talks were subdued in the Japanese fashion, but the hallway discussions were lively. JLS mirrors a growing and enthusiastic development community. This event is off to a good start; if it can retain its success next year in the absence of the Kernel Summit, it may well become one of the definitive conferences worldwide.

Comments (8 posted)

Page editor: Jonathan Corbet


"Evil Maid" attack against disk encryption

By Jake Edge
October 28, 2009

Physical security is important. The "Evil Maid" attack serves as a reminder that briefly allowing a laptop out of your control, even with an encrypted hard disk, means that all security bets are off—the machine should be considered potentially compromised. Obviously different users have different levels of paranoia about their data security, but the Evil Maid attack shows just how simple it can be for others to access your data.

There is nothing particularly new in the proof-of-concept (PoC) attack against TrueCrypt disk encryption software, but the simplicity of the approach should give one pause. Joanna Rutkowska described the attack back in January, but the need for physical computer security goes back much further than that. Folks are less wary of physical attacks against laptops today because of whole-disk encryption. Rutkowska's PoC, along with last year's report on "cold boot" attacks, should make it clear that encryption—at least without some kind of Trusted Platform Module (TPM) support—is not a complete solution.

The basic idea behind Evil Maid is that someone gets access to a laptop for a fairly short period of time (a few minutes), and, in that time, boots it from a USB key. One obvious vector is a hotel maid (or someone acting as one), who enters someone's room while they are out to dinner, which is what gives the attack its name. The USB key contains a payload that hooks the TrueCrypt password prompting code and stores the last password entered. The payload gets added to the Master Boot Record (MBR) of the laptop so that it becomes active on the next boot.

While it has not been implemented in the PoC, there is no reason that the malware couldn't send the password off via the network; currently it just reports it back the next time the Evil Maid USB key is booted. That would require the attacker to access the laptop twice—with its user typing in the encryption key in between—but a multi-day hotel stay would give ample opportunity for that to occur.

As Bruce Schneier points out, this attack is in no way limited to TrueCrypt, as other solutions suffer from the same vulnerabilities. Both Schneier and Rutkowska look at some potential workarounds, but, in the final analysis, physical access allows an attacker too many ways around these security measures. Even Trusted Computing, with appropriate TPM hardware, can succumb to certain kinds of attacks.

Microsoft's BitLocker drive encryption uses the TPM, which provides reasonable assurance that the right code is being booted, but even that can fall prey to Evil Maid-style attacks, as Rutkowska describes:

Namely the Evil Maid for Bitlocker would have to display a fake Bitlocker prompt (that could be identical to the real Bitlocker prompt), but after obtaining a correct password from the user Evil Maid would not be able to pass the execution to the real Bitlocker code, as the SRTM [Static Root of Trust Measurement] chain will be broken. Instead, Evil Maid would have to pretend that the password was wrong, uninstall itself, and then reboot the platform. Thus, a Bitlocker user that is confident that he or she entered the correct password, but the OS didn't boot correctly, should destroy the laptop.

Rutkowska also describes a "Poor Man's Solution" which calculates hashes of various unencrypted portions of the disk (especially the MBR). The Disk Hasher is a bootable Linux-based USB key that calculates and stores the hashes on the USB key, as well as verifying the correct hashes prior to booting. As she points out, it only protects against disk-based attacks—BIOS reflashing would subvert Disk Hasher.
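The core of that idea is easy to sketch in Python: hash the unencrypted sectors of the disk (here, a 512-byte MBR-sized region) and compare the result against a previously stored value. This is a simplified illustration of the approach, not Rutkowska's actual Disk Hasher:

```python
import hashlib

MBR_SIZE = 512  # the first sector, where Evil Maid plants its hook

def sector_hash(device_path, length=MBR_SIZE):
    """SHA-256 of the first `length` bytes of a disk (or disk image)."""
    with open(device_path, "rb") as dev:
        return hashlib.sha256(dev.read(length)).hexdigest()

def verify(device_path, stored_hash):
    """True if the unencrypted boot sector is unchanged since enrollment."""
    return sector_hash(device_path) == stored_hash
```

In real use the reference hashes live on the trusted, bootable USB key itself and the check runs before every boot; as noted above, this still cannot catch BIOS-level tampering.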

Requiring a password in the BIOS before booting is another possible workaround, but one that may not provide as much security as it at first seems. BIOS reflashing is one possible attack, but an easier—though more time-consuming than the "standard" Evil Maid attack—method would be to remove the disk, attach it to another laptop and install the necessary code. It also adds complexity to the attack, but the 5-15 minutes needed to swap out a laptop hard disk is not all that difficult to come by in the hotel scenario.

This PoC, along with other attacks against encrypted disks, is very useful to remind users that hard disk encryption is no panacea. You still must consider which kinds of threats you are trying to protect against. Disk encryption is great for preventing accidental disclosure of private information when someone steals a laptop, but is much less useful for an attack that is focused on accessing the data on a particular laptop. Much like internet security, fairly straightforward protection techniques are fine to thwart the random attacker but are probably insufficient for one who is focused on subverting your defenses in particular.

Comments (25 posted)

Brief items

Firefox 3.5.4 and 3.0.15 now available for download

Mozilla has announced the availability of Firefox 3.5.4 and 3.0.15. Each fixes some fairly serious-sounding security problems (3.5.4, 3.0.15), including multiple "critical" flaws. "We strongly recommend that all Firefox users upgrade to this latest release. If you already have Firefox 3.5 or Firefox 3, you will receive an automated update notification within 24 to 48 hours. This update can also be applied manually by selecting "Check for Updates..." from the Help menu." Distribution updates will presumably be available soon as well.

Full Story (comments: none)

New vulnerabilities

acroread: multiple vulnerabilities

Package(s): acroread
CVE #(s): CVE-2007-0048 CVE-2009-2979 CVE-2009-2980 CVE-2009-2981 CVE-2009-2982 CVE-2009-2983 CVE-2009-2985 CVE-2009-2986 CVE-2009-2988 CVE-2009-2990 CVE-2009-2991 CVE-2009-2993 CVE-2009-2994 CVE-2009-2996 CVE-2009-2997 CVE-2009-2998 CVE-2009-3431 CVE-2009-3458 CVE-2009-3459 CVE-2009-3462
Created: October 26, 2009
Updated: October 28, 2009

From the CVE entries:

CVE-2007-0048: Adobe Acrobat Reader Plugin before 8.0.0, and possibly the plugin distributed with Adobe Reader 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2, when used with Internet Explorer, Google Chrome, or Opera, allows remote attackers to cause a denial of service (memory consumption) via a long sequence of # (hash) characters appended to a PDF URL, related to a "cross-site scripting issue."

CVE-2009-2979: Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 do not properly perform XMP-XML entity expansion, which allows remote attackers to cause a denial of service via a crafted document.

CVE-2009-2980: Integer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allows attackers to cause a denial of service or possibly execute arbitrary code via unspecified vectors.

CVE-2009-2981: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to bypass intended Trust Manager restrictions via unspecified vectors.

CVE-2009-2982: An unspecified certificate in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 might allow remote attackers to conduct a "social engineering attack" via unknown vectors.

CVE-2009-2983: Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors.

CVE-2009-2985: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2996.

CVE-2009-2986: Multiple heap-based buffer overflows in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.

CVE-2009-2988: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which allows attackers to cause a denial of service via unspecified vectors.

CVE-2009-2990: Array index error in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 might allow attackers to execute arbitrary code via unspecified vectors.

CVE-2009-2991: Unspecified vulnerability in the Mozilla plug-in in Adobe Reader and Acrobat 8.x before 8.1.7, and possibly 7.x before 7.1.4 and 9.x before 9.2, might allow remote attackers to execute arbitrary code via unknown vectors.

CVE-2009-2993: The JavaScript for Acrobat API in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 does not properly implement the (1) Privileged Context and (2) Safe Path restrictions for unspecified JavaScript methods, which allows remote attackers to create arbitrary files, and possibly execute arbitrary code, via the cPath parameter in a crafted PDF file. NOTE: some of these details are obtained from third party information.

CVE-2009-2994: Buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.

CVE-2009-2996: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2985.

CVE-2009-2997: Heap-based buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.

CVE-2009-2998: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-3458.

CVE-2009-3431: Stack consumption vulnerability in Adobe Reader and Acrobat 9.1.3, 9.1.2, 9.1.1, and earlier 9.x versions; 8.1.6 and earlier 8.x versions; and possibly 7.1.4 and earlier 7.x versions allows remote attackers to cause a denial of service (application crash) via a PDF file with a large number of [ (open square bracket) characters in the argument to the alert method. NOTE: some of these details are obtained from third party information.

CVE-2009-3458: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2998.

CVE-2009-3459: Heap-based buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allows remote attackers to execute arbitrary code via a crafted PDF file that triggers memory corruption, as exploited in the wild in October 2009. NOTE: some of these details are obtained from third party information.

CVE-2009-3462: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 on Unix, when Debug mode is enabled, allow attackers to execute arbitrary code via unspecified vectors, related to a "format bug."

Gentoo 200910-03 acroread 2009-10-25
SuSE SUSE-SA:2009:049 acroread, 2009-10-26

Comments (none posted)

acroread: denial of service

Package(s):acroread,acroread_ja CVE #(s):CVE-2009-2992
Created:October 26, 2009 Updated:October 28, 2009

From the CVE entry:

CVE-2009-2992: An unspecified ActiveX control in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 does not properly validate input, which allows attackers to cause a denial of service via unknown vectors.

SuSE SUSE-SA:2009:049 acroread, 2009-10-26

Comments (none posted)

firefox: multiple vulnerabilities

Package(s):firefox seamonkey CVE #(s):CVE-2009-1563 CVE-2009-3274 CVE-2009-3370 CVE-2009-3372 CVE-2009-3373 CVE-2009-3374 CVE-2009-3375 CVE-2009-3376 CVE-2009-3380 CVE-2009-3382
Created:October 28, 2009 Updated:June 14, 2010
Description: Firefox 3.5.4 and 3.0.15 have been released with fixes for the usual set of scary vulnerabilities.
Gentoo 201301-01 firefox 2013-01-07
Mandriva MDVSA-2010:071 mozilla-thunderbird 2010-04-23
Fedora FEDORA-2010-7100 seamonkey 2010-04-21
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14
CentOS CESA-2010:0153 thunderbird 2010-03-26
Ubuntu USN-915-1 thunderbird 2010-03-18
CentOS CESA-2010:0154 thunderbird 2010-03-17
Red Hat RHSA-2010:0153-02 thunderbird 2010-03-17
Red Hat RHSA-2010:0154-02 thunderbird 2010-03-17
Mandriva MDVSA-2009:290-1 firefox 2009-12-02
Debian DSA-1931-1 nspr 2009-11-08
Slackware SSA:2009-306-01 mozilla 2009-11-03
Fedora FEDORA-2009-10878 epiphany-extensions 2009-10-29
Red Hat RHSA-2009:1530-01 firefox 2009-10-27
Mandriva MDVSA-2009:294 firefox 2009-11-05
Fedora FEDORA-2009-10878 evolution-rss 2009-10-29
Fedora FEDORA-2009-10878 galeon 2009-10-29
Fedora FEDORA-2009-10878 gnome-python2-extras 2009-10-29
Fedora FEDORA-2009-10878 gnome-web-photo 2009-10-29
Ubuntu USN-853-2 firefox 2009-11-11
SuSE SUSE-SA:2009:052 MozillaFirefox 2009-11-04
Ubuntu USN-853-1 firefox-3.0, firefox-3.5, xulrunner-1.9, xulrunner-1.9.1 2009-10-31
Fedora FEDORA-2009-10878 firefox 2009-10-29
Fedora FEDORA-2009-10878 ruby-gnome2 2009-10-29
Fedora FEDORA-2009-10981 yelp 2009-11-04
Fedora FEDORA-2009-10981 xulrunner 2009-11-04
Fedora FEDORA-2009-10981 ruby-gnome2 2009-11-04
Fedora FEDORA-2009-10981 pcmanx-gtk2 2009-11-04
Fedora FEDORA-2009-10981 perl-Gtk2-MozEmbed 2009-11-04
Fedora FEDORA-2009-10981 mugshot 2009-11-04
Fedora FEDORA-2009-10981 Miro 2009-11-04
Fedora FEDORA-2009-10981 mozvoikko 2009-11-04
Fedora FEDORA-2009-10981 kazehakase 2009-11-04
Fedora FEDORA-2009-10981 google-gadgets 2009-11-04
Fedora FEDORA-2009-10981 gnome-web-photo 2009-11-04
Fedora FEDORA-2009-10981 gnome-python2-extras 2009-11-04
Fedora FEDORA-2009-10981 epiphany-extensions 2009-11-04
Fedora FEDORA-2009-10981 gecko-sharp2 2009-11-04
Fedora FEDORA-2009-10981 evolution-rss 2009-11-04
Fedora FEDORA-2009-10981 firefox 2009-11-04
Fedora FEDORA-2009-10981 galeon 2009-11-04
Fedora FEDORA-2009-10981 epiphany 2009-11-04
Fedora FEDORA-2009-10981 blam 2009-11-04
Fedora FEDORA-2009-10878 chmsee 2009-10-29
Fedora FEDORA-2009-10878 google-gadgets 2009-10-29
Fedora FEDORA-2009-10878 kazehakase 2009-10-29
Fedora FEDORA-2009-10878 Miro 2009-10-29
Fedora FEDORA-2009-10878 monodevelop 2009-10-29
Fedora FEDORA-2009-10878 mozvoikko 2009-10-29
Fedora FEDORA-2009-10878 pcmanx-gtk2 2009-10-29
Fedora FEDORA-2009-10878 perl-Gtk2-MozEmbed 2009-10-29
Fedora FEDORA-2009-10878 seahorse-plugins 2009-10-29
Fedora FEDORA-2009-10878 xulrunner 2009-10-29
Fedora FEDORA-2009-10878 yelp 2009-10-29
Mandriva MDVSA-2009:290 firefox 2009-10-29
Debian DSA-1922-1 xulrunner 2009-10-28
Fedora FEDORA-2009-10878 hulahop 2009-10-29
Fedora FEDORA-2009-10878 blam 2009-10-29
Fedora FEDORA-2009-10878 eclipse 2009-10-29
CentOS CESA-2009:1531 seamonkey 2009-10-28
CentOS CESA-2009:1531 seamonkey 2009-10-28
SuSE SUSE-SR:2009:018 cyrus-imapd, neon/libneon, freeradius, strongswan, openldap2, apache2-mod_jk, expat, xpdf, mozilla-nspr 2009-11-10
Fedora FEDORA-2009-10878 epiphany 2009-10-29
CentOS CESA-2009:1530 firefox 2009-10-28
Red Hat RHSA-2009:1531-01 seamonkey 2009-10-27

Comments (none posted)

kernel: missing initialization flaws

Package(s):kernel CVE #(s):CVE-2005-4881 CVE-2009-3228
Created:October 22, 2009 Updated:October 8, 2010
Description: From the Red Hat alert:

multiple, missing initialization flaws were found in the Linux kernel. Padding data in several core network structures was not initialized properly before being sent to user-space. These flaws could lead to information leaks. (CVE-2005-4881, CVE-2009-3228, Moderate)

Mandriva MDVSA-2010:188 kernel 2010-09-23
Mandriva MDVSA-2010:198 kernel 2010-10-07
SuSE SUSE-SA:2009:064 kernel 2009-12-22
SuSE SUSE-SA:2009:061 kernel 2009-12-14
Mandriva MDVSA-2009:329 kernel 2009-12-09
Ubuntu USN-864-1 linux, linux-source-2.6.15 2009-12-05
SuSE SUSE-SA:2009:060 kernel 2009-12-02
Red Hat RHSA-2009:1540-01 kernel-rt 2009-11-03
Red Hat RHSA-2009:1548-01 kernel 2009-11-03
CentOS CESA-2009:1548 kernel 2009-11-04
Red Hat RHSA-2009:1522-01 kernel 2009-10-22
Mandriva MDVSA-2009:301 kernel 2009-11-20
Debian DSA-1929-1 linux-2.6 2009-11-05
Debian DSA-1927-1 linux-2.6 2009-11-05
Debian DSA-1928-1 linux-2.6.24 2009-11-05
CentOS CESA-2009:1522 kernel 2009-10-26

Comments (none posted)

kernel: buffer overflow

Package(s):kernel CVE #(s):CVE-2009-2584
Created:October 22, 2009 Updated:October 28, 2009
Description: From the National Vulnerability Database entry:

"Off-by-one error in the options_write function in drivers/misc/sgi-gru/gruprocfs.c in the SGI GRU driver in the Linux kernel and earlier on ia64 and x86 platforms might allow local users to overwrite arbitrary memory locations and gain privileges via a crafted count argument, which triggers a stack-based buffer overflow."
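The off-by-one pattern described here is easy to illustrate. The following is a hypothetical sketch of the bug class, not the actual SGI GRU code: a bound check that uses less-than-or-equal lets a count equal to the buffer size through, so a later terminator store lands one byte past the end of the stack buffer.

```c
#include <stddef.h>

#define OPT_BUFSZ 80	/* stand-in for the driver's stack buffer size */

/* Flawed check: permits count == OPT_BUFSZ, so a subsequent
 * buf[count] = '\0' writes one byte past the end of the buffer. */
static int count_ok_flawed(size_t count)
{
	return count <= OPT_BUFSZ;
}

/* Fixed check: leaves room for the terminating byte. */
static int count_ok_fixed(size_t count)
{
	return count < OPT_BUFSZ;
}
```

With the flawed predicate, a count of exactly 80 passes validation and the overflow is triggered; the fixed predicate rejects it.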

Ubuntu USN-852-1 linux, linux-source-2.6.15 2009-10-22

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2009-2695
Created:October 22, 2009 Updated:March 1, 2010
Description: From the National Vulnerability Database entry:

"The Linux kernel before 2.6.31-rc7 does not properly prevent mmap operations that target page zero and other low memory addresses, which allows local users to gain privileges by exploiting NULL pointer dereference vulnerabilities, related to (1) the default configuration of the allow_unconfined_mmap_low boolean in SELinux on Red Hat Enterprise Linux (RHEL) 5, (2) an error that causes allow_unconfined_mmap_low to be ignored in the unconfined_t domain, (3) lack of a requirement for the CAP_SYS_RAWIO capability for these mmap operations, and (4) interaction between the mmap_min_addr protection mechanism and certain application programs."

Debian DSA-2004-1 linux-2.6.24 2010-02-27
Red Hat RHSA-2009:1672-01 kernel 2009-12-15
Red Hat RHSA-2009:1540-01 kernel-rt 2009-11-03
Red Hat RHSA-2009:1548-01 kernel 2009-11-03
CentOS CESA-2009:1548 kernel 2009-11-04
Debian DSA-1915-1 linux-2.6 2009-10-22
Ubuntu USN-852-1 linux, linux-source-2.6.15 2009-10-22
Red Hat RHSA-2009:1587-01 kernel 2009-11-17

Comments (none posted)

kernel: insufficient randomization

Package(s):kernel CVE #(s):CVE-2009-3238
Created:October 22, 2009 Updated:February 15, 2010
Description: From the National Vulnerability Database entry:

"The get_random_int function in drivers/char/random.c in the Linux kernel before 2.6.30 produces insufficiently random numbers, which allows attackers to predict the return value, and possibly defeat protection mechanisms based on randomization, via vectors that leverage the function's tendency to "return the same value over and over again for long stretches of time.""

SuSE SUSE-SA:2010:012 kernel 2010-02-15
SuSE SUSE-SA:2009:055 kernel 2009-11-12
Debian DSA-1928-1 linux-2.6.24 2009-11-05
SuSE SUSE-SA:2009:054 kernel 2009-11-11
Debian DSA-1929-1 linux-2.6 2009-11-05
Debian DSA-1927-1 linux-2.6 2009-11-05
Ubuntu USN-852-1 linux, linux-source-2.6.15 2009-10-22

Comments (none posted)

kernel: insecure file creation

Package(s):kernel CVE #(s):CVE-2009-3286
Created:October 22, 2009 Updated:February 15, 2010
Description: From the National Vulnerability Database entry:

"NFSv4 in the Linux kernel 2.6.18, and possibly other versions, does not properly clean up an inode when an O_EXCL create fails, which causes files to be created with insecure settings such as setuid bits, and possibly allows local users to gain privileges, related to the execution of the do_open_permission function even when a create fails."

SuSE SUSE-SA:2010:012 kernel 2010-02-15
SuSE SUSE-SA:2009:060 kernel 2009-12-02
Debian DSA-1928-1 linux-2.6.24 2009-11-05
CentOS CESA-2009:1548 kernel 2009-11-04
Red Hat RHSA-2009:1548-01 kernel 2009-11-03
Ubuntu USN-852-1 linux, linux-source-2.6.15 2009-10-22
Debian DSA-1929-1 linux-2.6 2009-11-05
Debian DSA-1915-1 linux-2.6 2009-10-22

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2009-3288
Created:October 22, 2009 Updated:May 7, 2010
Description: From the National Vulnerability Database entry:

"The sg_build_indirect function in drivers/scsi/sg.c in Linux kernel 2.6.28-rc1 through 2.6.31-rc8 uses an incorrect variable when accessing an array, which allows local users to cause a denial of service (kernel OOPS and NULL pointer dereference), as demonstrated by using xcdroast to duplicate a CD. NOTE: this is only exploitable by users who can open the cdrom device."

rPath rPSA-2010-0037-1 kernel 2010-05-07
Ubuntu USN-852-1 linux, linux-source-2.6.15 2009-10-22

Comments (none posted)

kernel: denial of service

Package(s):linux-2.6 CVE #(s):CVE-2009-3613
Created:October 23, 2009 Updated:December 22, 2009
Description: From the Debian advisory: Alistair Strachan reported an issue in the r8169 driver. Remote users can cause a denial of service (IOMMU space exhaustion and system crash) by transmitting a large amount of jumbo frames.
SuSE SUSE-SA:2009:064 kernel 2009-12-22
CentOS CESA-2009:1671 kernel 2009-12-18
Red Hat RHSA-2009:1671-01 kernel 2009-12-15
Ubuntu USN-864-1 linux, linux-source-2.6.15 2009-12-05
Debian DSA-1928-1 linux-2.6.24 2009-11-05
Red Hat RHSA-2009:1540-01 kernel-rt 2009-11-03
Red Hat RHSA-2009:1548-01 kernel 2009-11-03
CentOS CESA-2009:1548 kernel 2009-11-04
Debian DSA-1915-1 linux-2.6 2009-10-22

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2009-3612
Created:October 27, 2009 Updated:February 15, 2010
Description: From the National Vulnerability Database entry:

The tcf_fill_node function in net/sched/cls_api.c in the netlink subsystem in the Linux kernel 2.6.x before 2.6.32-rc5, and earlier, does not initialize a certain tcm__pad2 structure member, which might allow local users to obtain sensitive information from kernel memory via unspecified vectors. NOTE: this issue exists because of an incomplete fix for CVE-2005-4881.

SuSE SUSE-SA:2010:012 kernel 2010-02-15
SuSE SUSE-SA:2009:064 kernel 2009-12-22
CentOS CESA-2009:1670 kernel 2009-12-17
Red Hat RHSA-2009:1670-01 kernel 2009-12-15
SuSE SUSE-SA:2009:061 kernel 2009-12-14
Mandriva MDVSA-2009:329 kernel 2009-12-09
Ubuntu USN-864-1 linux, linux-source-2.6.15 2009-12-05
SuSE SUSE-SA:2009:060 kernel 2009-12-02
Red Hat RHSA-2009:1540-01 kernel-rt 2009-11-03
Mandriva MDVSA-2009:301 kernel 2009-11-20
Debian DSA-1929-1 linux-2.6 2009-11-05
Fedora FEDORA-2009-10639 kernel 2009-10-21
Debian DSA-1927-1 linux-2.6 2009-11-05
Fedora FEDORA-2009-11038 kernel 2009-11-05
Debian DSA-1928-1 linux-2.6.24 2009-11-05

Comments (none posted)

mapserver: integer overflow

Package(s):mapserver CVE #(s):CVE-2009-2281
Created:October 23, 2009 Updated:October 28, 2009
Description: From the Debian advisory: An integer overflow when processing HTTP requests can lead to a heap-based buffer overflow. An attacker can use this to execute arbitrary code either via crafted Content-Length values or a large HTTP request. This is partly because of an incomplete fix for CVE-2009-0840.
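The general shape of this bug class is worth sketching. In the hypothetical code below (an illustration, not mapserver's actual source), adding a small constant to an attacker-supplied length can wrap around, so the allocation ends up far smaller than the copy that follows it:

```c
#include <stdint.h>

/* Flawed: size arithmetic on an untrusted length with no overflow
 * check. A Content-Length near UINT32_MAX wraps to a tiny value, so
 * the buffer allocated from this result is undersized for the copy. */
static uint32_t body_alloc_size_flawed(uint32_t content_length)
{
	return content_length + 2;	/* wraps for huge lengths */
}

/* Fixed: reject lengths for which the addition would overflow. */
static int body_alloc_size_checked(uint32_t content_length, uint32_t *out)
{
	if (content_length > UINT32_MAX - 2)
		return -1;
	*out = content_length + 2;
	return 0;
}
```

A Content-Length of UINT32_MAX makes the flawed computation wrap to 1, and a later memcpy of the real request body then runs far past the end of the 1-byte heap allocation.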
Debian DSA-1914-1 mapserver 2009-10-22

Comments (none posted)

nginx: denial of service

Package(s):nginx CVE #(s):
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Debian alert:

Jasson Bell discovered that a remote attacker could cause a denial of service (segmentation fault) by sending a crafted request.

Debian DSA-1920-1 nginx 2009-10-26

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s):phpMyAdmin CVE #(s):CVE-2009-3696 CVE-2009-3697
Created:October 26, 2009 Updated:October 28, 2009

From the CVE entries:

CVE-2009-3696: Cross-site scripting (XSS) vulnerability in phpMyAdmin 2.11.x before and 3.x before allows remote attackers to inject arbitrary web script or HTML via a crafted name for a MySQL table.

CVE-2009-3697: SQL injection vulnerability in the PDF schema generator functionality in phpMyAdmin 2.11.x before and 3.x before allows remote attackers to execute arbitrary SQL commands via unspecified interface parameters.

SuSE SUSE-SR:2009:017 php5, newt, rubygem-actionpack, rubygem-activesupport, java-1_4_2-ibm, postgresql, samba, phpMyAdmin, viewvc 2009-10-26
Debian DSA-1918-1 phpmyadmin 2009-10-25

Comments (none posted)

poppler: denial of service

Package(s):poppler CVE #(s):CVE-2009-3605
Created:October 23, 2009 Updated:March 5, 2010
Description: From the Ubuntu advisory: It was discovered that poppler contained multiple security issues when parsing malformed PDF documents. If a user or automated system were tricked into opening a crafted PDF file, an attacker could cause a denial of service or execute arbitrary code with privileges of the user invoking the program.
Gentoo 201310-03 poppler 2013-10-06
Mandriva MDVSA-2011:175 poppler 2011-11-15
Mandriva MDVSA-2010:055 poppler 2010-03-04
Mandriva MDVSA-2009:346 kde 2009-12-29
Mandriva MDVSA-2009:334 poppler 2009-12-17
SuSE SUSE-SR:2009:018 cyrus-imapd, neon/libneon, freeradius, strongswan, openldap2, apache2-mod_jk, expat, xpdf, mozilla-nspr 2009-11-10
Ubuntu USN-850-2 poppler 2009-10-22
Slackware SSA:2009-302-02 poppler 2009-10-29
Slackware SSA:2009-302-01 xpdf 2009-10-29

Comments (none posted)

python-markdown2: multiple vulnerabilities

Package(s):python-markdown2 CVE #(s):
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Fedora alert:

Update from to, which fixes some issues, including these two security-related bugs: - [Issue 30] Fix a possible XSS via JavaScript injection in a carefully crafted image reference (usage of double-quotes in the URL). - [Issue 29] Fix security hole in the md5-hashing scheme for handling HTML chunks during processing.

Fedora FEDORA-2009-10329 python-markdown2 2009-10-09
Fedora FEDORA-2009-10377 python-markdown2 2009-10-09

Comments (none posted)

rubygem-actionpack: information leak

Package(s):rubygem-actionpack CVE #(s):CVE-2009-3086
Created:October 26, 2009 Updated:June 15, 2011

From the CVE entry:

A certain algorithm in Ruby on Rails 2.1.0 through 2.2.2, and 2.3.x before 2.3.4, leaks information about the complexity of message-digest signature verification in the cookie store, which might allow remote attackers to forge a digest via multiple attempts.

Debian DSA-2260-1 rails 2011-06-14
Gentoo 200912-02 rails 2009-12-20
SuSE SUSE-SR:2009:017 php5, newt, rubygem-actionpack, rubygem-activesupport, java-1_4_2-ibm, postgresql, samba, phpMyAdmin, viewvc 2009-10-26

Comments (none posted)

sahana: file exposure vulnerability

Package(s):sahana CVE #(s):
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Fedora bug report:

The first issue would allow an attacker to touch/modify any file on the system. Essentially the issue is that get, post, and requests aren't sanitized or unescaped.

Fedora FEDORA-2009-10718 sahana 2009-10-27
Fedora FEDORA-2009-10822 sahana 2009-10-27

Comments (none posted)

slim: current directory exposure in default path

Package(s):slim CVE #(s):
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Fedora bug report:

The SLiM display manager includes the current directory in its default path, which opens up users to trojan attacks and other unexpected behavior. It should be removed from the default config.

Fedora FEDORA-2009-10475 slim 2009-10-14
Fedora FEDORA-2009-10461 slim 2009-10-14

Comments (none posted)

systemtap: multiple DOS vulnerabilities

Package(s):systemtap CVE #(s):CVE-2009-2911
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Fedora bug report:

Multiple denial of service flaws were found in the SystemTap instrumentation system, when the --unprivileged mode was activated:

a, Kernel stack overflow allows local attackers to cause denial of service or execute arbitrary code via a large number of parameters provided to the print* call.

b, Kernel stack frame overflow allows local attackers to cause denial of service via specially-crafted user-provided DWARF information.

c, Absent check(s) for the upper bound of the size of the unwind table and for the upper bound of the size of each of the CIE/CFI records, could allow an attacker to cause a denial of service (infinite loop).

Fedora FEDORA-2009-10719 systemtap 2009-10-27
Fedora FEDORA-2009-10849 systemtap 2009-10-27

Comments (none posted)

viewvc: multiple vulnerabilities

Package(s):viewvc CVE #(s):CVE-2009-3618 CVE-2009-3619
Created:October 26, 2009 Updated:October 28, 2009

From the Tenable advisory:

Update of viewvc to version 1.0.9 fixes a cross-site scripting (XSS) problem and enhances filtering of illegal characters when displaying error messages (CVE-2009-3618, CVE-2009-3619).

SuSE SUSE-SR:2009:017 php5, newt, rubygem-actionpack, rubygem-activesupport, java-1_4_2-ibm, postgresql, samba, phpMyAdmin, viewvc 2009-10-26

Comments (none posted)

wordpress: denial of service

Package(s):wordpress CVE #(s):
Created:October 27, 2009 Updated:October 28, 2009
Description: From the Fedora bug report:

A denial of service (resource exhaustion) flaw was found in the way WordPress handled HTTP headers contained in the "trackback" message sent to WordPress. A local, unprivileged user could send a specially-crafted trackback message to a running instance of WordPress, leading to its crash.

Fedora FEDORA-2009-10793 wordpress 2009-10-27
Fedora FEDORA-2009-10795 wordpress 2009-10-27

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 2.6.32-rc5; no 2.6.32 prepatches have been released over the last week.

The current stable kernel is, released (along with on October 22. The 2.6.27 update is relatively small and focused on SCSI and USB serial devices; the 2.6.31 update, instead, addresses a much wider range of problems.

Comments (none posted)

Quotes of the week

It would be possible for us to rescan the RMRR tables when we take a device out of the si_domain, if we _really_ have to. But I'm going to want a strand of hair from the engineer responsible for that design, for my voodoo doll.
-- David Woodhouse

If a software system is so complex that its quirks and pitfalls cannot easily be located and avoided (witness the ondemand scheduler problem on Pentium IV's message I recently filed) then is it not *effectively* open source. I am qualified to read hardware manuals, I am qualified to rewrite C code (having written code generators for several C compilers) but the LKML is like the windmill and I feel like Don Quixote tilting back and forth in front of it. One could even argue that the lack of an open bug reporting system (and "current state" online reports) effectively makes Linux a non-open-source system. Should not Linux be the one of the first systems to make all knowledge completely available? Or is it doomed to be replaced by systems which might provide such capabilities (Android perhaps???)
-- Robert Bradbury

A real git tree will contain fixes for brown paperbag bugs, it will contain reverts, it will contain the occasional messy changelog. It is also, because it's more real life, far more trustable to pull from. The thing is, nothing improves a workflow more than public embarrassment - but rebasing takes away much of that public embarrassment factor.
-- Ingo Molnar

Comments (3 posted)

A Tokyo moment

The release of Windows 7 happened to coincide with the Japan Linux Symposium in Tokyo. Linus Torvalds was clearly quite impressed - and Chris Schlaeger was there to capture the moment. The original picture is available over here.

See also: Len Brown's photos from the kernel summit and JLS.

Comments (20 posted)


By Jonathan Corbet
October 28, 2009
In-kernel tracing is rapidly becoming a feature that developers and users count on. In current kernels, though, the virtual files used to control tracing and access data are all found in the debugfs filesystem, in the tracing directory. That is not seen as a long-term solution; debugfs is meant for volatile, debugging information, but tracing users want to see a stable ABI in a non-debugging location.

Following up on some conference discussions, Greg Kroah-Hartman decided to regularize the tracing file hierarchy through the creation of a new tracefs virtual filesystem. Tracefs looks a lot like .../debug/tracing in that the files have simply been moved from one location to the other. Tracefs has a simpler internal API, though, since it does not require all of the features supported by debugfs.

The idea of tracefs is universally supported, but this particular patch looks like it will not be going in anytime soon. The concern is that anything moved out of debugfs and into something more stable will instantly become part of the kernel ABI. Much of the current tracing interface has been thrown together to meet immediate needs; the sort of longer-term thinking which is needed to define an interface which can remain stable for years is just beginning to happen.

Ingo Molnar thinks that the virtual files which describe the available events could be exported now, but not much else. That still leaves most of the interface in an unstable state. So Greg has withdrawn the patch for now; expect it to come back when the tracing developers are more ready to commit to their ABI. At that point, we can expect the debate to begin on the truly important question: /tracing or /sys/kernel/tracing?

Comments (1 posted)

Staging drivers out

By Jonathan Corbet
October 28, 2009
The staging tree was conceived as a way for substandard drivers to get into the kernel tree. Recently, though, there has been talk of using staging to ease drivers out as well. The idea is that apparently unused and unloved drivers would be moved to the staging tree, where they will languish for three development cycles. If nobody has stepped up to maintain those drivers during that time, they will be removed from the tree. This idea was discussed at the 2009 Kernel Summit with no serious dissent.

Since then, John Linville has decided to test the system with a series of ancient wireless drivers. These include the "strip" driver ("STRIP is a radio protocol developed for the MosquitoNet project - to send Internet traffic using Metricom radios."), along with the arlan, netwave, and wavelan drivers. Nobody seems to care about this code, and it is unlikely that any users remain. If that is true, then there should be no down side to removing the code.

That hasn't stopped the complaints, though, mostly from people who believe that staging drivers out of the tree is an abuse of the process which may hurt unsuspecting users. It is true that users may have a hard time noticing this change until the drivers are actually gone - though their distributors may drop them before the mainline does. So the potential for an unpleasant surprise is there; mistaken removals are easily reverted, but that is only partially comforting for a user whose system has just broken.

The problem here is that there is no other way to get old code out of the tree. Once upon a time, API changes would cause unmaintained code to fail to compile; after an extended period of brokenness, a driver could be safely removed. Contemporary mores require developers to fix all in-tree users of an API they change, though, so this particular indicator no longer exists. That means the tree can fill up with code which is unused and which has long since ceased to work, but which still compiles flawlessly. Somehow a way needs to be found to remove that code. The "staging out" process may not be perfect, but nobody has posted a better idea yet.

Comments (16 posted)

/proc and directory permissions

By Jake Edge
October 28, 2009

In a discussion of the O_NODE open flag patch, an interesting, though obscure, security hole came to light. Jamie Lokier noticed the problem, and Pavel Machek eventually posted it to the Bugtraq security mailing list.

Normally, one would expect that a file in a directory with 700 permissions would be inaccessible to all but the owner of the directory (and root, of course). Lokier and Machek showed that there is a way around that restriction by using an entry in an attacking process's fd directory in the /proc filesystem.

If the directory is open to the attacker at some time, while the file is present, the attacker can open the file for reading and hold it open even if the victim changes the directory permissions. Any normal write to the open file descriptor will fail because it was opened read-only, but writing to /proc/$$/fd/N, where N is the open file descriptor number, will succeed based on the permissions of the file. If the file allows the attacking process to write to it, writing to the /proc file will succeed regardless of the permissions of the parent directory. This is rather counter-intuitive, and, even though it is a rather contrived example, seems to constitute a security hole.
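The behavior is straightforward to demonstrate. The sketch below is a minimal illustration of the mechanism described (not Machek's actual proof of concept): it opens a file read-only, then re-opens it writably through the magic /proc/self/fd symlink; the second open is checked against the file's own permissions rather than the parent directory's.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Re-open a read-only descriptor writably via /proc (Linux only).
 * Returns 0 on success, -1 on failure. */
static int write_via_proc_fd(const char *path, const char *msg)
{
	int rofd = open(path, O_RDONLY);	/* victim file, read-only */
	if (rofd < 0)
		return -1;

	char procpath[64];
	snprintf(procpath, sizeof(procpath), "/proc/self/fd/%d", rofd);

	/* The permission check here is against the file's own mode,
	 * not the (possibly now-restricted) parent directory. */
	int wfd = open(procpath, O_WRONLY | O_TRUNC);
	close(rofd);
	if (wfd < 0)
		return -1;

	ssize_t n = write(wfd, msg, strlen(msg));
	close(wfd);
	return n == (ssize_t)strlen(msg) ? 0 : -1;
}
```

Even after the victim changes the containing directory to mode 700, an attacker who already holds the read-only descriptor can keep writing to a world-writable file this way.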

The Bugtraq thread got off course quickly by noting that a similar effect could be achieved by creating a hardlink to the file before the directory permissions were changed. While that is true, Machek's example looked for that case by checking the link count on the file after the directory permissions had been changed. The hardlink scenario would be detected at that point.

One can imagine situations where programs do not put the right permissions on the files they use and administrators attempt to work around that problem by restricting access to the parent directory. Using this technique, an attacker could still access those files, in a way that was difficult to detect. As Machek noted, unmounting the /proc filesystem removes the problem, but "I do not think mounting /proc should change access control semantics."

There is currently some discussion of how, and to some extent whether, to address the problem, but a consensus (and patch) has not yet emerged.

Kernel development news

JLS2009: Generic receive offload

By Jonathan Corbet
October 27, 2009
Your editor still remembers installing his first Ethernet adapter. Through the expenditure of massive engineering resources, DEC was able to squeeze this device onto a linked pair of UNIBUS boards - the better part of a square meter of board space in total - so that a VAX system could be put onto a modern network. Supporting 10Mb/sec was a bit of a challenge in those days. In the intervening years, leading-edge network adapters have sped up to 10Gb/sec - a full three orders of magnitude. Supporting them is still a challenge, though for different reasons. At the 2009 Japan Linux Symposium, Herbert Xu discussed those challenges and how Linux has evolved to meet them.

Part of the problem is that 10G Ethernet is still Ethernet underneath. There is value in that; it minimizes the changes required in other parts of the system. But it's an old technology which brings some heavy baggage with it, with the heaviest bag of all being the 1500-byte maximum transfer unit (MTU) limit. With packet size capped at 1500 bytes, a 10G network link running at full speed will be transferring over 800,000 packets per second. Again, that's an increase of three orders of magnitude from the 10Mb days, but CPUs have not kept pace. So the amount of CPU time available to process a single Ethernet packet is less than it was in the early days. Needless to say, that is putting some pressure on the networking subsystem; the amount of CPU time required to process each packet must be squeezed wherever possible.
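
The arithmetic behind those numbers is straightforward; the 3GHz figure below is an illustrative assumption, and framing overhead is ignored:

```python
# Back-of-the-envelope packet rate for a saturated 10Gb/s link
# carrying 1500-byte frames.
link_bps = 10 * 10**9          # 10Gb/s
mtu_bits = 1500 * 8            # bits per maximum-sized packet

pps = link_bps / mtu_bits
print(f"{pps:,.0f} packets/second")

# The per-packet CPU budget on a single (assumed) 3GHz core:
cycles = 3 * 10**9 / pps
print(f"~{cycles:,.0f} CPU cycles per packet")
```

That budget has to cover interrupt handling, protocol processing, and data copying, which is why per-packet overhead matters so much.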

(Some may quibble that, while individual CPU speeds have not kept pace, the number of cores has grown to make up the difference. That is true, but the focus of Herbert's talk was single-CPU performance for a couple of reasons: any performance work must benefit uniprocessor systems, and distributing a single adapter's work across multiple CPUs has its own challenges.)

Given the importance of per-packet overhead, one might well ask whether it makes sense to raise the MTU. That can be done; the "jumbo frames" mechanism can handle packets up to 9KB in size. The problem, according to Herbert, is that "the Internet happened." Most connections of interest go across the Internet, and those are all bound by the lowest MTU in the entire path. Sometimes that MTU is even less than 1500 bytes. Protocol-based mechanisms for finding out what that MTU is exist, but they don't work well on the Internet; in particular, a lot of firewall setups break them. So, while jumbo frames might work well for local networks, the sad fact is that we're stuck with 1500 bytes on the wider Internet.

If we can't use a larger MTU, we can go for the next-best thing: pretend that we're using a larger MTU. For a few years now Linux has supported network adapters which perform "TCP segmentation offload," or TSO. With a TSO-capable adapter, the kernel can prepare much larger packets (64KB, say) for outgoing data; the adapter will then re-segment the data into smaller packets as the data hits the wire. That cuts the kernel's per-packet overhead by a factor of 40. TSO is well supported in Linux; for systems which are engaged mainly in the sending of data, it's sufficient to make 10G work at full speed.
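
The factor-of-40 claim is simple division: one 64KB super-packet covers roughly 43 wire-sized frames, so the kernel's per-packet path runs once instead of about forty times:

```python
# How many wire frames fit inside one TSO super-packet.
super_packet = 64 * 1024   # bytes handed to the adapter by the kernel
mtu = 1500                 # bytes per frame on the wire

frames = super_packet / mtu
print(f"one 64KB packet covers ~{frames:.1f} wire frames")
```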

The kernel actually has a generic segmentation offload mechanism (called GSO) which is not limited to TCP. It turns out that performance improves even if the feature is emulated in the driver. But GSO only works for data transmission, not reception. That limitation is entirely fine for broad classes of users; sites providing content to the net, for example, send far more data than they receive. But other sites have different workloads, and, for them, packet reception overhead is just as important as transmission overhead.

Solutions on the receive side have been a little slower in coming, and not just because the first users were more interested in transmission performance. Optimizing the receive side is harder because packet reception is, in general, harder. When it is transmitting data, the kernel is in complete control and able to throttle sending processes if necessary. But incoming packets are entirely asynchronous events, under somebody else's control, and the kernel just has to cope with what it gets.

Still, a solution has emerged in the form of "large receive offload" (LRO), which takes a very similar approach: incoming packets are merged at reception time so that the operating system sees far fewer of them. This merging can be done either in the driver or in the hardware; even LRO emulation in the driver has performance benefits. LRO is widely supported by 10G drivers under Linux.

But LRO is a bit of a flawed solution, according to Herbert; the real problem is that it "merges everything in sight." This transformation is lossy; if there are important differences between the headers in incoming packets, those differences will be lost. And that breaks things. If a system is serving as a router, it really should not be changing the headers on packets as they pass through. LRO can totally break satellite-based connections, where some very strange header tricks are done by providers to make the whole thing work. And bridging breaks, which is a serious problem: most virtualization setups use a virtual network bridge between the host and its clients. One might simply avoid using LRO in such situations, but these also tend to be the workloads that one really wants to optimize. Virtualized networking, in particular, is already slower; any possible optimization in this area is much needed.

The solution is generic receive offload (GRO). In GRO, the criteria for which packets can be merged are greatly restricted; the MAC headers must be identical and only a few TCP or IP headers can differ. In fact, the set of headers which can differ is severely restricted: checksums are necessarily different, and the IP ID field is allowed to increment. Even the TCP timestamps must be identical, which is less of a restriction than it may seem; the timestamp is a relatively low-resolution field, so it's not uncommon for lots of packets to have the same timestamp. As a result of these restrictions, merged packets can be resegmented losslessly; as an added benefit, the GSO code can be used to perform resegmentation.
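
As an illustration only (this is not the kernel's code, and the field names are invented), the merge-eligibility test amounts to something like:

```python
# Sketch of GRO's restrictive merge test; packets are modeled as dicts.
def gro_mergeable(prev, nxt):
    # MAC headers must be identical.
    if prev["mac"] != nxt["mac"]:
        return False
    # TCP timestamps must be identical too.
    if prev["tcp_tsval"] != nxt["tcp_tsval"]:
        return False
    # The IP ID field may increment by exactly one...
    if nxt["ip_id"] != prev["ip_id"] + 1:
        return False
    # ...while checksums and sequence numbers necessarily differ, so
    # they are not compared.  Anything else that differs blocks the
    # merge, which is what makes lossless resegmentation possible.
    return True

a = {"mac": "aa:bb", "ip_id": 100, "tcp_tsval": 5}
b = {"mac": "aa:bb", "ip_id": 101, "tcp_tsval": 5}
c = {"mac": "aa:bb", "ip_id": 103, "tcp_tsval": 5}
print(gro_mergeable(a, b))   # consecutive IDs: mergeable
print(gro_mergeable(b, c))   # ID jumped: not mergeable
```

LRO, by contrast, would have merged both pairs and thrown the header differences away.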

One other nice thing about GRO is that, unlike LRO, it is not limited to TCP/IPv4.

The GRO code was merged for 2.6.29, and it is supported by a number of 10G drivers. The conversion of drivers to GRO is quite simple. The biggest problem, perhaps, is with new drivers which are written to use the LRO API instead. To head this off, the LRO API may eventually be removed, once the networking developers are convinced that GRO is fully functional with no remaining performance regressions.

In response to questions, Herbert said that there has not been a lot of effort toward using LRO in 1G drivers. In general, current CPUs can keep up with a 1G data stream without too much trouble. There might be a benefit, though, in embedded systems which typically have slower processors. How does the kernel decide how long to wait for incoming packets before merging them? It turns out that there is no real need for any special waiting code: the NAPI API already has the driver polling for new packets occasionally and processing them in batches. GRO can simply be performed at NAPI poll time.

The next step may be toward "generic flow-based merging"; it may also be possible to start merging unrelated packets headed to the same destination to make larger routing units. UDP merging is on the list of things to do. There may even be a benefit in merging TCP ACK packets. Those packets are small, but there are a lot of them - typically one for every two data packets going the other direction. This technology may go in surprising directions, but one thing is clear: the networking developers are not short of ideas for enabling Linux to keep up with ever-faster hardware.

JLS2009: A Btrfs update

By Jonathan Corbet
October 27, 2009
Conferences can be a good opportunity to catch up with the state of ongoing projects. Even a detailed reading of the relevant mailing lists will not always shed light on what the developers are planning to do next, but a public presentation can inspire them to set out what they have in mind. Chris Mason's Btrfs talk at the Japan Linux Symposium was a good example of such a talk.

The Btrfs filesystem was merged for the 2.6.29 kernel, mostly as a way to encourage wider testing and development. It is certainly not meant for production use at this time. That said, there are people doing serious work on top of Btrfs; it is getting to where it is stable enough for daring users. Current Btrfs includes an all-caps warning in the Kconfig file stating that the disk format has not yet been stabilized; Chris is planning to remove that warning, perhaps for the 2.6.33 release. Btrfs, in other words, is progressing quickly.

One relatively recent addition is full use of zlib compression. Online resizing and defragmentation are coming along nicely. There has also been some work aimed at making synchronous I/O operations work well.

Defragmentation in Btrfs is easy: any specific file can be defragmented by simply reading it and writing it back. Since Btrfs is a copy-on-write filesystem, this rewrite will create a new copy of the file's data which will be as contiguous as the filesystem is able to make it. This approach can also be used to control the layout of files on the filesystem. As an experiment, Chris took a bunch of boot-tracing data from a Moblin system and analyzed it to figure out which files were accessed, and in which order. He then rewrote the files in question to put them all in the same part of the disk. The result was a halving of the I/O time during boot, resulting in a faster system initialization and smiles all around.
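
A minimal sketch of that rewrite trick (the temporary-file suffix is invented; on a copy-on-write filesystem the rewritten copy gets freshly allocated, and thus more contiguous, extents):

```python
import os

def rewrite_in_place(path):
    """Read a file and write it back whole, letting a copy-on-write
    filesystem such as Btrfs lay the new copy out contiguously."""
    with open(path, "rb") as f:
        data = f.read()
    tmp = path + ".defrag"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # make sure the new copy is on disk
    os.rename(tmp, path)          # atomically replace the original
```

On a conventional update-in-place filesystem this accomplishes little; it is the copy-on-write allocation that makes the rewrite useful.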

Performance of synchronous operations has been an important issue over the last year. On filesystems like ext3, an fsync() call will flush out a lot of data which is not related to the actual file involved; that adds a significant performance penalty for fsync() use and discourages careful programming. Btrfs has improved the situation by creating an entirely separate Btree on each filesystem which is used for synchronous I/O operations. That tree is managed identically to, but separately from, the regular filesystem tree. When an fsync() call comes along, Btrfs can use this tree to only force out operations for the specific file involved. That gives a major performance win over ext3 and ext4.

A further improvement would be the ability to write a set of files, then flush them all out in a single operation. Btrfs could do that, but there's no way in POSIX to tell the kernel to flush multiple files at once. Fixing that is likely to involve a new system call.
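
Until such a system call exists, applications are left flushing each file individually; a sketch of that workaround:

```python
import os

def flush_files(paths):
    """Today's workaround: fsync() each file in turn.  On Btrfs, each
    call forces out only that file's operations; a future multi-file
    flush system call could do the whole set in one pass."""
    for path in paths:
        fd = os.open(path, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
```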

Btrfs provides a number of features which are also available via the device mapper and MD subsystems; some people have wondered if this duplication of features makes sense. But there are some good reasons for it; Chris gave a couple of examples:

  • Doing snapshots at the device mapper/LVM layer involves making a lot more copies of the relevant data. Chris ran an experiment where he created a 400MB file, created a bunch of snapshots, then overwrote the file. Btrfs is able to just write the new version, while allowing all of the snapshots to share the old copy. LVM, instead, copies the data once for each snapshot. So this test, which ran in less than two seconds on Btrfs, took about ten minutes with LVM.

  • Anybody who has had to replace a drive in a RAID array knows that the rebuild process can be long and painful. While all of that data is being copied, the array runs slowly and does not provide the usual protections. The advantage of running RAID within Btrfs is that the filesystem knows which blocks contain useful data and which do not. So, while an MD-based RAID array must copy an entire drive's worth of data, Btrfs can get by without copying unused blocks.
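
The rough accounting behind the snapshot example explains the gap; the snapshot count below is an assumption, since the exact number was not given:

```python
# Data written when overwriting a 400MB file that has N snapshots.
file_mb = 400
snapshots = 10                    # assumed; "a bunch" in the talk

btrfs_written = file_mb           # new version written once; every
                                  # snapshot shares the old copy
lvm_written = file_mb * snapshots # old data copied once per snapshot

print(btrfs_written, "MB written by Btrfs vs",
      lvm_written, "MB copied by LVM")
```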

So what does the future hold? Chris says that the 2.6.32 kernel will include a version of Btrfs which is stable enough for early adopters to play with. In 2.6.33, with any luck, the filesystem will have RAID4 and RAID5 support. Things will then stabilize further for 2.6.34. Chris was typically cagey when talking about production use, though, pointing out that it always takes a number of years to develop complete confidence in a new filesystem. So, while those of us with curiosity, courage, and good backups could maybe be making regular use of Btrfs within a year, widespread adoption is likely to be rather farther away than that.

Transparent hugepages

By Jonathan Corbet
October 28, 2009
Most Linux systems divide memory into 4096-byte pages; for the bulk of the memory management code, that is the smallest unit of memory which can be manipulated. 4KB is an increase over what early virtual memory systems used; 512 bytes was once common. But it is still small relative to both the amount of physical memory available on contemporary systems and the working set size of applications running on those systems. That means that the operating system has more pages to manage than it did some years back.

Most current processors can work with pages larger than 4KB. There are advantages to using larger pages: the size of page tables decreases, as does the number of page faults required to get an application into RAM. There is also a significant performance advantage that derives from the fact that large pages require fewer translation lookaside buffer (TLB) slots. These slots are a highly contended resource on most systems; reducing TLB misses can improve performance considerably for a number of large-memory workloads.
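
A quick calculation shows the page-table savings for a single large mapping:

```python
# Page-table entries needed to map 1GiB of memory.
region = 1 << 30                  # 1GiB

small = region // (4 * 1024)      # with 4KiB pages
huge = region // (2 * 1024**2)    # with 2MiB pages

print(f"{small:,} entries with 4KiB pages")
print(f"{huge:,} entries with 2MiB pages")
```

Fewer entries means smaller page tables and, crucially, far fewer TLB slots needed to cover the same working set.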

There are also disadvantages to using larger pages. The amount of wasted memory will increase as a result of internal fragmentation; extra data dragged around with sparsely-accessed memory can also be costly. Larger pages take longer to transfer from secondary storage, increasing page fault latency (while decreasing page fault counts). The time required to simply clear very large pages can create significant kernel latencies. For all of these reasons, operating systems have generally stuck to smaller pages. Besides, having a single, small page size simply works and has the benefit of many years of experience.

There are exceptions, though. The mapping of kernel virtual memory is done with huge pages. And, for user space, there is "hugetlbfs," which can be used to create and use large pages for anonymous data. Hugetlbfs was added to satisfy an immediate need felt by large database management systems, which use large memory arrays. It is narrowly aimed at a small number of use cases, and comes with significant limitations: huge pages must be reserved ahead of time, cannot transparently fall back to smaller pages, are locked into memory, and must be set up via a special API. That worked well as long as the only user was a certain proprietary database manager. But there is increasing interest in using large pages elsewhere; virtualization, in particular, seems to be creating a new set of demands for this feature.

A host setting up memory ranges for virtualized guests would like to be able to use large pages for that purpose. But if large pages are not available, the system should simply fall back to using lots of smaller pages. It should be possible to swap large pages when needed. And the virtualized guest should not need to know anything about the use of large pages by the host. In other words, it would be nice if the Linux memory management code handled large pages just like normal pages. But that is not how things happen now; hugetlbfs is, for all practical purposes, a separate, parallel memory management subsystem.

Andrea Arcangeli has posted a transparent hugepage patch which attempts to remedy this situation by removing the disconnect between large pages and the regular Linux virtual memory subsystem. His goals are fairly ambitious: he would like an application to be able to request large pages with a simple madvise() system call. If large pages are available, the system will provide them to the application in response to page faults; if not, smaller pages will be used.
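
On kernels where this interface eventually landed, the request looks roughly like the ctypes sketch below; the MADV_HUGEPAGE value is Linux-specific, and kernels without the feature simply return an error which applications can ignore:

```python
import ctypes
import ctypes.util
import mmap

# Ask the kernel (via madvise) to back an anonymous region with huge
# pages if it can, falling back to ordinary pages otherwise.
MADV_HUGEPAGE = 14                           # Linux value
length = 4 * 1024 * 1024                     # 4MiB: 2MiB-page friendly

buf = mmap.mmap(-1, length)                  # anonymous mapping
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

libc = ctypes.CDLL(ctypes.util.find_library("c") or None,
                   use_errno=True)
ret = libc.madvise(ctypes.c_void_p(addr), ctypes.c_size_t(length),
                   MADV_HUGEPAGE)
print("madvise returned", ret)               # 0 on success, -1 otherwise

buf[:5] = b"hello"                           # region is usable either way
```

The key property is exactly the transparency described above: the application's view of the memory is identical whether or not huge pages were actually granted.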

Beyond that, the patch makes large pages swappable. That is not as easy as it sounds; the swap subsystem is not currently able to deal with memory in anything other than PAGE_SIZE units. So swapping out a large page requires splitting it into its component parts first. This feature works, but not everybody agrees that it's worthwhile. Christoph Lameter commented that workloads which are performance-sensitive go out of their way to avoid swapping anyway, but that may become less true on a host filling up with virtualized guests.

A future feature is transparent reassembly of large pages. If such a page has been split (or simply could not be allocated in the first place), the application will have a number of smaller pages scattered in memory. Should a large page become available, it would be nice if the memory management code would notice and migrate those small pages into one large page. This could, potentially, even happen for applications which have never requested large pages at all; the kernel would just provide them by default whenever it seemed to make sense. That would make large pages truly transparent and, perhaps, decrease system memory fragmentation at the same time.

This is an ambitious patch to the core of the Linux kernel, so it is perhaps amusing that the chief complaint seems to be that it does not go far enough. Modern x86 processors can support a number of page sizes, up to a massive 1GB. Andrea's patch is currently aiming for the use of 2MB pages, though - quite a bit smaller. The reasoning is simple: 1GB pages are an unwieldy unit of memory to work with. No Linux system that has been running for any period of time will have that much contiguous memory lying around, and the latency involved with operations like clearing pages would be severe. But Andi Kleen thinks this approach is short-sighted; today's massive chunk of memory is tomorrow's brief email. Andi would rather that the system not be designed around today's limitations; for the moment, no agreement has been reached on that point.

In any case, this patch is an early RFC; it's not headed toward the mainline in the near future. It's clearly something that Linux needs, though; making full use of the processor's capabilities requires treating large pages as first-class memory-management objects. Eventually we should all be using large pages - though we may not know it.

Page editor: Jonathan Corbet


News and Editorials

What is Fedora?

By Jake Edge
October 28, 2009

We briefly looked in on the discussion on defining the Fedora project a few weeks back. Since that time, there has been more discussion—not surprising—but also a bit more clarity on exactly what needs to be defined. While it may seem like an unnecessary, abstract exercise to some, it is clear from the discussion that there are some in the community who are directly impacted by the lack of a good shared vision of "what is Fedora?", or, perhaps more accurately: "who are Fedora's target users?".

There are a number of issues swirling around in the threads on the fedora-advisory-board mailing list. In general, there is dissatisfaction among users of Fedora, even highly technical users, because of the rapid, often lightly tested upgrades that are part and parcel of the Fedora experience. Fedora has a commitment to providing "leading edge" software to its users but, to many users, leading edge does not equate to non-functional or hard to use. Unfortunately, that is what Fedora is delivering too much of the time.

As an example of technical users who have moved away from Fedora, Máirín Duffy quotes a user who contacted her off-list. The user has multiple clients, most of whom are quite technical as well, but have moved from Fedora to other distributions over the last two years or so. Upgrade instability is a major reason:

One particular quote she gave me that I'd like to share:

"Fedora boasts of an "innovation" target audience but is falling down in the two areas real world (excepting perhaps games and CGI) high-innovation users demand: stable upgrades and consistent usability. I believe if your group can wrestle these back under control the distro numbers would increase dramatically."

In summary, having technical users as a target isn't a good excuse for instability and complexity.

But, there is a tension between the goal of providing the "latest and greatest" and the goal of providing something that is consistently usable. Seth Vidal sums it up this way: "And this is the crux of our problem: fedora is for latest leading-edge pkgs. It's not easy or reasonable to have the latest of things AND have a stable interface for them." The sense from the discussion, though, is that Fedora may have gone too far in the "bleeding edge" direction and that being a bit more cautious with which software versions are delivered is warranted. Bill Nottingham sees the need for a balance:

We want to present the newest innovations to users, but not so new that they don't work. And we want to be focused on making it just work, so they don't have to run 500 arcane commands, cut and paste config snippets from the web, or jump through other hoops just to use that innovation. Nor do we want to be pushing new innovation to them so fast that they can't keep up with it, or find that their way of doing things changes from week to week during a release.

Mike McGrath brought up a subject that was clearly an undercurrent in the discussion, which he described as "the elephant in the room": Ubuntu. There is a sense that Fedora users, and potential users, are moving to, or starting out with, Ubuntu. There are good reasons for that, he said:

The problem? They are KILLING us. I'm not talking about market share, I'm talking about my recent converts from Fedora to Ubuntu. I haven't had to do a single thing to my wife's computer since I put Ubuntu on there except setup my printer. With Fedora I was on it almost daily.

Targeting new users is quite different from targeting new technology, though. There is a real question whether Fedora can do both. There are lessons to be learned from Ubuntu, however, as William Jon McCann points out:

Might be worth considering how Ubuntu was largely borne out of the failures of Fedora. What are they doing right? What are we doing wrong? How can we improve? There is very little time to continue to be defensive. It is time to confront the brutal facts - we're losing (badly).

Duffy finds something of a middle ground:

We don't need to target Ubuntu's user base in order to produce something excellent, something polished, something that is delightful to use and makes people's lives easier, something that impresses them such that they care about how it was made.

There is a fairly clear split in the Fedora community about where to focus the project's efforts. Some would like to see Fedora make the effort to stabilize to the point where attracting new, non-technical users would be possible; others see that as largely impossible while upholding the "innovation" that has been the hallmark of the distribution.

That split makes life difficult when folks try to determine a direction to take or how to prioritize their work. Duffy, who does much of the design work for Fedora, describes the split and its effect on her work:

The 2 views as I would summarize them are:

- Fedora is a beautiful, usable desktop for everyone (or at least, we're getting there.) Pandas are okay! We're ready to push to the masses.

- Fedora is a menagerie of equal spins for highly-technical folks and FOSS developers. Don't you dare insult our intelligence with pandas. Go back to Sesame street.

[...] The main issue from a design perspective is that if no target is defined, then the target becomes 'everybody' - and I personally feel it's impossible to make a top-notch, beautiful design when trying to please everybody.

Even determining the target user doesn't solve the underlying problems with stability, though, as Christopher Aillon points out:

If we want to target Fedora for any class of user, we need to think and act for the user. Right now, we're clearly not even acting for the people that do use our distribution. I think we should fix that before we can even begin to define what our target user should be.

The discussion, and the perceived need for a more stable system, led McGrath to make a "Desktop proposal". In it, he outlines the problems along with some potential solutions. As part of that, he would like to see a new mission added to the "Fedora Mission": "Produce a usable, general purpose desktop operating system".

Putting "desktop", or even "operating system", into the mission didn't sit well with some, but the ideas in McGrath's proposal were largely met with approval. In many ways, he captured some of the thoughts that had been floating around in the threads. One problem that McGrath mentioned might be helped by Jesse Keating's idea for "No Frozen Rawhide" (as it has come to be called):

I plan to make rawhide more unstable more of the time, and I plan to make "rawhide" more stable more of the time. Crazy eh? How can I do this? By splitting "rawhide" in two.

The Fedora board took up the question of defining target users for Fedora in its October 22 meeting. Project leader Paul Frields reported on the meeting at some length, noting that the No Frozen Rawhide (or "unfrozen rawhide") proposal was looked at favorably. There was also discussion of how to ensure that updates are smoother for users. But the main point that came out of the meeting was a preliminary definition of Fedora's target users:

We found four defining characteristics that we believe best describe the Fedora distribution's target audience: Someone who (1) is voluntarily switching to Linux, (2) is familiar with computers, but is not necessarily a hacker or developer, (3) is likely to collaborate in some fashion when something's wrong with Fedora, and (4) wants to use Fedora for general productivity, either using desktop applications or a Web browser.

Much of what the board discussed will also be hashed out face-to-face at the Fedora Users and Developers Conference (FUDCon) in Toronto in early December.

The Fedora project is at a bit of a crossroads right now, but the project seems to be taking the right steps to determine which direction to take. Unlike other distributions, Fedora tends to have these conversations in public, which allows others to observe and learn from the process. While that may make some uncomfortable, it should make for a healthier community overall. In the end, community is really what Fedora is striving for, and an OS is just a means to that end.

New Releases

Announcing the release candidate for Ubuntu 9.10

The release candidate for Ubuntu 9.10 has been announced. "The Ubuntu team is pleased to announce the Release Candidate for Ubuntu 9.10 Desktop and Server editions, Ubuntu 9.10 Server for UEC and EC2, and the Ubuntu Netbook Remix. Codenamed "Karmic Koala", 9.10 continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. We consider this release candidate to be complete, stable, and suitable for testing by any user."

Distribution News


Fedora's target audience

As part of the Fedora project's continuing effort to figure out what it is really trying to do, the Fedora board has now come up with a definition of what it thinks the project's target audience is: "Someone who (1) is voluntarily switching to Linux, (2) is familiar with computers, but is not necessarily a hacker or developer, (3) is likely to collaborate in some fashion when something's wrong with Fedora, and (4) wants to use Fedora for general productivity, either using desktop applications or a Web browser." The plan is to use this definition to focus efforts, while, hopefully, not restricting developments which have appeal beyond this audience.

Open-Source ATI R600/700 3D Support In Fedora 12 (Phoronix)

Phoronix reports on open source ATI R600/700 3D support under Fedora 12. Using an experimental version of the Mesa drivers, which are available in the F12 repositories, Phoronix tried Compiz as well as several 3D games, reporting on the stability and rendering along with some screen shots. "First off, we would like to note that the ATI kernel mode-setting support by default in Fedora 12 has been working quite well from our testing. Even when using a dual-link DVI monitor running at 2560 x 1600, KMS has worked and properly mode-set to the right resolution. With a variety of hardware and different monitors, it has all worked quite well from this beta installation. When installing the mesa-dri-drivers-experimental package, upon rebooting we were able to immediately enable Compiz support without any problems. Compiz was running well with no visual defects and the performance was suitable for the Linux desktop."

SUSE Linux and openSUSE

Announcing the Second openSUSE Board Election

An election for openSUSE board members has been announced. There are three seats that need to be filled. openSUSE members have until November 23rd to stand for election, and the voting will be held December 8-22. "This means that as of this year's election the openSUSE Board will be made up of equal numbers of Novell and non-Novell employees, 2 seats+Chairperson and 3 seats respectively. Candidates for this election will be voted in for a two (2) year term, ensuring that there is continuity within the Board."

Maintenance of the upcoming openSUSE 11.2

openSUSE has announced that, starting with openSUSE 11.2, decisions about distribution updates will no longer be done in the background. Instead, a new maintenance team consisting of two Novell employees and three community members will oversee the process, but community members will be able to influence the update decisions. "This team will decide over the requests and coordinates the whole updates progress (plan the release time according to the severity, interact with the package maintainer, coordinate QA testing, ...) based on a new update policy. It guarantees the best supply with updates. [...] Only maintenance (tagged as recommended, optional, YOU) updates are affected by this change. Security updates will be provided on the old and approved way by the SUSE security team. This is the fastest and established way to react on security problems."

Ubuntu family

Minutes from the Ubuntu Technical Board meeting, 2009-10-20

Minutes from the meeting of the Ubuntu Technical Board on October 20 are now available. Topics covered include a review of action items from previous meetings, the Developer Membership Board, Units policy, and EC2 image updates: "Discussion of Scott Moser's draft proposal for providing updated EC2 kernel (AKI), ramdisk (ARI) and filesystem (AMI) images on a regular basis throughout the cycle. [...] It was agreed that an update to the kernel requires all three images to be updated, and that an update to either the ramdisk or filesystem needs those two images to be updated."

Full Story (comments: none)

Distribution Newsletters

CentOS Pulse #0906

The October 22 issue of CentOS Pulse is available. It covers the release of CentOS 5.4, a Linux hacker diary, an interview with CentOS developer Tim Verhoeven, a review of the cPanel conference, and more.

Comments (none posted)

DistroWatch Weekly, Issue 326

The DistroWatch Weekly for October 26, 2009 is out. "Ah, the excitement of an Ubuntu release! Yes, "Karmic Koala", the distribution's 11th official version will hit the undoubtedly crowded download servers later this week amid the excitement of those who enjoy the popular operating system -- and also to the annoyance of some of the more vocal anti-Ubuntu crowds on Linux blogs and forums. But Ubuntu is not the only Linux distribution that gets attention in this week's DistroWatch Weekly. Our lead article is a review of GNOME SlackBuild for Slackware Linux, a third-party effort to provide quality GNOME packages for the oldest surviving Linux distro. In the news section, Mandriva finally updates the artwork in preparation for the upcoming stable release, openSUSE brings a number of interesting features to challenge the competition, and Funtoo hints at a possible new life as a "fork" of Gentoo Linux. Also not to be missed, an amusing and frightening analysis of a web site that charges US$125 to download Mozilla Firefox. Finally, check out the new section of DistroWatch Weekly where Jesse Smith attempts to answer some of the questions that our readers regularly post in the comments section. Happy reading!"

Comments (none posted)

Fedora Weekly News 199

Fedora Weekly News for the week ending October 25, 2009 is out. "Our issue kicks off this week with news from the Fedora Planet community of Fedora developers and users, including thoughts on PHP security, a new tool, rpmguard, continued work on libguestfs, and a great Fedora 12 beta roundup. From Ambassadors we have an event report on ABLEConf in Phoenix, Arizona. Much goodness from the Quality Assurance beat, with updates on this past week's two Test days, detailed weekly meetings notes, and various Fedora 12 beta-related activities. In news from Fedora's Translation team, updates on milestone for Fedora 12 translation tasks, new contributors of a couple Fedora Localization Project language teams, and details on the next FLSCo election. In Art/Design news, some icon emblem work, Fedora 12 final wallpaper polish, and details on post-beta F12 desktop look changes. Security Advisories brings us up to date on a couple security releases for Fedora 10 and 11. Our issue rounds out with the always-interesting Virtualization beat, with discussion on paravirtualization and KVMs in Fedora, installing Virtio drivers in Windows XP, and details on Fedora 12's kernel samepage merging (KSM) feature. We hope you enjoy FWN 199!"

Full Story (comments: none)

OpenSUSE Weekly News/94

This issue of openSUSE Weekly News looks at Network World podcasts with Joe "Zonker" Brockmeier, an update from the openSUSE Boosters, wrong usage of LD_LIBRARY_PATH, a Kernel Log on what's coming in 2.6.32, and more.

Comments (none posted)

Ubuntu Weekly Newsletter #165

The Ubuntu Weekly Newsletter for October 24, 2009 is out. "In this issue we cover: Release Candidate for Ubuntu 9.10 now available, October 21st America's Membership Board Meeting, Ubuntu IRC Council Elections, Keeping Ubuntu CD's Available, LoCo News, Launchpad: The next six months, Meet Matthew Revell, Launchpad offline 4:00UTC - 4:30UTC October 26th, The Planet, TurnKey: 40 Ubuntu-based virtual appliances released into the cloud, and much, much more!"

Full Story (comments: none)

Distribution reviews

Ars takes a first look under the hood of Fedora 12 (ars technica)

Over at ars technica, there is a brief review of the Fedora 12 beta. It looks specifically at virtualization features and PackageKit, but also makes mention of power management, a SystemTap-based tool called "scomes", and Moblin: "A special Moblin spin will be introduced with Fedora 12. This will allow users to install a complete Fedora installation with Intel's custom Moblin user experience. Upstream Moblin is already based on Fedora, so there is a lot of synergy between the two projects. The Fedora 12 Moblin spin isn't available yet, but users who want to get an early look can optionally install the Moblin environment in the desktop version of the Fedora 12 beta."

Comments (none posted)

Sneak Peeks at openSUSE 11.2: KDE 4.3 Experience, with Lubos Lunak

openSUSE News has a look at the upcoming 11.2 openSUSE release. It focuses on the KDE 4.3 experience in the release, and interviews KDE hacker Lubos Lunak. "There were attempts at making Qt ports of Firefox in the past, but as far as I know there has never been one that would be really usable (and with the advances of WebKit and the fact that it's shipping with Qt I don't see that happening in the future). The reason for why we could achieve something in a few days that has been missing for years is down to the fact that I aimed pretty low — this is not a port of Firefox, but it's the same Gtk-based version of Firefox, with 'if running in KDE, call this small helper app' code inserted in desktop-specific places doing most of the job. Even with this approach I think Firefox now integrates into KDE reasonably well."

Comments (none posted)

Page editor: Rebecca Sobol


FatELF: universal binaries for Linux

October 28, 2009

This article was contributed by Koen Vervloesem

One interesting feature of Mac OS X is the concept of a Universal Binary, a single binary file that runs natively on both PowerPC and Intel platforms. Professional game porter Ryan Gordon got sick of Mac developers pointing out that Linux doesn't have anything like that, so he did something about it and wrote FatELF. FatELF brings the idea of single binaries supporting multiple architectures to Linux.

Universal binaries in Mac OS X

Apple introduced the Universal Binary file format in 2005 to ease the transition of the Mac platform from the PowerPC architecture to the Intel architecture. The solution was to include both PowerPC and x86 versions of an application in one "fat binary". If a universal binary is run by Mac OS X, the operating system executes the appropriate section depending on the architecture in use. The big advantage was that Mac developers could distribute one executable of their software, so that end-users wouldn't have to worry about which version to download. Later, Apple went even further and allowed four-architecture binaries: 32 and 64 bit for both Intel and PowerPC.

This was not the first time Apple performed such a trick: in 1994 the company transitioned from Motorola 68k processors to PowerPC and introduced a "fat binary" which included executable code for both platforms. Moreover, NeXTSTEP, the predecessor of Mac OS X, had a fat binary file format (called "Multi-Architecture Binaries") which supported Motorola 68k, Intel x86, Sun SPARC, and HP PA-RISC. So Apple knew what needed to be done when they chose Intel as their new Mac platform. In fact, the Universal Binary format in Mac OS X is essentially the same as NeXTSTEP's Multi-Architecture Binaries. This was possible because Apple uses NeXTSTEP's Mach-O as the native object file format in Mac OS X.
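The selection mechanics are simple because the fat header itself is simple: Apple's layout (documented in <mach-o/fat.h>) is a big-endian magic number and slice count, followed by one record per architecture giving its CPU type, file offset, and size. A minimal reader, sketched in Python for illustration (the CPU-type table covers only the architectures mentioned above):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # universal-binary magic, stored big-endian

# A few cputype values from Apple's <mach/machine.h>; 0x01000000 is the
# 64-bit ABI flag. Table is abbreviated for illustration.
CPU_TYPES = {7: "i386", 7 | 0x01000000: "x86_64",
             18: "ppc", 18 | 0x01000000: "ppc64"}

def list_fat_slices(data):
    """Return (cpu_name, offset, size) for each slice of a universal binary."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a universal binary")
    slices = []
    for i in range(nfat_arch):
        # struct fat_arch: cputype, cpusubtype, offset, size, align
        cputype, _sub, offset, size, _align = struct.unpack_from(
            ">IIIII", data, 8 + i * 20)
        slices.append((CPU_TYPES.get(cputype, hex(cputype)), offset, size))
    return slices
```

At launch time, the kernel walks this same table, picks the slice matching the running CPU, and maps only that region of the file.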

A fat elf for Linux

Ryan Gordon is a well-known game porter: he has created ports of commercial games and other software to Linux and Mac OS X. Notable examples of his work are the Linux ports of the Unreal Tournament series, some of the Serious Sam series, the Postal series, Devastation, and Prey, as well as non-gaming software such as Google Earth and Second Life. With this experience he knows both Mac OS X and Linux well, so Ryan is well suited to bring Mac OS X's universal binary functionality to Linux.

His FatELF file format embeds multiple Linux binaries for different architectures in a single file. FatELF is actually a simple container format: it adds some accounting information at the start of the file and then appends all the ELF (Executable and Linking Format) binaries after it, adding padding for alignment. FatELF can be used for both executable files and shared libraries (.so files).
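The real FatELF header records more than this sketch does (the specification distinguishes OSABI, word size, and byte order, not just the machine type), but the container idea can be illustrated with an invented layout. The b"TFAT" magic and record fields below are made up for this example and are not FatELF's actual on-disk format:

```python
import struct

PAGE = 4096     # pad each embedded ELF to a page boundary (illustrative choice)
HDR = "<4sI"    # magic, number of records
REC = "<HII"    # e_machine, file offset, size -- invented fields

def glue(elves):
    """Pack (e_machine, elf_bytes) pairs into one toy fat container."""
    header = struct.pack(HDR, b"TFAT", len(elves))
    table_len = len(header) + len(elves) * struct.calcsize(REC)
    offset = -(-table_len // PAGE) * PAGE       # first blob starts page-aligned
    records, blobs = b"", b""
    for machine, blob in elves:
        records += struct.pack(REC, machine, offset, len(blob))
        padded = blob.ljust(-(-len(blob) // PAGE) * PAGE, b"\0")
        blobs += padded
        offset += len(padded)
    return (header + records).ljust(-(-table_len // PAGE) * PAGE, b"\0") + blobs

def pick(fat, wanted_machine):
    """Return the embedded ELF for wanted_machine, as a loader would."""
    magic, n = struct.unpack_from(HDR, fat, 0)
    if magic != b"TFAT":
        raise ValueError("not a fat container")
    for i in range(n):
        machine, off, size = struct.unpack_from(
            REC, fat, struct.calcsize(HDR) + i * struct.calcsize(REC))
        if machine == wanted_machine:
            return fat[off:off + size]
    return None
```

The page alignment is what lets a loader mmap() the chosen slice directly without copying; everything before and after the selected ELF is simply never touched.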

An obvious downside of FatELF is that an executable's size is multiplied by the number of embedded ELF architectures. However, this only holds for executable files and libraries; common non-executable resources such as images and data files are shipped as they are, without FatELF. A game that ships with hundreds of megabytes of data, for example, will grow only slightly in relative terms.

Moreover, a FatELF binary doesn't require more RAM to run than a regular ELF binary, because the operating system decides which chunk of the file is needed on the current system and ignores the ELF objects for the other architectures. This also means that the entire FatELF file does not have to be read (kernel modules being an exception), so the disk-bandwidth overhead is minimal.

On the project's website, Ryan lists a lot of reasons why someone would use FatELF. Some of them are rather far-fetched, such as:

Distributions no longer need to have separate downloads for various platforms. Given enough disc space, there's no reason you couldn't have one DVD ISO file that installs an x86-64, x86, PowerPC, SPARC, and MIPS system, doing the right thing at boot time. You can remove all the confusing text from your website about "which installer is right for me?"

Another benefit in the same vein is that third party packages no longer have to publish multiple packages for different architectures. An obvious critique is that this multiplies the needed disk space and bandwidth if FatELF is used systematically.

However, there is something to be said for FatELF as a means of abstracting away architecture differences for end-users. For example, install scripts for proprietary Linux software, such as the scripts for the AMD and Nvidia graphics drivers, that select which driver to install based on the detected architecture, could be implemented as FatELF binaries. That seems cleaner than each software vendor implementing its own scripts and flaky logic to detect the right version. Web browser plug-ins are another type of binary that could be an interesting match for FatELF. In support of this idea, Ryan admits that he made flaky shell-script errors himself in the past:

Many years ago, I shipped a game that ran on i686 and PowerPC Linux. I could not have predicted that one day people would be running x86_64 systems that would be able to run the i686 version, so doing something like: exec $(uname -m)/mygame would fail, and there's really no good way to future-proof that sort of thing. As that game now fails to start on x86_64 systems, it would have been better to just ship for i686 and not try to select a CPU arch.
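The failure mode Ryan describes is easy to see: uname -m on a 64-bit machine returns "x86_64", so exec $(uname -m)/mygame looks for a directory that a game shipped with only i686 and PowerPC builds never created. A smarter launcher needs a compatibility table, which is itself a guess about the future; a sketch in Python (the table and function names are hypothetical):

```python
# Which shipped builds a given machine can run, best first. This table is
# illustrative, and it encodes only what is known at ship time -- which is
# exactly the problem: "x86_64" had to be added after such a game shipped.
COMPAT = {
    "x86_64": ["x86_64", "i686"],   # 64-bit x86 can also run the 32-bit build
    "i686":   ["i686"],
    "ppc64":  ["ppc64", "ppc"],
    "ppc":    ["ppc"],
}

def pick_build(machine, shipped):
    """Return the best shipped build for this machine, or None if unknown."""
    for arch in COMPAT.get(machine, [machine]):
        if arch in shipped:
            return arch
    return None
```

A launcher would call something like pick_build(platform.machine(), shipped_dirs). FatELF's appeal is that it removes this guesswork from the vendor entirely: the kernel, which does know the running architecture, makes the choice.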

Another use for FatELF is what Apple used its universal binary for: a transition to a new architecture. The 32-bit to 64-bit transition comes to mind, where FatELF makes it possible to no longer need separate /lib, /lib32 and /lib64 trees. It also makes it possible to get rid of IA-32 compatibility libraries: if you want to run a couple of 32-bit applications on a 64-bit system, you only need FatELF versions of the handful of packages needed by them. But more exotic transitions are also possible, for example when the ELF OSABI (Operating System Application Binary Interface) used by the system changes, or for CPUs that can handle different byte orders.


At the moment, Ryan has written a file format specification and documentation for FatELF. To make the fat binary concept possible on Linux, he created patches for the Linux kernel to support FatELF, and he also adapted the file command to recognize FatELF files, the binutils commands to allow GCC to link against a FatELF shared library, and gdb to be able to debug FatELF binaries. The patches are stored in a Mercurial repository "until they have been merged into the upstream project". The repository also hosts some tools to manipulate FatELF binaries, which are zlib-licensed.

One of the FatELF tools is fatelf-extract, which lets the user extract a specific ELF binary from a FatELF file, e.g. the x86_64 one. The fatelf-split command extracts all embedded ELF binaries, ending up with files like my_fatelf_binary-i386 and my_fatelf_binary-x86_64. The fatelf-info command reports interesting information about a FatELF file. A tool for developers is fatelf-glue, which glues ELF binaries together; because GCC currently can't build FatELF binaries directly, each ELF binary must be built separately and then combined into a FatELF file.

As a proof of concept, Ryan created a VMware virtual machine image of Ubuntu 9.04 in which almost every binary and library is a FatELF file with x86 and x86_64 support. The image can be downloaded and run in VMware Workstation or VMware Player to try out FatELF. But this is not the expected use case: in practice, FatELF would probably be used for only a handful of applications. FatELF files also coexist happily with plain ELF binaries: a FatELF binary can load ELF shared libraries and vice versa.

Relatively simple implementation

Ryan recalls the real point of inspiration for FatELF: a thread on the mailing list of the installer program MojoSetup. On May 20, 2007, he wrote on that list:

I'd love someone to extend the ELF format so that it supports "fat" binaries, like Apple's Mach-O format does for the PowerPC/Intel "Universal" binaries...but that would require coordination and support at several points in the system software stack.

Two years later, Ryan has implemented this idea:

I have a long list of things that Linux should blatantly steal from Mac OS X, and given infinite time, I'll implement them all. FatELF happens to be something on that list that is directly useful to my work as a game developer that also happens to be a simple project. I think the changes required to the system are pretty small for what could be good benefits to Unix as a whole.

So after a few weeks of work in his spare time, Ryan had a working fat binary implementation for Linux. In contrast, building the virtual machine proof of concept literally took days, because automating it took a lot of work. Ryan also spent a lot of time preparing to post the kernel patches:

I was so intimidated by the kernel mailing list, that I spent a disproportionate amount of time researching etiquette, culture, procedure. I didn't want to offend anyone or waste their time.


Overall, the patch that allows the Linux kernel to load a FatELF file was received quite positively, but with some questions. For example, Jeremy Fitzhardinge asked why Ryan made it ELF-specific:

The idea seem interesting, but does it need to be ELF-specific? What about making the executable a simple archive file format (possibly just an "ar" archive?) which contains other executables. The archive file format would be implemented as its own binfmt, and the internal executables could be arbitrary other executables. The outer loader would just try executing each executable until one works (or it runs out).

Later in the discussion, Jeremy adds that a generic approach would allow the last executable in the file to be a shell script. If no other format was supported, this shell script would then be executed, doing something like displaying a useful message. Ryan seems unsure that the added flexibility is worth the extra complications, although he admitted that he would have chosen this route if other executable formats like a.out files "were still in widespread use and actively competed with ELF for mindshare." He also thinks it should be possible to support other executable formats in the existing FatELF format.

Some reactions to the patch that allows kernel modules to be FatELF binaries were less positive. For example, Jeremy objected to it because it would only encourage more binary modules. Ryan understood his concern, but answered: "I worry about refusing to take steps that would aid free software developers in case it might help the closed-source people, too." Jeremy didn't see it that way, however, casting doubt on the use case for FatELF kernel modules:

Any open source driver should be encouraged to be merged with mainline Linux so there's no need to distribute them separately. With the staging/ tree, that's easier than ever.

I don't see much upside in making it "easier" to distribute binary-only open source drivers separately. (It wouldn't help that much, in the end; the modules would still be compiled for some finite set of kernels, and if the user wants to use something else they're still stuck.)

Moreover, even for proprietary kernel modules the use case is not that compelling. Companies like Nvidia have to distribute modules for multiple kernel versions, but since the OSABI version doesn't change between kernel releases, FatELF can't be used to pack drivers for several kernels into one file. So, all in all, FatELF support for kernel modules seems a bit dubious.

In another discussion, Rayson Ho found that Apple (NeXT, actually) has patented the technology behind universal binaries, as a "method and apparatus for architecture independent executable files" (#5432937 and #5604905). Rayson thinks the mixing of 32-bit and 64-bit object files in a single archive on AIX might be considered prior art. David Miller adds another possible prior art: TILO, a variant of the SPARC SILO boot loader, which packs a 32-bit and a 64-bit Linux kernel into one file and figures out which one to actually boot depending on the machine it is running on; but Rayson doubts this counts, because that project was started in 1995 or 1996, while NeXT's patent filing is from 1993. Ryan also entered the discussion and clarified that FatELF has a few fields that Apple's format doesn't, so the flow chart in the patent isn't the same. However, it's not yet clear whether Ryan should be concerned and, if so, which changes he should make to work around the patent.

The future

There is still a lot to do. Patches for module-init-tools, glibc (for loading FatELF shared libraries), and elfutils have yet to be written, and the patches for binutils and gdb have yet to be submitted, Ryan said:

I've only submitted the kernel patches. If the kernel community is ultimately uninterested, there's not much point in bothering the binutils people. The patches for all the other parts are sitting in my Mercurial repository. If FatELF makes it into Linus's mainline, several other mailing lists will get patches sent to them right away.

Ryan even thinks about embedding binaries from other UNIXes into a FatELF file. He mentions FreeBSD, OpenBSD, NetBSD and OpenSolaris. In principle, each operating system using ELF files for its binaries could be supported. In addition to the ones mentioned, this also includes DragonFly BSD, IRIX, HP-UX, Haiku, and Syllable. The implementations should not be difficult, according to Ryan:

You have to touch several parts of the system, but the changes you have to make to them are reasonably straightforward, so you'll probably spend more time getting comfortable with their code than patching it. And then twice as long trying to figure out how to boot a custom kernel and libc.

Support for other operating systems would make it possible to ship one file that works on both Linux and FreeBSD, for example, without a platform compatibility layer. This could also be an interesting feature for hybrid Debian GNU/Linux and Debian GNU/kFreeBSD binaries.

The biggest hurdle FatELF faces now is adoption, Ryan explains:

If Linus applies it in the 2.6.33 merge window and every other project puts the patches into revision control, too, we're looking at maybe 6 to 12 months before distributions pick it up and some time later before you can count on people running those distributions.

Another disadvantage is the difficulty of creating fat binaries with existing build systems; Erik de Castro Lopo, for example, writes about this on his blog. According to Ryan, making build systems handle this situation cleanly still needs some work. He expects the most popular way to build FatELF files will be to do two totally independent builds and glue them together, rather than rethinking autoconf and friends.


While a universal binary seems much less interesting for Linux than for Mac OS X, because most Linux software is installed through a package manager that knows the architecture, the concept is interesting for proprietary Linux software such as games. For a non-expert user, it is not obvious whether their processor is 32-bit or 64-bit; a FatELF download embedding both the x86 and x86_64 binaries could be a good solution to that problem. And if ARM-based smartbooks become more popular, an x86/x86_64/ARM FatELF binary may be the perfect way to distribute a binary that works on 32-bit Intel Atom netbooks, 64-bit Intel computers, and ARM smartbooks.

Comments (25 posted)

System Applications

Database Software

MySQL Community Server 5.0.87 released

Version 5.0.87 of MySQL Community Server has been announced; it includes numerous bug fixes.

Full Story (comments: none)

MySQL Community Server 5.1.40 has been released

Version 5.1.40 of MySQL Community Server has been announced. "MySQL Community Server 5.1.40, a new version of the popular Open Source Database Management System, has been released. MySQL 5.1.40 is recommended for use on production systems."

Full Story (comments: none)

PostgreSQL 8.5alpha2 released

Version 8.5alpha2 of the PostgreSQL DBMS has been announced. "The second alpha release for PostgreSQL version 8.5, 8.5alpha2, is now available. This alpha contains several new major features added since the previous alpha. Please download, install, and test it to give us early feedback on the features being developed for the next version of PostgreSQL."

Comments (none posted)

PostgreSQL Weekly News

The October 25, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

Web Site Development

lighttpd 1.4.24 released

Version 1.4.24 of the lighttpd web server has been announced. "Update: There is a small regression in mod_magnet, see #1307. We finally added TLS SNI, and many other small improvements. We also fixed pipelining (that should fix problem with lighty as debian mirror) and some mod_fastcgi bugs – this should result in improved handling of overloaded and crashed backends (you know which one :D)."

Comments (none posted)

luban 0.2a2 released

Version 0.2a2 of luban, a generic (web/native) user interface builder, has been announced. "The luban package is a python-based, cross-platform user interface builder. It provides UI developers a generic language to describe a user interface, and the description can be rendered as web or native interfaces. Gongshuzi, an application built by using luban, can help users visually develop UIs and run the UIs as web or native applications."

Full Story (comments: none)

nginx 0.7.63 announced

Version 0.7.63 of the nginx web server has been announced. See the CHANGES document for more information.

Comments (none posted)


Symbian releases microkernel

The Symbian Foundation has announced the release of the platform microkernel (EKA2) and supporting development kit under the Eclipse Public License (EPL). "To enable the community to fully utilise the open source kernel, Symbian is providing a complete development kit, free of charge, including ARM's high performance RVCT compiler toolchain. The provision of the kit demonstrates Symbian's commitment to lowering access barriers to encourage the wider development community - such as research institutions, enthusiast groups and individual developers - to get creative with the code."

Comments (28 posted)

Desktop Applications

Business Applications

Tryton 1.4 is available

Version 1.4 of Tryton has been announced. "Tryton is a three-tiers high-level general purpose application platform under the license GPL-3 written in Python and using PostgreSQL as database engine. It is the core base of a complete business solution providing modularity, scalability and security. This new series comes up with new modules, security and performance improvements as well as the SQLite support and welcomes the arrival of Neso, the standalone version of Tryton."

Full Story (comments: none)

Desktop Environments

GNOME 2.28.1 released

Version 2.28.1 of GNOME has been announced. "This is the first update to GNOME 2.28. It contains the usual mixture of bug fixes, translations updates and documentation improvements that are the hallmark of stable GNOME releases, thanks to our wonderful team of GNOME contributors! The next stable version of GNOME will be GNOME 2.28.2, which is due on December 16. Meanwhile, the GNOME community is actively working on the development branch of GNOME that will lead to the next major release in March 2010."

Full Story (comments: none)

Gnome Foundation meeting minutes published

The Gnome Foundation's October 15, 2009 meeting minutes have been published. "Attendance * Diego Escalante * Germán Póo-Caamaño * Lucas Rocha * Srinivasa Ragavan * Stormy Peters".

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at

Comments (none posted)

KDE4 Demonstrates Choice Is Not A Usability Problem (KDEDot)

KDE.News presents an article by Daniel Memenode that contrasts the availability of features with usability. "KDE always stood out as a desktop environment that doesn't shy away from giving you lots of options and features. It is no wonder that one of its flagship products, Konqueror, was often compared to a swiss army knife. It could be used as a file manager for both local and remote files, an image viewer and a fairly powerful web browser shifting from one role to the other as needed. This however came with one downside in that it increased the perceived complexity of the desktop environment and increased the learning curve of a new user. KDE 4 was expected, among other things, to come with innovations which would possibly resolve this issue and I think it has already made some significant strides in that direction."

Comments (none posted)

The Semantic Desktop Wants You (KDEDot)

KDE.News takes a look at Nepomuk. "The KDE team working on Nepomuk aims to bring the Semantic Desktop to KDE 4, allowing applications to share and respond intelligently to meta data about files, contacts, web pages and more. Let us make this short: Nepomuk is an important project for the future KDE desktop. Its goal is to get all the information available on the system to the user. You are receiving an email - Nepomuk should show you information relevant to related projects or persons or tasks. You look at images of a person - Nepomuk should have links to other images of that person or unanswered emails or events you met that person at. You open the video player - Nepomuk should propose to watch the next episode in the series you are currently watching."

Comments (66 posted)

what's brewing in KDE?

Sebastian Kügler points out some highlights from the KDE blogs including KDE on Maemo, Journal Viewer, Visual improvements in the window decoration and Qt opens up further.

Full Story (comments: none)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at

Comments (none posted)

X11R7.5 has been released

The X.Org Foundation has announced the release of X11R7.5. "X11R7.5 supports Linux, BSD, Solaris, MacOS X, Microsoft Windows and GNU Hurd systems. It incorporates new features, and stability and correctness fixes, including improved autoconfiguration heuristics, enhanced support for input devices, and new options for reconfiguring the screen geometry while the system is running." Click below for the full announcement including more details on new features in X11R7.5.

Full Story (comments: 14)

Hutterer: X11R7.5 released - but what is it?

As if in answer to some of the questions posed in the comments on our announcement of X11R7.5, Peter Hutterer has a great description of what makes up the release on his blog. "Since then, the X11R7.x releases (referred to as "katamari") are quite like distributions. They cherry-pick a bunch of module versions known to work together and combine them into one set. The modules themselves move mostly independent of the katamaris and thus their version numbers may skip between katamaris. For example, X11R7.4 had the X Server 1.5, X11R7.5 has X Server 1.7."

Comments (4 posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Financial Applications

SQL-Ledger 2.8.26 released

Version 2.8.26 of SQL-Ledger, a web-based double entry accounting/ERP system, has been announced. Changes include: "1. Version 2.8.26 2. fixed AR aging duplicates in report 3. DST duedate and terms calculation".

Comments (none posted)


Gluon Sprint Wrap-Up (KDEDot)

KDE.News reports from the Gluon sprint recently held in Munich. "Gluon was conceived when the project's creator, Sacha Schutz, looked around the internet and saw how popular casual games based on Flash were. He saw the need for something which would make it possible to create similar games in a simple manner using technologies unrestricted by the closed world of proprietary software."

Comments (none posted)


Inkscape 0.47pre4 is out

Version 0.47pre4 of the Inkscape vector graphics editor has been announced. "Hopefully pre4 is the final prerelease. Please download the files and let us know if you stumble upon any serious bugs except the infamous crash when undoing changes in live path effects. We probably won't release the final version within next couple of weeks, because we really need the LPE bug fixed."

Comments (none posted)


Wine 1.1.32 announced

Version 1.1.32 of Wine has been announced. Changes include: "- Many crypto fixes, particularly on 64-bit. - Improved DVD access on Mac OS. - Several common controls improvements. - Various HTML support improvements. - More DIB optimizations. - Various bug fixes."

Comments (none posted)

Medical Applications

Call for Participation: openSUSE Medical

The openSUSE_Medical project has been launched. "I'm pleased to announce an new Subproject from openSUSE: openSUSE_Medical. This new Project tries to package more Software for doctors's practice or clinical needs. With our work we try to bridge a gap in the market."

Full Story (comments: none)

Music Applications

guitarix 0.05.1-1 released

Version 0.05.1-1 of guitarix has been announced; it includes several new features and some bug fixes. "guitarix is a simple Linux Rock Guitar amplifier and is designed to achieve nice thrash/metal/rock/blues guitar sounds. Guitarix uses the Jack Audio Connection Kit as its audio backend and brings in one input and two output ports to the jack graph."

Full Story (comments: none)

probability sequencing language 1.02 released

Version 1.02 of probility sequencing language has been announced. "probability sequencing language 1.02 has been released. psl is a text based piano roll language that is inspired by the probability in jeskola buzz, but with more control than you can get in a midi based envir[on]ment. every note has a percentage chance of hitting or it is marked with an x. The frequency on the roll is entirely up to the user. support for decimals has been added to 1.02."

Full Story (comments: none)

Office Applications

SeaMonkey 2.0 released

The SeaMonkey 2.0 release is out. "The combination of an Internet browser, email & newsgroup client, HTML editor, IRC chat and web development tools, that has already established a wide user base in its previous incarnations, has been rebuilt on top of the modern Mozilla platform, featuring world-class add-on management among other things. In addition, it has been improved with feed support (including an RSS and Atom feed reader in the mail component), a modern look, restoration of browser tabs and windows after crashes or restarts, tabbed mail, automated updates, smart history search from the location bar, faster JavaScript, HTML5 features (for example video and downloadable fonts), and even support for the Lightning calendar add-on (which will issue a beta for installation on SeaMonkey 2.0 in the next few weeks)." More information can be found in the release notes.

Full Story (comments: 8)


XYZCommander 0.0.2 released

Version 0.0.2 of XYZCommander has been announced. "I'm pleased to announce the XYZCommander version 0.0.2! XYZCommander is a pure console visual file manager."

Full Story (comments: none)

Languages and Tools


GCC adds support for Renesas RX processor

The GNU Compiler Collection (GCC) now has support for the Renesas RX processor. "Support has been added for the Renesas RX processor (RX) target by Red Hat, Inc."

Comments (none posted)

LLVM 2.6 released

Version 2.6 of the LLVM compiler is out. There's a lot of new stuff here, including much-improved x86-64 code generation, link-time optimization support, a number of new architectures supported, DragonEgg (using LLVM for GCC code generation), and more. "A major highlight of the LLVM 2.6 release is the first public release of the Clang compiler, which is now considered to be production quality for C and Objective-C code on X86 targets. Clang produces much better error and warning messages than GCC and can compile Objective-C code 3x faster than GCC 4.2, among other major features."

Full Story (comments: 69)


Caml Weekly News

The October 27, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)


Python 2.6.4 released

Version 2.6.4 of Python has been announced. "This is the latest production-ready version in the Python 2.6 series. We had a little trouble with the Python 2.6.3 release; a number of unfortunate regressions were introduced. I take responsibility for rushing it out, but the good news is that Python 2.6.4 fixes the known regressions in 2.6.3. We've had a lengthy release candidate cycle this time, and are confident that 2.6.4 is a solid release. We highly recommend you upgrade to Python 2.6.4."

Full Story (comments: none)

Proposal: Moratorium on Python language changes

Guido van Rossum has proposed that the Python development community stop changing language features for "several years." "The reason is that frequent changes to the language cause pain for implementors of alternate implementations (Jython, IronPython, PyPy, and others probably already in the wings) at little or no benefit to the average user (who won't see the changes for years to come and might not be in a position to upgrade to the latest version for years after)." Besides, he would really like to see the community working on building acceptance for Python 3.

Full Story (comments: 62)

Python-URL! - weekly Python news and links

The October 25, 2009 edition of the Python-URL! is online with a new collection of Python article links.

Full Story (comments: none)

ffnet 0.6.2 released

Version 0.6.2 of ffnet, a feed-forward neural network training solution for python, has been announced. "This release contains minor enhancements and compatibility improvements: - ffnet works now with >=networkx-0.99; - neural network can be called now with 2D array of inputs, it also returns numpy array instead of python list; - readdata function is now alias to numpy.loadtxt; - docstrings are improved."

Full Story (comments: none)

mds-utils 1.2.0 released

Version 1.2.0 of mds-utils has been announced; it adds some new capabilities. "mds-utils is a library intended to become a collection of several C++ utilities. It makes heavy usage of the Boost C++ libraries."

Full Story (comments: none)

python-daemon 1.5.2 released

Version 1.5.2 of python-daemon has been announced. "Since version 1.5 the following significant improvements have been made: * The documented option 'prevent_core', which defaults to True allowing control over whether core dumps are prevented in the daemon process, is now implemented (it is specified in PEP 3143 but was accidentally omitted until now). * A document answering Frequently Asked Questions is now added."

Full Story (comments: none)
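For the curious, the behavior behind python-daemon's prevent_core option can be illustrated without the library itself: PEP 3143 specifies that a daemon should prevent core dumps (which may leak sensitive data), and the usual mechanism is clearing the core-dump resource limit. This standalone sketch uses only the standard-library resource module; the helper name is my own, not python-daemon's API.

```python
import resource

def prevent_core_dump():
    """Forbid core dumps for this process by zeroing the RLIMIT_CORE limit,
    the same effect python-daemon's prevent_core=True option aims for."""
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

prevent_core_dump()
# Both the soft and hard limits are now zero, so the kernel will not
# write a core file if this process crashes.
print(resource.getrlimit(resource.RLIMIT_CORE))  # (0, 0)
```

Lowering the hard limit is a one-way operation for an unprivileged process, which is exactly what a security-conscious daemon wants: code running later in the process cannot re-enable core dumps.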


Tcl-URL! - weekly Tcl news and links

The October 23, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.

Full Story (comments: none)

Version Control

GIT released

A new version of the GIT distributed version control system has been announced; it includes several bug fixes.

Full Story (comments: none)

Stacked Git 0.15 released

Version 0.15 of Stacked Git has been announced; it includes some new capabilities. "StGit is a Python application providing functionality similar to Quilt (i.e. pushing/popping patches to/from a stack) on top of Git. These operations are performed using Git commands, and the patches are stored as Git commit objects, allowing easy merging of the StGit patches into other repositories using standard Git functionality."

Full Story (comments: none)

Page editor: Forrest Cook


Non-Commercial announcements

EFF: 'Hall of Shame' Calls Out Bogus Internet Censorship

The Electronic Frontier Foundation has announced the launch of its Takedown Hall of Shame site. "Websites like YouTube have ushered in a new era of creativity and free speech on the Internet, but not everyone is celebrating. Some of the web's most interesting content has been yanked from popular websites with bogus copyright claims or other spurious legal threats. So today the Electronic Frontier Foundation (EFF) is launching its "Takedown Hall of Shame" to call attention to particularly bogus takedowns -- and showcase the amazing online videos and other creative works that someone doesn't want you to see."

Full Story (comments: none)

FSFE: Solution for Oracle/Sun deal: Make MySQL independent

The Free Software Foundation Europe has a press release covering its thoughts on how to resolve the Oracle-Sun merger issues regarding MySQL. The European Commission is currently looking at the merger and the disposition of MySQL is seen as one of the biggest stumbling blocks to its approval. The press release refers to FSFE President Karsten Gerloff's lengthy blog posting, which lays out the case for making an independent organization for MySQL: "The dual-licensing approach, and the reliance on proprietary licenses as a source of revenue, has severely hampered the growth of what could have turned by now into a much bigger ecosystem. The strategy has led to a huge gap between the original developer (MySQL as a company) and second-tier firms providing support and development services. It also forced developers who wanted to contribute to MySQL to sign unequal copyright agreements. Some did, some didn't. As a consequence, MySQL's development community is not as strong as it could be."

Comments (8 posted)

Introducing L2Ork, the Linux Laptop Orchestra

Ivica Ico Bukvic has announced L2Ork, the Digital Interactive Sound and Intermedia Studio (DISIS) Linux Laptop Orchestra project. "I wanted to share with you my latest Linux-based project that has been sucking up most of my time over the past year or so to the point it seemed as if I have disappeared off the face of the Earth."

Full Story (comments: 3)

PgUS Board nominations now open

Nominations are open for the PgUS Board. "We are now accepting nominations for the United States PostgreSQL Association (PgUS) board for the Fall 2009 elections; please submit nominations".

Full Story (comments: none)

White House goes Open Source (Netcraft)

Netcraft reports that the White House has changed its web content management system from Microsoft IIS 6.0 to Drupal. "The White House launched a new version of its website on Saturday. While little has changed on the surface, the underlying technology is now powered by the open source Drupal content management system."

Comments (none posted)

Commercial announcements

EnterpriseDB announces strategic investment by Red Hat (Reuters)

EnterpriseDB has announced a partnership with Red Hat. "EnterpriseDB, the enterprise Postgres company today announced that Red Hat, the world's leading provider of open source solutions, has made a financial investment in EnterpriseDB as part of a partnership aimed at increasing enterprise adoption of open source IT infrastructure. "EnterpriseDB has clearly established itself as a leading enterprise Postgres company, which is why Red Hat has chosen to partner with and invest in the company. EnterpriseDB is also working to create customer value through a subscription support model. Clearly, this is a model we see as beneficial," said Jim Whitehurst, CEO of Red Hat."

Comments (9 posted)

MontaVista releases new market-specific distributions for MVL6

MontaVista has announced market-specific distributions for MVL6. "MontaVista® Software, Inc., the leader in embedded Linux® commercialization, today announced more new Market Specific Distributions (MSDs) for MontaVista Linux 6. The new MSDs continue to expand the market specific focus of MVL6, delivering support for industrial automation, automotive, Android, portable multimedia devices, and multicore networking applications. All the new MSDs will be available this quarter and support processors from Cavium, Freescale, Intel, and Texas Instruments."

Full Story (comments: none)

Qualcomm announces mobile open-source subsidiary

Qualcomm has announced the launch of a new subsidiary with a focus on open-source mobile development. "Qualcomm Incorporated, a leading developer and innovator of advanced wireless technologies, products and services, today announced that it has established a separate wholly-owned subsidiary, Qualcomm Innovation Center, Inc. (QuIC), focused on mobile open source platforms. QuIC has brought together a dedicated group of engineers to optimize open source software with Qualcomm technology. The QuIC board of directors has named Rob Chandhok, senior vice president of software strategy for Qualcomm CDMA Technologies, as president of QuIC." (Thanks to Lasse Bigum).

Comments (none posted)

Raytheon unveils Linux 'Insider Threat' rooter-out routers (the Register)

The Register covers Raytheon's use of Linux in its routers. "US armstech mammoth Raytheon has announced that its "government insider threat management solution" for information security will be powered by Linux. Penguin-inside crypto modules to be used in Raytheon's mole-buster tech have now passed tough federal security validation, apparently. The insider-threat detector gear in question is Raytheon's SureView™, designed to root out the whole spectrum of security no-nos from "accidental data leaks" through "well-intentioned but inappropriate policy violations" to "deliberate theft of data"."

Comments (none posted)

Sequoia to release voting system software

Sequoia Voting Systems has announced that it will release the source for its "Frontier Election System" offering in November. "Fully disclosed source code is the path to true transparency and confidence in the voting process for all involved. Sequoia is proud to be the leader in providing the first publicly disclosed source code for a complete end-to-end election system from a leading supplier of voting systems and software." This release is carefully not described as "open source," and, in any case, source availability is not a full solution to the problem. But it still looks like a step in the right direction.

Comments (18 posted)

Articles of interest

Blaming Intel for how the world is (Moblin Zone)

Moblin Zone has a lengthy justification for Intel's GMA500 (aka "Poulsbo") graphics hardware. The post is in response to a Linux Journal article that lambasted Intel for "kicking its friends in the face" by using hardware that requires closed drivers. Essentially, Moblin Zone argues that Intel was targeting the device, not computer, market with "Menlow" (which includes the Poulsbo hardware). "Not only is there no significant penalty for closed drivers in the device world, sometimes, they work out better. There's a business advantage, in terms of vendor lock-in. If I'm a chip maker, my customer has to come back to me for a new driver or source-level license (with non-disclosure agreement) when they begin working on a new product model, or a firmware upgrade. In the thin-margin world of device parts, that kind of ongoing revenue stream might make the difference between getting by or having to lay off engineers."

Comments (51 posted)

Tilera Readies Processors With 100 Cores (InformationWeek)

InformationWeek covers Tilera's latest releases in its TILE-Gx multi-core processor family. "Tilera on Monday introduced a series of general purpose processors ranging from 16 to 100 cores for use in servers. The processors would replace multiple processors and lower system costs. While it is too soon to tell whether Tilera's TILE-Gx family will one day challenge Xeon and Opteron server chips from Intel and Advanced Micro Devices, respectively, the announcement points to the ongoing industry trend of adding cores to boost performance."

Comments (13 posted)


Cloud Computing: Good or Bad for Open Source? (Linux Journal)

Glyn Moody discusses open-source software and cloud computing on Linux Journal. "Cloud computing: you may have heard of it. It seems to be everywhere these days, and if you believe the hype, there's a near-unanimous consensus that it's the future. Actually, a few of us have our doubts, but leaving that aside, I think it's important to ask where does open source stand if the cloud computing vision *does* come to fruition? Would that be a good or bad thing for free software?"

Comments (24 posted)

New DoD memo on Open Source Software (David Wheeler's Blog)

David Wheeler investigates a new clarifying statement [PDF] for an old Department of Defense policy on the use of open-source software. "This 2009 memo is important for anyone who works with the DoD (including contractors) on software and systems that include software... and I suspect it will influence many other organizations as well. Let me explain why this new memo exists, and what it says. Back in 2003 the DoD released a formal memo titled Open Source Software (OSS) in the Department of Defense. This older memo was supposed to make it clear that it was fine to use and develop OSS in the DoD. Unfortunately, as the new 2009 memo states, "there have been misconceptions and misinterpretations of the existing laws, policies and regulations that deal with software and apply to OSS that have hampered effective DoD use and development of OSS"."

Comments (58 posted)

Teaching with Tux (Linux Journal)

Over at Linux Journal, Mike Diehl looks at three educational programs all featuring Tux the penguin. Programs to teach typing and practice math skills are two of those he looks at, in addition to TuxPaint, which didn't, at first, strike him as particularly educational: "So how is this educational? At the lower ages, this might simply be a first introduction to using the mouse. In this case, the parent or educator would help the student select colors and draw lines and shapes. Older, pre-readers, could use this program to tell a story in storyboard fashion. Still older children could use this program to create their own comic strips complete with text. Of course, you could also use Tux Paint to teach students art concepts like color, line, and texture. It doesn't matter how you use it though. Tux Paint is a lot of fun."

Comments (6 posted)

Education and Certification

LPI at Software Freedom Day Tunis, Tunisia

The Linux Professional Institute will be holding Linux certification exams at the Software Freedom Day on October 31 in Tunis, Tunisia. "Software Freedom Day (SFD) is an annual worldwide celebration of Free/Open Source software. LPI's affiliate for the region, LPI-Maghreb (Tunisia, Algeria, Morocco and Libya) has participated in the event for the last four years."

Full Story (comments: none)

Advanced Scientific Programming in Python School - Warsaw, Poland

A class on Advanced Scientific Programming in Python will take place on February 8-12, 2010 in Warsaw, Poland. "Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists actually use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques with theoretical lectures and practical exercises tailored to the needs of a programming scientist."

Full Story (comments: none)

Event Reports

Web 2.0 Summit: The Browser Is What Matters (InformationWeek)

InformationWeek covers comments by Google's VP of product management, Sundar Pichai, at the Web 2.0 Summit. "To the suggestion that Chrome OS -- the operating system that Google is developing around its Chrome browser -- is on a collision course with Windows, Pichai responded that the world is entering a period of tremendous innovation in personal computing. "Browsers are suddenly hot again and I think operating systems are too," he said, referring both to Chrome OS and Android, Google's operating system for mobile devices. "There haven't been other choices for a long time," he said. "Most operating systems today were designed before the Web existed." "The goal with both our efforts is to get great free open source software stacks out there," he said. In the case of Chrome OS, everything is built around the browser."

Comments (32 posted)

Calls for Presentations

CeBIT Open Source 2010: Call for Projects

A Call for Projects has gone out for CeBIT Open Source 2010. "The largest IT trade show on earth will take place from March 2 through 6 in Hannover, Germany. The Deutsche Messe organization that runs the trade show initiated Open Source as a theme focus for the first time in 2009, and the surge of visitors into a constantly packed hall exceeded all expectations. It's clear that Open Source will play a major role again at CeBIT in 2010. As an incentive, the theme will get a prominent new location in Hall 2, where exhibitors, the Open Source Forum and the Open Source Project Lounge will find a new home."

Comments (none posted)

RailsConf call for proposals

A call for proposals has gone out for RailsConf 2010; submissions are due by March 17. "The Call for Participation has opened for RailsConf 2010, when the Ruby on Rails community will gather June 7-10, 2010, at the Baltimore Convention Center in Baltimore, MD. RailsConf, co-produced by Ruby Central, Inc. and O'Reilly Media, Inc., is the largest official conference dedicated to everything Rails. Program chair Chad Fowler invites proposals for conference sessions, workshops, and panels from Rubyists, hackers, web developers, system administrators, and anyone else with a passion for Rails."

Full Story (comments: none)

Upcoming Events

A Hackfest To Improve Linux Video Playback (Phoronix)

A Linux video playback hackfest has been announced. "When it comes to video playback on Linux, the premiere choice for video acceleration is currently using VDPAU with its CPU-efficient, GPU-accelerated capabilities that even has no problems playing 1080p video files with extremely low-end hardware. However, VDPAU is not yet widespread in all Linux video drivers, and other free software developers have been working on improving other areas of the Linux video stack too. One of these developers is GNOME's Benjamin Otte who has been working on using Cairo/Pixman for raw video in GStreamer. Additionally, he has organized a Linux video "hackfest" that will take place next month in Barcelona, Spain to further this Linux video playback work." (Thanks to James).

Comments (none posted)

PostgreSQL Conference 2009 Japan

Registration is open for the PostgreSQL Conference 2009 Japan. "The PostgreSQL Conference 2009 Tokyo Executive Committee are proud to announce that the two days programme sessions, JPUG 10th Anniversary Conference, are going to be held on 20th and 21st November, 2009, at AM Hamamatsucho, Tokyo."

Full Story (comments: none)

Page editor: Forrest Cook

Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds