In particular, developers are concerned and unhappy about the copyright assignment policy that Canonical has chosen for all of its projects. This agreement [PDF] is a relatively simple read; it fits on a single page. It applies to a long list of projects, including Bazaar, Launchpad, Quickly, Upstart, and Notify-osd; contributions to any of those projects must be made under the terms of this agreement.
So what do contributors agree to? The core term is this:
So Canonical gets outright ownership of the code. In return, Canonical gives the original author rights to do almost anything with that code.
Assigning copyright to Canonical could well be an obstacle for potential contributors, but there are a couple of other terms which make things worse. One of them is this:
There are many free software developers who might balk at giving their code away to somebody who "ordinarily" will make it available under a free license. And the final sentence is even worse; "other license terms" is, of course, euphemistic language for "proprietary terms."
Finally, there is the patent pledge:
This language is likely to be just fine for many developers who have no intention of asserting patents against anybody anyway. But it's worth noting that (1) the patent grant is broad, including anything which might be added to the program (by others) in the future, and (2) there is no "self defense" exception allowing patents to be used to fight off litigation initiated by others. So, to a patent holder, this language is going to look like a unilateral disarmament pledge with unknown (and unknowable) scope. For many companies - even those which are opposed to software patents in general - that requirement may well be enough, on its own, to break the deal.
Contributor agreements abound, of course, though their terms vary widely. One might compare Canonical's agreement with the Free Software Foundation's language, which reads:
Not all developers are big fans of the FSF, but most of them trust it to live up to those particular terms. The FSF agreement makes no mention of patents at all (though GPLv3 is certainly not silent on the subject).
What about other projects? The Apache Software Foundation has an agreement by which the ASF is granted a license (which it promises not to use "in a way that is contrary to the public benefit or inconsistent with its nonprofit status") but the author retains ownership of all code. Sun's contributor agreement [PDF], which now covers MySQL too, gives Sun the right to do anything with the code, but shares joint ownership with the author. An extreme example is the SugarCRM agreement, which appears to transfer not just the author's copyrights, but his or her patents (the actual patents, not a license) as well.
Agreements like Sun's and SugarCRM's are common when dealing with corporate-owned projects; they clearly prioritize control and the ability to take things proprietary over the creation of an independent development community. More community-oriented projects, instead, tend to take a different approach to contributor agreements. Canonical is being criticized in a way that SugarCRM is not, despite the fact that Canonical's agreement appears to be the friendlier of the two. A plausible reason for that difference is that Canonical presents itself as a community-oriented organization, but it is pushing a more corporate-style contributor agreement.
Canonical's policy is especially likely to worry other Linux distributors. They are often happy to contribute to a project controlled by a different distributor, but they do not normally do so under terms which allow the recipient to take the code proprietary. Licenses like the GPL ensure fair dealing between companies; contributor agreements which allow "other license terms" remove any assurance of fair dealing. It is not surprising that some people are uninterested in contributing code under such terms.
The real sticking point, at the moment, appears to be Upstart. Other distributors either have adopted it or are considering doing so; it does appear to be a substantial improvement over the old SYSV init scheme. In the course of adopting Upstart, these distributors are certain to fix problems and make improvements to suit their needs. But they are rather less certain to contribute those changes back under Canonical's terms. In his wanderings, your editor has heard developers talk about possibly forking Upstart. Another developer claimed to be working on a completely new alternative system for system initialization which would take lessons from Upstart, but which would be an independent development. Neither of these outcomes seems optimal.
Your editor sent in a query asking what prevents Canonical from adopting more contributor-friendly terms, but got no answer over the course of a couple of days. Groups requiring copyright assignment often claim that it's necessary for them to be able to take action against copyright infringers. But the projects which have had the most success in that area - the Linux kernel and Busybox, for example - have no copyright assignment policy. The other thing that copyright assignment allows, of course, is a relicensing of the code. The FSF has made use of this privilege to move its projects to GPLv3; companies like MySQL have, instead, used it to ship code under proprietary terms. One might assume that Canonical has no such intent, but the fact that Canonical has explicitly reserved the right to do so is unlikely to make people comfortable.
When developers contribute code to a project, they tend to get intangible rewards in return. So asking them to hand over ownership of the code as well might seem to be pushing things a little too far. Even so, many developers are willing to contribute under such terms. But there are limits, and allowing a competitor to take code proprietary may well be beyond those limits - as are overly-broad patent grants for contributors who are concerned about such things. Companies which demand such rights may find that their community projects are not as successful as they would like.
Mozilla Labs recently pulled the covers off of Raindrop, a new project that attempts to rethink how messaging software presents information to users. In one sense, Raindrop is designed to function as a "grand unified inbox" aggregating email, instant messaging, and a wide range of site-specific message channels. These channels otherwise exist in complete isolation, which requires users to check multiple applications and web services to collect their incoming communication. But Raindrop also strives to better present the aggregated dispatches and notices, automatically sorting individual conversations from group discussions, and personal messages from automated announcements and updates.
Raindrop's web page says its mission is making it "enjoyable to participate in conversations from people you care about, whether the conversations are in email, on twitter, a friend's blog or as part of a social networking site." To that end, the application abstracts away from the user the work of retrieving messages, notifications, and replies from the various web and email accounts, and presents them together as a unified whole. On top of that, Raindrop attempts to figure out which messages and conversations are most likely to be important to the user, and filters them up to the top of the stack. One of the introduction videos gives the example of sorting personal email to the top of the stack, while putting automatically-generated alerts to the bottom.
Clearly, there are more than just two categories of message (personal and automatic); Raindrop filters list email as well, but pays more attention to threads in which the user is participating. On the microblogging front, Raindrop classifies direct messages and "@" replies above general status notices, and can thread back-and-forth exchanges just like email.
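This kind of triage can be sketched with a small heuristic. The code below is purely illustrative; the field names and rules are invented for this example and bear no relation to Raindrop's actual data model:

```python
import re

PRIORITY = {"personal": 0, "group": 1, "notification": 2}

def classify(msg):
    """Toy message triage in the spirit of Raindrop's sorting:
    direct messages and @-replies rank above list traffic, which
    ranks above automated notices. (Field names are hypothetical.)"""
    if msg.get("direct") or re.match(r"@\w+", msg.get("body", "")):
        return "personal"       # person-to-person conversation
    if "list-id" in msg.get("headers", {}):
        return "group"          # mailing-list discussion
    if msg.get("from", "").startswith("noreply@"):
        return "notification"   # automated announcement
    return "group"

# Sort a mixed "inflow" so personal messages float to the top.
inflow = [
    {"from": "noreply@example.com", "body": "Your order has shipped"},
    {"direct": True, "from": "alice@example.com", "body": "lunch?"},
]
inflow.sort(key=lambda m: PRIORITY[classify(m)])
```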
Raindrop's interface is undergoing constant study and redesign, but at present it features a "home" screen with a combined list of all messages, plus links to specialized views for content, including "Direct Messages," "Group Conversations," "Mail Folders," and "Mailing Lists." Raindrop's home screen sorts the newest conversations at the top, and all messages appear as conversation "bubbles" with a preview of their contents. Raindrop threads related messages together, and it flags each conversation with an icon to distinguish between what it believes are person-to-person conversations, group discussions, and announcements or other impersonal messages. At the bottom of the home screen is a summary block for content in which the user is not a direct participant — general Twitter updates, mailing list threads between other people, and so on.
The Raindrop team is making considerable efforts to solicit input and feedback from real-world users in order to adapt its design. The project's "guiding principles" emphasize its user-centric and participatory process. The project has not yet made a pre-packaged release, but has tagged a 0.1 milestone in its source code repository. In keeping with the participatory goal, the designers have issued two previews of interface changes, although neither is yet available to run.
After checking out the code, the included script check-raindrop.py will check for specific Python packages (including Twisted, Paisley, and several support packages for dealing with specific services such as Twitter and RSS feeds) and check that CouchDB is configured and running. Once the script reports that everything is satisfied, users must manually create a ~/.raindrop file and populate it with account information for the services they wish to monitor. The current release includes IMAP email accounts, generic RSS feeds, and the popular commercial services Gmail, Twitter, and Skype.
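The exact syntax of ~/.raindrop is described in the installation guide; purely as a hypothetical illustration, an IMAP account entry might look something like this INI-style sketch (every section and key name below is invented):

```ini
; Hypothetical sketch only -- consult the Raindrop installation
; guide for the real section and key names.
[account-work-imap]
proto = imap
host = imap.example.com
port = 993
ssl = true
username = me@example.com
password = secret
```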
Once Raindrop is configured, users start the service with the included run-raindrop.py script. The first time through, according to the installation guide, this script should be executed as:
run-raindrop.py sync-messages --max-age=Ndays

which will fetch the previous N days' worth of incoming messages from each account. run-raindrop.py can take several minutes to process each account, so it is best to choose a small value for N. When the import finishes, users can access the Raindrop application from a browser at the local address given in the installation guide.
At first glance, Raindrop's home screen shows what one would expect from any email client: message threads. More subtle than this automatic threading is Raindrop's attempt to combine all message sources into one "inflow" (as the project calls it). Each conversation category Raindrop presents combines threads from all of the configured accounts; "Group Conversations" contains email and @replies, "Sent" contains outgoing tweets and mail, and so on.
Lead designer Bryan Clark describes this automated sifting of content by message type as one of the key goals of the project. The first iteration of the user interface looks much like a webmail client, but the prototypes posted by the developers indicate that they plan to push the separation of different content types even further, perhaps clustering announcements and other non-personal messages into separate areas of the screen, giving more room to the "important" conversations with a "dashboard"-style layout.
As of today, it is difficult to get a solid feel for how this intelligent processing of messages will work in practice, because there are so few supported services. It is certainly handy to access all of the assorted account inboxes in a single location, but the actual value of merging message sources increases the more sources there are to consider. Users who interact with the same contacts via Twitter, Skype, and email can test the combined-message-threading more rigorously, but for many users, additional services may have to be added to make the user experience diverge significantly from a traditional, single-source web application — perhaps even a service from outside the web itself, such as an SMS gateway.
The Raindrop team solicits input from outside users and developers through user and developer mailing lists, and has posted documentation on the front- and back-end architecture. In addition, the design team working on the user interface and user experience maintains a blog chronicling its work, and posts its design ideas and mock-ups to Flickr. Users and developers are encouraged to send feedback and ideas to both groups.
Raindrop Extender includes two extensions that the user can activate and begin using on his or her local Raindrop installation immediately. One parses each message for URLs and appends a list of the URLs found in each message to the message's preview bubble for easy access. The second performs a similar task for Flickr URLs, but rather than providing a link, it fetches and displays a thumbnail image of the file in the preview.
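The URL extension's job can be approximated in a few lines; this is an illustrative sketch in plain Python, not Raindrop's actual extension API (which operates on message documents stored in CouchDB):

```python
import re

# Match http/https links, stopping at whitespace and HTML delimiters.
URL_RE = re.compile(r'https?://[^\s<>"]+')

def extract_urls(body):
    """Return the list of http/https URLs found in a message body."""
    return URL_RE.findall(body)

def annotate_preview(message):
    """Attach any found URLs to the message's preview bubble --
    roughly what the URL-listing extension does conceptually."""
    urls = extract_urls(message.get("body", ""))
    if urls:
        message["preview_links"] = urls
    return message
```

A Flickr-specific variant of the same idea would additionally recognize flickr.com links and fetch a thumbnail rather than just listing the URL.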
Several early blog reactions to Raindrop compared it to Google's Wave, but beyond the aggregation of multiple content sources, the two projects have little in common. Wave centers around real-time and collaborative content editing, while Raindrop focuses on filtering messages in a user-centric way. Other projects have attempted to "rethink the inbox" over the years — much of the Getting Things Done (GTD) craze took aim at message processing, for example, and, although it has never attracted critical mass, a big part of the Chandler project's goal was to merge calendar, email, and to-do into a unified stream. A horde of Firefox and Thunderbird extensions exist to try and combine the multitude of single-site message streams into a single application.
Raindrop has certainly found a problem in need of a solution; even with open standards and open protocols, online communication today has splintered into more and more messaging services that are blindly ignorant of each other — consider how many ostensibly "VoIP services" also provide instant messaging functionality as if users were in need of another IM account. But above all else, the Raindrop design team seems to understand that to a user, an incoming message is important or unimportant based on who sent it and what it says — it does not matter from which site or protocol the message originated. Its "grand unified inbox" does not stop at un-splintering all of the incoming content, it actually tries to make useful sense out of it.
The current 0.1 milestone of Raindrop is clearly just the first drop in the bucket, but it deserves kudos for tackling this complicated issue — and for doing it in a completely open way. Clark and the other Raindrop developers at Mozilla Messaging are the team that developed Thunderbird; whether Raindrop's concepts remain limited to a web application, become integrated into Thunderbird and other stand-alone clients, or some combination of both remains to be seen. Wherever it goes, though, Raindrop will be an interesting experiment to watch.
Given this context, it makes sense that the 2009 Kernel Summit went to Tokyo. Japan (and the Linux Foundation) did a great job of hosting this high-profile event; some developers were heard to suggest that the summit should be held there every year. But one also should not overlook the significance of the first Japan Linux Symposium which followed the Summit. JLS 2009 is the beginning of what is intended to be an annual, world-class Linux gathering. Your editor's impression is that this event has gotten off to a good start.
The JLS program featured a long list of developers from Asia and beyond. Your editor will summarize a few of the talks here; others will be covered separately.
Arguably, one important prerequisite to the creation of a thriving development community is the existence of local rock-star programmers who can serve as an inspiration to others. Japan certainly has one of those in the form of Yukihiro Matsumoto, best known as the creator of the Ruby language. He is known in Japan as an inspirational speaker, though, your editor fears, some of that inspiration was lost as the simultaneous translators worked flat-out to keep up with his fast-paced talk. The audience was nonetheless clearly thrilled to have an opportunity to hear him speak.
His talk, held during the first-day keynote block, was aimed at a non-technical audience; it thus offered relatively little that would be new to LWN readers. "Matz" talked about the Unix philosophy and how it suits his way of working - "simplicity," "extensibility," and "programmability" were the keywords here. Open source was a good thing for him as well; it allowed him to play with (and learn from) a wide variety of software and set the stage for the development of Ruby. The posting of Ruby itself was a big surprise - he had bug reports and patches within hours of the creation of the mailing list. Without the open source community, Ruby would never have reached its current level of functionality or adoption.
Amusingly, Matsumoto-san noted that his objective at the outset was to create an object-oriented Perl. He did not know about Python at the time; had he stumbled across that language earlier, things might have gone much differently.
Security modules are among the most difficult types of code to merge into the kernel. Pathname-based access control techniques are a hard sell even by the standards of security code in general; one need only look at the fate of AppArmor to see how difficult it can be. So a first-time contributor who merges a security module using pathname-based techniques has accomplished something notable. That contributor is Toshiharu Harada, who saw TOMOYO Linux merged into 2.6.30, two years after its initial posting. Harada-san talked about his experience in a session at JLS.
Getting started with kernel development is hard, despite the existence of a lot of good documents on how to go about it. We still make mistakes. The biggest problems are simple human nature and the fact that we don't like reading documentation; these, he said, are difficult issues to patch. There is too much stuff under the kernel's documentation directory, and we would much rather go and code something than read. But there are things we should look at; he suggested HOWTO, SubmitChecklist, and CodingStyle. He also liked Linus's ManagementStyle document, which contains such un-Japanese advice as:
Linux kernel documentation, Harada-san noted, is tremendously practical.
His advice - derived from the many mistakes made in the process of getting TOMOYO Linux merged - was equally practical. Send patches, not just URLs. Stick to the coding style. Keep your patch series bisectable. Use existing data structures and APIs in the kernel. Be sure to send copies to the right people. Don't ask others to make changes for you - just make them. Try not to waste reviewers' time. And so on.
There are, he noted, lots of kernel developers who are willing to help those trying to figure out the system. Arguably the real lesson from the talk - never explicitly stated - was related to that: Harada-san was able to overcome obstacles and get his code into the kernel because he listened to the people who were trying to help him. If more developers would adopt that approach, we would have fewer failed attempts to engage with the development process.
Satoru Ueda is one of the strongest proponents of the use of - and contributions to - Linux within Sony. His efforts once led to a Sony vice-CEO asking him whether he was actually working for Panasonic, which seemed to be the beneficiary of his efforts. Ueda-san used his JLS talk to examine why Japanese developers often hesitate to work with the development community.
Is Japanese non-participation, he asked, a cultural problem? In part it might be. In general, he says, Japanese people tend to respond to strangers with fear, worrying about what unknown people might do to them. Westerners, instead, tend to be much more aware that strangers, while potentially dangerous, can also bring good things. That makes them more open to things like working in development communities.
That said, Japanese attitudes in general - and toward the open source community in particular - are changing. Japanese hesitation in this area is not really a cultural issue, set in stone; instead, getting past it is just a matter of adaptation.
Economics is also an important issue. Japanese executives are starting to see the economic advantages of open source software, and that is making them fairly excited about being a part of it. Mid-level managers are decidedly less enthusiastic; they fear that community participation could erode their power and influence within the company. They also see their company as stronger than the community, and feel a need to keep core development competence in-house. Developers, too, are hesitant. The high visibility afforded by community participation is relatively unhelpful in Japan, where labor mobility is quite low. They fear that managers may not understand what they do, they worry about working in an unfamiliar language, and they fear being flamed in public.
Again, things seem to be getting better. Labor mobility is on the rise in Japan, and some managers are beginning to figure things out. And there are a lot of open-source developers in Japan. So, in the end, Ueda-san is optimistic about the future of Japanese participation in the development community.
Looking at how the Japan Linux Symposium went, your editor would be inclined to agree with that optimism. The event was well attended by highly-engaged developers from Japan and beyond. Questions during the talks were subdued in the Japanese fashion, but the hallway discussions were lively. JLS mirrors a growing and enthusiastic development community. This event is off to a good start; if it can retain its success next year in the absence of the Kernel Summit, it may well become one of the definitive conferences worldwide.
Physical security is important. The "Evil Maid" attack serves as a reminder that briefly allowing a laptop out of your control, even with an encrypted hard disk, means that all security bets are off—the machine should be considered potentially compromised. Obviously different users have different levels of paranoia about their data security, but the Evil Maid attack shows just how simple it can be for others to access your data.
There is nothing particularly new in the proof-of-concept (PoC) attack against TrueCrypt disk encryption software, but the simplicity of the approach should give one pause. Joanna Rutkowska described the attack back in January, but the need for physical computer security goes back much further than that. Folks are less wary of physical attacks against laptops today because of whole-disk encryption. Rutkowska's PoC, along with last year's report on "cold boot" attacks, should make it clear that encryption — at least without some kind of Trusted Platform Module (TPM) support — is not a complete solution.
The basic idea behind Evil Maid is that someone gets access to a laptop for a fairly short period of time (a few minutes), and, in that time, boots it from a USB key. One obvious vector is a hotel maid (or someone acting as one), who enters someone's room while they are out to dinner, which is what gives the attack its name. The USB key contains a payload that hooks the TrueCrypt password prompting code and stores the last password entered. The payload gets added to the Master Boot Record (MBR) of the laptop so that it becomes active on the next boot.
While it has not been implemented in the PoC, there is no reason that the malware couldn't send the password off via the network; currently it just reports it back the next time the Evil Maid USB key is booted. That would require the attacker to access the laptop twice—with its user typing in the encryption key in between—but a multi-day hotel stay would give ample opportunity for that to occur.
As Bruce Schneier points out, this attack is in no way limited to TrueCrypt, as other solutions suffer from the same vulnerabilities. Both Schneier and Rutkowska look at some potential workarounds, but, in the final analysis, physical access allows an attacker too many ways around these security measures. Even Trusted Computing, with appropriate TPM hardware, can succumb to certain kinds of attacks.
Microsoft's BitLocker drive encryption uses the TPM, which provides reasonable assurance that the right code is being booted, but even that can fall prey to Evil Maid-style attacks, as Rutkowska describes:
Rutkowska also describes a "Poor Man's Solution" which calculates hashes of various unencrypted portions of the disk (especially the MBR). The Disk Hasher is a bootable Linux-based USB key that calculates and stores the hashes on the USB key, as well as verifying the correct hashes prior to booting. As she points out, it only protects against disk-based attacks—BIOS reflashing would subvert Disk Hasher.
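The hash-and-verify idea is simple enough to sketch; the following is an illustration of the approach, not Rutkowska's actual tool (which ships as a bootable Linux USB key):

```python
import hashlib

MBR_SIZE = 512  # the Master Boot Record occupies the first sector

def hash_region(path, offset=0, length=MBR_SIZE):
    """SHA-256 of a byte range of a disk device or image file."""
    with open(path, "rb") as disk:
        disk.seek(offset)
        return hashlib.sha256(disk.read(length)).hexdigest()

def verify(path, expected):
    """True if the region still matches the hash recorded when the
    machine was known-good; False suggests the MBR was tampered with."""
    return hash_region(path) == expected

# In practice one would run this from trusted media against /dev/sda
# (as root), keeping the recorded hashes on the USB key itself.
```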
Requiring a password in the BIOS before booting is another possible workaround, but one that may not provide as much security as it at first seems. BIOS reflashing is one possible attack, but an easier method — though more time-consuming than the "standard" Evil Maid attack — would be to remove the disk, attach it to another laptop, and install the necessary code there. That adds time and complexity to the attack, but the 5-15 minutes needed to swap out a laptop hard disk are not all that difficult to come by in the hotel scenario.
This PoC, along with other attacks against encrypted disks, is very useful to remind users that hard disk encryption is no panacea. You still must consider which kinds of threats you are trying to protect against. Disk encryption is great for preventing accidental disclosure of private information when someone steals a laptop, but is much less useful for an attack that is focused on accessing the data on a particular laptop. Much like internet security, fairly straightforward protection techniques are fine to thwart the random attacker but are probably insufficient for one who is focused on subverting your defenses in particular.
Brief items

Firefox 3.5.4 and 3.0.15 have been released. Each fixes some fairly serious-sounding security problems (3.5.4, 3.0.15), including multiple "critical" flaws. "We strongly recommend that all Firefox users upgrade to this latest release. If you already have Firefox 3.5 or Firefox 3, you will receive an automated update notification within 24 to 48 hours. This update can also be applied manually by selecting "Check for Updates..." from the Help menu." Distribution updates will presumably be available soon as well.
Package(s): acroread
CVE #(s): CVE-2007-0048 CVE-2009-2979 CVE-2009-2980 CVE-2009-2981 CVE-2009-2982 CVE-2009-2983 CVE-2009-2985 CVE-2009-2986 CVE-2009-2988 CVE-2009-2990 CVE-2009-2991 CVE-2009-2993 CVE-2009-2994 CVE-2009-2996 CVE-2009-2997 CVE-2009-2998 CVE-2009-3431 CVE-2009-3458 CVE-2009-3459 CVE-2009-3462
Created: October 26, 2009
Updated: October 28, 2009
From the CVE entries:
CVE-2007-0048: Adobe Acrobat Reader Plugin before 8.0.0, and possibly the plugin distributed with Adobe Reader 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2, when used with Internet Explorer, Google Chrome, or Opera, allows remote attackers to cause a denial of service (memory consumption) via a long sequence of # (hash) characters appended to a PDF URL, related to a "cross-site scripting issue."
CVE-2009-2979: Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 do not properly perform XMP-XML entity expansion, which allows remote attackers to cause a denial of service via a crafted document.
CVE-2009-2980: Integer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allows attackers to cause a denial of service or possibly execute arbitrary code via unspecified vectors.
CVE-2009-2981: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to bypass intended Trust Manager restrictions via unspecified vectors.
CVE-2009-2982: An unspecified certificate in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 might allow remote attackers to conduct a "social engineering attack" via unknown vectors.
CVE-2009-2983: Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors.
CVE-2009-2985: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2996.
CVE-2009-2986: Multiple heap-based buffer overflows in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.
CVE-2009-2988: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which allows attackers to cause a denial of service via unspecified vectors.
CVE-2009-2990: Array index error in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 might allow attackers to execute arbitrary code via unspecified vectors.
CVE-2009-2991: Unspecified vulnerability in the Mozilla plug-in in Adobe Reader and Acrobat 8.x before 8.1.7, and possibly 7.x before 7.1.4 and 9.x before 9.2, might allow remote attackers to execute arbitrary code via unknown vectors.
CVE-2009-2994: Buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.
CVE-2009-2996: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allow attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2985.
CVE-2009-2997: Heap-based buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 might allow attackers to execute arbitrary code via unspecified vectors.
CVE-2009-2998: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-3458.
CVE-2009-3431: Stack consumption vulnerability in Adobe Reader and Acrobat 9.1.3, 9.1.2, 9.1.1, and earlier 9.x versions; 8.1.6 and earlier 8.x versions; and possibly 7.1.4 and earlier 7.x versions allows remote attackers to cause a denial of service (application crash) via a PDF file with a large number of [ (open square bracket) characters in the argument to the alert method. NOTE: some of these details are obtained from third party information.
CVE-2009-3458: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 do not properly validate input, which might allow attackers to execute arbitrary code via unspecified vectors, a different vulnerability than CVE-2009-2998.
CVE-2009-3459: Heap-based buffer overflow in Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 allows remote attackers to execute arbitrary code via a crafted PDF file that triggers memory corruption, as exploited in the wild in October 2009. NOTE: some of these details are obtained from third party information.
CVE-2009-3462: Adobe Reader and Acrobat 7.x before 7.1.4, 8.x before 8.1.7, and 9.x before 9.2 on Unix, when Debug mode is enabled, allow attackers to execute arbitrary code via unspecified vectors, related to a "format bug."
Created: October 26, 2009
Updated: October 28, 2009
From the CVE entry:
CVE-2009-2992: An unspecified ActiveX control in Adobe Reader and Acrobat 9.x before 9.2, 8.x before 8.1.7, and possibly 7.x through 7.1.4 does not properly validate input, which allows attackers to cause a denial of service via unknown vectors.
|Package(s):||firefox seamonkey||CVE #(s):||CVE-2009-1563 CVE-2009-3274 CVE-2009-3370 CVE-2009-3372 CVE-2009-3373 CVE-2009-3374 CVE-2009-3375 CVE-2009-3376 CVE-2009-3380 CVE-2009-3382|
|Created:||October 28, 2009||Updated:||June 14, 2010|
|Description:||Firefox 3.5.4 and 3.0.15 have been released with fixes for the usual set of scary vulnerabilities.|
|Package(s):||kernel||CVE #(s):||CVE-2005-4881 CVE-2009-3228|
|Created:||October 22, 2009||Updated:||October 8, 2010|
|Description:||From the Red Hat alert:
multiple, missing initialization flaws were found in the Linux kernel. Padding data in several core network structures was not initialized properly before being sent to user-space. These flaws could lead to information leaks. (CVE-2005-4881, CVE-2009-3228, Moderate)
|Created:||October 22, 2009||Updated:||October 28, 2009|
|Description:||From the National Vulnerability Database
"Off-by-one error in the options_write function in drivers/misc/sgi-gru/gruprocfs.c in the SGI GRU driver in the Linux kernel 2.6.30.2 and earlier on ia64 and x86 platforms might allow local users to overwrite arbitrary memory locations and gain privileges via a crafted count argument, which triggers a stack-based buffer overflow. "
|Created:||October 22, 2009||Updated:||March 1, 2010|
|Description:||From the National Vulnerability Database
"The Linux kernel before 2.6.31-rc7 does not properly prevent mmap operations that target page zero and other low memory addresses, which allows local users to gain privileges by exploiting NULL pointer dereference vulnerabilities, related to (1) the default configuration of the allow_unconfined_mmap_low boolean in SELinux on Red Hat Enterprise Linux (RHEL) 5, (2) an error that causes allow_unconfined_mmap_low to be ignored in the unconfined_t domain, (3) lack of a requirement for the CAP_SYS_RAWIO capability for these mmap operations, and (4) interaction between the mmap_min_addr protection mechanism and certain application programs. "
|Created:||October 22, 2009||Updated:||February 15, 2010|
|Description:||From the National Vulnerability Database
"The get_random_int function in drivers/char/random.c in the Linux kernel before 2.6.30 produces insufficiently random numbers, which allows attackers to predict the return value, and possibly defeat protection mechanisms based on randomization, via vectors that leverage the function's tendency to "return the same value over and over again for long stretches of time.""
|Created:||October 22, 2009||Updated:||February 15, 2010|
|Description:||From the National Vulnerability Database
"NFSv4 in the Linux kernel 2.6.18, and possibly other versions, does not properly clean up an inode when an O_EXCL create fails, which causes files to be created with insecure settings such as setuid bits, and possibly allows local users to gain privileges, related to the execution of the do_open_permission function even when a create fails."
|Created:||October 22, 2009||Updated:||May 7, 2010|
|Description:||From the National Vulnerability Database
"The sg_build_indirect function in drivers/scsi/sg.c in Linux kernel 2.6.28-rc1 through 2.6.31-rc8 uses an incorrect variable when accessing an array, which allows local users to cause a denial of service (kernel OOPS and NULL pointer dereference), as demonstrated by using xcdroast to duplicate a CD. NOTE: this is only exploitable by users who can open the cdrom device."
|Created:||October 23, 2009||Updated:||December 22, 2009|
|Description:||From the Debian advisory: Alistair Strachan reported an issue in the r8169 driver. Remote users can cause a denial of service (IOMMU space exhaustion and system crash) by transmitting a large amount of jumbo frames.|
|Created:||October 27, 2009||Updated:||February 15, 2010|
|Description:||From the National Vulnerability Database
The tcf_fill_node function in net/sched/cls_api.c in the netlink subsystem in the Linux kernel 2.6.x before 2.6.32-rc5, and 2.4.37.6 and earlier, does not initialize a certain tcm__pad2 structure member, which might allow local users to obtain sensitive information from kernel memory via unspecified vectors. NOTE: this issue exists because of an incomplete fix for CVE-2005-4881.
|Created:||October 23, 2009||Updated:||October 28, 2009|
|Description:||From the Debian advisory: An integer overflow when processing HTTP requests can lead to a heap-based buffer overflow. An attacker can use this to execute arbitrary code either via crafted Content-Length values or a large HTTP request. This is partly because of an incomplete fix for CVE-2009-0840.|
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Debian alert:
Jasson Bell discovered that a remote attacker could cause a denial of service (segmentation fault) by sending a crafted request.
|Package(s):||phpMyAdmin||CVE #(s):||CVE-2009-3696 CVE-2009-3697|
|Created:||October 26, 2009||Updated:||October 28, 2009|
From the CVE entries:
CVE-2009-3696: Cross-site scripting (XSS) vulnerability in phpMyAdmin 2.11.x before 2.11.9.6 and 3.x before 3.2.2.1 allows remote attackers to inject arbitrary web script or HTML via a crafted name for a MySQL table.
CVE-2009-3697: SQL injection vulnerability in the PDF schema generator functionality in phpMyAdmin 2.11.x before 2.11.9.6 and 3.x before 3.2.2.1 allows remote attackers to execute arbitrary SQL commands via unspecified interface parameters.
|Created:||October 23, 2009||Updated:||March 5, 2010|
|Description:||From the Ubuntu advisory: It was discovered that poppler contained multiple security issues when parsing malformed PDF documents. If a user or automated system were tricked into opening a crafted PDF file, an attacker could cause a denial of service or execute arbitrary code with privileges of the user invoking the program.|
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Fedora alert:
|Created:||October 26, 2009||Updated:||June 15, 2011|
From the CVE entry:
A certain algorithm in Ruby on Rails 2.1.0 through 2.2.2, and 2.3.x before 2.3.4, leaks information about the complexity of message-digest signature verification in the cookie store, which might allow remote attackers to forge a digest via multiple attempts.
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Fedora alert:
The first issue would allow an attacker to touch/modify any file on the system. Essentially the issue is that get, post, and requests aren't sanitized or unescaped.
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Fedora alert:
The SLiM display manager includes the current directory in its default path, which opens users up to trojan attacks and other unexpected behavior. It should be removed from the default config.
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Fedora alert:
Multiple denial of service flaws were found in the SystemTap instrumentation system, when the --unprivileged mode was activated:
a, Kernel stack overflow allows local attackers to cause denial of service or execute arbitrary code via a large number of parameters provided to the print* call.
b, Kernel stack frame overflow allows local attackers to cause denial of service via specially-crafted user-provided DWARF information.
c, Absent check(s) for the upper bound of the size of the unwind table and for the upper bound of the size of each of the CIE/CFI records, could allow an attacker to cause a denial of service (infinite loop).
|Package(s):||viewvc||CVE #(s):||CVE-2009-3618 CVE-2009-3619|
|Created:||October 26, 2009||Updated:||October 28, 2009|
From the Tenable advisory:
Update of viewvc to version 1.0.9 fixes a cross-site scripting (XSS) problem and enhances filtering of illegal characters when displaying error messages (CVE-2009-3618, CVE-2009-3619).
|Created:||October 27, 2009||Updated:||October 28, 2009|
|Description:||From the Fedora alert:
A denial of service (resource exhaustion) flaw was found in the way WordPress handled HTTP headers contained in "trackback" messages sent to WordPress. A local, unprivileged user could send a specially-crafted trackback message to a running instance of WordPress, leading to its crash.
Page editor: Jake Edge
The current stable kernel is 2.6.31.5, released (along with 2.6.27.38) on October 22. The 2.6.27 update is relatively small and focused on SCSI and USB serial devices; the 2.6.31 update, instead, addresses a much wider range of problems.
See also: Len Brown's photos from the kernel summit and JLS.
Following up on some conference discussions, Greg Kroah-Hartman decided to regularize the tracing file hierarchy through the creation of a new tracefs virtual filesystem. Tracefs looks a lot like .../debug/tracing in that the files have simply been moved from one location to the other. Tracefs has a simpler internal API, though, since it does not require all of the features supported by debugfs.
The idea of tracefs is universally supported, but this particular patch looks like it will not be going in anytime soon. The concern is that anything moved out of debugfs and into something more stable will instantly become part of the kernel ABI. Much of the current tracing interface has been thrown together to meet immediate needs; the sort of longer-term thinking which is needed to define an interface which can remain stable for years is just beginning to happen.
Ingo Molnar thinks that the virtual files which describe the available events could be exported now, but not much else. That still leaves most of the interface in an unstable state. So Greg has withdrawn the patch for now; expect it to come back when the tracing developers are more ready to commit to their ABI. At that point, we can expect the debate to begin on the truly important question: /tracing or /sys/kernel/tracing?
Since then, John Linville has decided to test the system with a series of ancient wireless drivers. These include the "strip" driver ("STRIP is a radio protocol developed for the MosquitoNet project - to send Internet traffic using Metricom radios."), along with the arlan, netwave, and wavelan drivers. Nobody seems to care about this code, and it is unlikely that any users remain. If that is true, then there should be no down side to removing the code.
That hasn't stopped the complaints, though, mostly from people who believe that staging drivers out of the tree is an abuse of the process which may hurt unsuspecting users. It is true that users may have a hard time noticing this change until the drivers are actually gone - though their distributors may drop them before the mainline does. So the potential for an unpleasant surprise is there; mistaken removals are easily reverted, but that is only partially comforting for a user whose system has just broken.
The problem here is that there is no other way to get old code out of the tree. Once upon a time, API changes would cause unmaintained code to fail to compile; after an extended period of brokenness, a driver could be safely removed. Contemporary mores require developers to fix all in-tree users of an API they change, though, so this particular indicator no longer exists. That means the tree can fill up with code which is unused and which has long since ceased to work, but which still compiles flawlessly. Somehow a way needs to be found to remove that code. The "staging out" process may not be perfect, but nobody has posted a better idea yet.
In a discussion of the O_NODE open flag patch, an interesting, though obscure, security hole came to light. Jamie Lokier noticed the problem, and Pavel Machek eventually posted it to the Bugtraq security mailing list.
Normally, one would expect that a file in a directory with 700 permissions would be inaccessible to all but the owner of the directory (and root, of course). Lokier and Machek showed that there is a way around that restriction by using an entry in an attacking process's fd directory in the /proc filesystem.
If the directory is open to the attacker at some time, while the file is present, the attacker can open the file for reading and hold it open even if the victim changes the directory permissions. Any normal write to the open file descriptor will fail because it was opened read-only, but writing to /proc/$$/fd/N, where N is the open file descriptor number, will succeed based on the permissions of the file. If the file allows the attacking process to write to it, writing to the /proc file will succeed regardless of the permissions of the parent directory. This is rather counter-intuitive, and, even though it is a rather contrived example, seems to constitute a security hole.
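The reopen-for-write mechanism can be demonstrated in a single process on Linux. This sketch shows only the core trick (a read-only descriptor reopened writable through /proc/self/fd), not the full cross-user attack with a directory permission change, since that requires two users:

```python
# Demonstrate reopening a read-only descriptor for writing via /proc.
# Linux-only: the reopen is checked against the *file's* permissions,
# not the mode of the original open or the parent directory.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "victim")
with open(path, "w") as f:
    f.write("original")

fd = os.open(path, os.O_RDONLY)      # a read-only file descriptor

try:
    os.write(fd, b"x")               # fails: the descriptor is read-only
except OSError:
    pass

# Reopening through /proc succeeds as long as the file's own mode
# permits it - even if the parent directory were locked down by now.
with open(f"/proc/self/fd/{fd}", "w") as g:
    g.write("overwritten")

os.close(fd)
print(open(path).read())             # overwritten
```

The key point matches the description above: the /proc path acts like a fresh open of the underlying file, bypassing any directory-permission changes made after the original open.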
The Bugtraq thread got off course quickly by noting that a similar effect could be achieved by creating a hardlink to the file before the directory permissions were changed. While that is true, Machek's example looked for that case by checking the link count on the file after the directory permissions had been changed. The hardlink scenario would be detected at that point.
One can imagine situations where programs do not put the right permissions on the files they use and administrators attempt to work around that problem by restricting access to the parent directory. Using this technique, an attacker could still access those files, in a way that was difficult to detect. As Machek noted, unmounting the /proc filesystem removes the problem, but "I do not think mounting /proc should change access control semantics."
There is currently some discussion of how, and to some extent whether, to address the problem, but a consensus (and patch) has not yet emerged.
Kernel development news
Part of the problem is that 10G Ethernet is still Ethernet underneath. There is value in that; it minimizes the changes required in other parts of the system. But it's an old technology which brings some heavy baggage with it, with the heaviest bag of all being the 1500-byte maximum transfer unit (MTU) limit. With packet size capped at 1500 bytes, a 10G network link running at full speed will be transferring over 800,000 packets per second. Again, that's an increase of three orders of magnitude from the 10Mb days, but CPUs have not kept pace. So the amount of CPU time available to process a single Ethernet packet is less than it was in the early days. Needless to say, that is putting some pressure on the networking subsystem; the amount of CPU time required to process each packet must be squeezed wherever possible.
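The 800,000 packets-per-second figure is simple arithmetic on the link rate and the MTU, and the same numbers give the per-packet time budget (framing overhead is ignored here, so these are ballpark values):

```python
# Per-packet budget on a saturated 10G link with 1500-byte frames.
link_bps = 10 * 10**9            # 10 Gb/s
mtu_bytes = 1500

packets_per_second = link_bps / (mtu_bytes * 8)
ns_per_packet = 1e9 / packets_per_second

print(f"{packets_per_second:,.0f} packets/s")       # over 800,000
print(f"{ns_per_packet:.0f} ns to handle each one")  # about 1.2 microseconds
```

Roughly 1,200 nanoseconds per packet is not much time for a full trip through the network stack, which is why per-packet overhead dominates the discussion.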
(Some may quibble that, while individual CPU speeds have not kept pace, the number of cores has grown to make up the difference. That is true, but the focus of Herbert's talk was single-CPU performance for a couple of reasons: any performance work must benefit uniprocessor systems, and distributing a single adapter's work across multiple CPUs has its own challenges.)
Given the importance of per-packet overhead, one might well ask whether it makes sense to raise the MTU. That can be done; the "jumbo frames" mechanism can handle packets up to 9KB in size. The problem, according to Herbert, is that "the Internet happened." Most connections of interest go across the Internet, and those are all bound by the lowest MTU in the entire path. Sometimes that MTU is even less than 1500 bytes. Protocol-based mechanisms for discovering the path MTU exist, but they don't work well on the Internet; in particular, a lot of firewall setups break them. So, while jumbo frames might work well for local networks, the sad fact is that we're stuck with 1500 bytes on the wider Internet.
If we can't use a larger MTU, we can go for the next-best thing: pretend that we're using a larger MTU. For a few years now Linux has supported network adapters which perform "TCP segmentation offload," or TSO. With a TSO-capable adapter, the kernel can prepare much larger packets (64KB, say) for outgoing data; the adapter will then re-segment the data into smaller packets as the data hits the wire. That cuts the kernel's per-packet overhead by a factor of 40. TSO is well supported in Linux; for systems which are engaged mainly in the sending of data, it's sufficient to make 10G work at full speed.
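The "factor of 40" comes straight from the ratio of the TSO chunk size to the wire MTU:

```python
tso_chunk = 64 * 1024   # what the kernel hands to a TSO-capable adapter
mtu = 1500              # what actually goes on the wire

factor = tso_chunk // mtu
print(f"one kernel packet becomes {factor} wire packets")
```

So each packet the kernel processes turns into roughly 43 packets on the wire, and the kernel's per-packet costs are paid about 40 times less often.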
The kernel actually has a generic segmentation offload mechanism (called GSO) which is not limited to TCP. It turns out that performance improves even if the feature is emulated in the driver. But GSO only works for data transmission, not reception. That limitation is entirely fine for broad classes of users; sites providing content to the net, for example, send far more data than they receive. But other sites have different workloads, and, for them, packet reception overhead is just as important as transmission overhead.
Solutions on the receive side have been a little slower in coming, and not just because the first users were more interested in transmission performance. Optimizing the receive side is harder because packet reception is, in general, harder. When it is transmitting data, the kernel is in complete control and able to throttle sending processes if necessary. But incoming packets are entirely asynchronous events, under somebody else's control, and the kernel just has to cope with what it gets.
Still, a solution has emerged in the form of "large receive offload" (LRO), which takes a very similar approach: incoming packets are merged at reception time so that the operating system sees far fewer of them. This merging can be done either in the driver or in the hardware; even LRO emulation in the driver has performance benefits. LRO is widely supported by 10G drivers under Linux.
But LRO is a bit of a flawed solution, according to Herbert; the real problem is that it "merges everything in sight." This transformation is lossy; if there are important differences between the headers in incoming packets, those differences will be lost. And that breaks things. If a system is serving as a router, it really should not be changing the headers on packets as they pass through. LRO can totally break satellite-based connections, where some very strange header tricks are done by providers to make the whole thing work. And bridging breaks, which is a serious problem: most virtualization setups use a virtual network bridge between the host and its clients. One might simply avoid using LRO in such situations, but these also tend to be the workloads that one really wants to optimize. Virtualized networking, in particular, is already slower; any possible optimization in this area is much needed.
The solution is generic receive offload (GRO). In GRO, the criteria for which packets can be merged are greatly restricted; the MAC headers must be identical and only a few TCP or IP headers can differ. In fact, the set of headers which can differ is severely restricted: checksums are necessarily different, and the IP ID field is allowed to increment. Even the TCP timestamps must be identical, which is less of a restriction than it may seem; the timestamp is a relatively low-resolution field, so it's not uncommon for lots of packets to have the same timestamp. As a result of these restrictions, merged packets can be resegmented losslessly; as an added benefit, the GSO code can be used to perform resegmentation.
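As a rough illustration of those merge criteria (the field names here are invented for clarity, not the kernel's actual data structures, and the IP ID rule is modeled as increment-by-one), the test amounts to comparing headers and permitting only a tiny set of differences:

```python
# Illustrative sketch of GRO's merge-eligibility test. Checksums are
# recomputed after merging, so they are never compared.

def can_merge(prev, pkt):
    # MAC headers must be byte-for-byte identical.
    if prev["mac"] != pkt["mac"]:
        return False
    # TCP timestamps must match as well.
    if prev["tcp_timestamp"] != pkt["tcp_timestamp"]:
        return False
    # The IP ID may only advance - modeled here as exactly +1.
    return pkt["ip_id"] == prev["ip_id"] + 1

a = {"mac": "aa:bb", "tcp_timestamp": 100, "ip_id": 7}
b = {"mac": "aa:bb", "tcp_timestamp": 100, "ip_id": 8}
c = {"mac": "aa:bb", "tcp_timestamp": 101, "ip_id": 9}
print(can_merge(a, b))   # True  - identical headers, ID advanced by one
print(can_merge(b, c))   # False - the timestamp changed
```

Because everything that differed can be regenerated (checksums) or reconstructed (IP IDs), splitting a merged packet back apart loses no information.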
One other nice thing about GRO is that, unlike LRO, it is not limited to TCP/IPv4.
The GRO code was merged for 2.6.29, and it is supported by a number of 10G drivers. The conversion of drivers to GRO is quite simple. The biggest problem, perhaps, is with new drivers which are written to use the LRO API instead. To head this off, the LRO API may eventually be removed, once the networking developers are convinced that GRO is fully functional with no remaining performance regressions.
In response to questions, Herbert said that there has not been a lot of effort toward using LRO in 1G drivers. In general, current CPUs can keep up with a 1G data stream without too much trouble. There might be a benefit, though, in embedded systems which typically have slower processors. How does the kernel decide how long to wait for incoming packets before merging them? It turns out that there is no real need for any special waiting code: the NAPI API already has the driver polling for new packets occasionally and processing them in batches. GRO can simply be performed at NAPI poll time.
The next step may be toward "generic flow-based merging"; it may also be possible to start merging unrelated packets headed to the same destination to make larger routing units. UDP merging is on the list of things to do. There may even be a benefit in merging TCP ACK packets. Those packets are small, but there are a lot of them - typically one for every two data packets going the other direction. This technology may go in surprising directions, but one thing is clear: the networking developers are not short of ideas for enabling Linux to keep up with ever-faster hardware.
The Btrfs filesystem was merged for the 2.6.29 kernel, mostly as a way to encourage wider testing and development. It is certainly not meant for production use at this time. That said, there are people doing serious work on top of Btrfs; it is getting to where it is stable enough for daring users. Current Btrfs includes an all-caps warning in the Kconfig file stating that the disk format has not yet been stabilized; Chris is planning to remove that warning, perhaps for the 2.6.33 release. Btrfs, in other words, is progressing quickly.
One relatively recent addition is full use of zlib compression. Online resizing and defragmentation are coming along nicely. There has also been some work aimed at making synchronous I/O operations work well.
Defragmentation in Btrfs is easy: any specific file can be defragmented by simply reading it and writing it back. Since Btrfs is a copy-on-write filesystem, this rewrite will create a new copy of the file's data which will be as contiguous as the filesystem is able to make it. This approach can also be used to control the layout of files on the filesystem. As an experiment, Chris took a bunch of boot-tracing data from a Moblin system and analyzed it to figure out which files were accessed, and in which order. He then rewrote the files in question to put them all in the same part of the disk. The result was a halving of the I/O time during boot, resulting in a faster system initialization and smiles all around.
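The read-and-rewrite trick can be sketched in a few lines. This is an illustration of the idea, not Btrfs's own tooling, and note that the rename step replaces the inode (losing hard links and non-default permissions), so it is a sketch rather than production code:

```python
# Defragmentation-by-rewrite on a copy-on-write filesystem: reading a
# file and writing it back makes the filesystem allocate a fresh (and,
# one hopes, contiguous) copy of the data.
import os
import tempfile

def rewrite_in_place(path):
    with open(path, "rb") as f:
        data = f.read()
    tmp = path + ".defrag"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # new copy safely on disk first...
    os.rename(tmp, path)       # ...then atomically replace the old one

demo = os.path.join(tempfile.mkdtemp(), "data")
with open(demo, "wb") as f:
    f.write(b"some file contents")
rewrite_in_place(demo)
print(open(demo, "rb").read())   # contents unchanged, layout refreshed
```

On an overwrite-in-place filesystem this accomplishes little, but on a copy-on-write filesystem the new extents land wherever the allocator chooses, which is exactly the lever Chris used to group the Moblin boot files together.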
Performance of synchronous operations has been an important issue over the last year. On filesystems like ext3, an fsync() call will flush out a lot of data which is not related to the actual file involved; that adds a significant performance penalty for fsync() use and discourages careful programming. Btrfs has improved the situation by creating an entirely separate Btree on each filesystem which is used for synchronous I/O operations. That tree is managed identically to, but separately from, the regular filesystem tree. When an fsync() call comes along, Btrfs can use this tree to only force out operations for the specific file involved. That gives a major performance win over ext3 and ext4.
A further improvement would be the ability to write a set of files, then flush them all out in a single operation. Btrfs could do that, but there's no way in POSIX to tell the kernel to flush multiple files at once. Fixing that is likely to involve a new system call.
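Until such a system call exists, an application that wants a group of files on stable storage has to flush them one at a time, paying a separate round trip for each; a minimal sketch:

```python
# With no "flush these N files at once" primitive in POSIX, each file
# pays for its own fsync() trip to stable storage.
import os
import tempfile

def flush_all(paths):
    flushed = 0
    for p in paths:
        fd = os.open(p, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
        flushed += 1
    return flushed

d = tempfile.mkdtemp()
files = []
for i in range(3):
    p = os.path.join(d, f"f{i}")
    with open(p, "w") as f:
        f.write("data")
    files.append(p)
print(flush_all(files), "files flushed")
```

A batched flush call would let the filesystem commit all of the dirty data in one transaction instead of three.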
Btrfs provides a number of features which are also available via the device mapper and MD subsystems; some people have wondered if this duplication of features makes sense. But there are some good reasons for it; Chris gave a couple of examples:
So what does the future hold? Chris says that the 2.6.32 kernel will include a version of Btrfs which is stable enough for early adopters to play with. In 2.6.33, with any luck, the filesystem will have RAID4 and RAID5 support. Things will then stabilize further for 2.6.34. Chris was typically cagey when talking about production use, though, pointing out that it always takes a number of years to develop complete confidence in a new filesystem. So, while those of us with curiosity, courage, and good backups could maybe be making regular use of Btrfs within a year, widespread adoption is likely to be rather farther away than that.
Most current processors can work with pages larger than 4KB. There are advantages to using larger pages: the size of page tables decreases, as does the number of page faults required to get an application into RAM. There is also a significant performance advantage that derives from the fact that large pages require fewer translation lookaside buffer (TLB) slots. These slots are a highly contended resource on most systems; reducing TLB misses can improve performance considerably for a number of large-memory workloads.
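A quick calculation shows why the TLB pressure drops so sharply with page size: covering the same region takes orders of magnitude fewer mappings at larger granularity (the higher levels of the page-table hierarchy are ignored here):

```python
# Mappings (page-table entries, and TLB slots to cover them all)
# needed to map a 1GB region at various page sizes.
region = 1 << 30   # 1GB

entries = {name: region // size
           for name, size in [("4KB", 1 << 12),
                              ("2MB", 1 << 21),
                              ("1GB", 1 << 30)]}
for name, n in entries.items():
    print(f"{name:>3} pages: {n:>7} mappings")
```

Going from 4KB to 2MB pages cuts the number of mappings by a factor of 512, which is the source of the TLB benefit described above.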
There are also disadvantages to using larger pages. The amount of wasted memory will increase as a result of internal fragmentation; extra data dragged around with sparsely-accessed memory can also be costly. Larger pages take longer to transfer from secondary storage, increasing page fault latency (while decreasing page fault counts). The time required to simply clear very large pages can create significant kernel latencies. For all of these reasons, operating systems have generally stuck to smaller pages. Besides, having a single, small page size simply works and has the benefit of many years of experience.
There are exceptions, though. The mapping of kernel virtual memory is done with huge pages. And, for user space, there is "hugetlbfs," which can be used to create and use large pages for anonymous data. Hugetlbfs was added to satisfy an immediate need felt by large database management systems, which use large memory arrays. It is narrowly aimed at a small number of use cases, and comes with significant limitations: huge pages must be reserved ahead of time, cannot transparently fall back to smaller pages, are locked into memory, and must be set up via a special API. That worked well as long as the only user was a certain proprietary database manager. But there is increasing interest in using large pages elsewhere; virtualization, in particular, seems to be creating a new set of demands for this feature.
A host setting up memory ranges for virtualized guests would like to be able to use large pages for that purpose. But if large pages are not available, the system should simply fall back to using lots of smaller pages. It should be possible to swap large pages when needed. And the virtualized guest should not need to know anything about the use of large pages by the host. In other words, it would be nice if the Linux memory management code handled large pages just like normal pages. But that is not how things happen now; hugetlbfs is, for all practical purposes, a separate, parallel memory management subsystem.
Andrea Arcangeli has posted a transparent hugepage patch which attempts to remedy this situation by removing the disconnect between large pages and the regular Linux virtual memory subsystem. His goals are fairly ambitious: he would like an application to be able to request large pages with a simple madvise() system call. If large pages are available, the system will provide them to the application in response to page faults; if not, smaller pages will be used.
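On the application side, the request is just an ordinary madvise() call. The sketch below uses the MADV_HUGEPAGE hint as it eventually appeared in mainline kernels (Python exposes it from version 3.8, on Linux only); treat it as illustrative of the interface rather than of the patch as originally posted:

```python
# Hint that an anonymous mapping should be backed by transparent
# hugepages. MADV_HUGEPAGE is Linux-specific, and madvise() can fail
# when the kernel lacks THP support, so both cases are guarded.
import mmap

length = 4 * 1024 * 1024
buf = mmap.mmap(-1, length)   # anonymous, private mapping

if hasattr(mmap, "MADV_HUGEPAGE"):
    try:
        buf.madvise(mmap.MADV_HUGEPAGE)   # ask for 2MB pages
    except OSError:
        pass    # THP unavailable: fall back silently to 4KB pages

buf[:5] = b"hello"   # the mapping is usable either way
```

The fallback path is the whole point of the design: the application asks once, and the kernel supplies large pages if it can and small pages if it cannot, with no change in behavior visible to the program.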
Beyond that, the patch makes large pages swappable. That is not as easy as it sounds; the swap subsystem is not currently able to deal with memory in anything other than PAGE_SIZE units. So swapping out a large page requires splitting it into its component parts first. This feature works, but not everybody agrees that it's worthwhile. Christoph Lameter commented that workloads which are performance-sensitive go out of their way to avoid swapping anyway, but that may become less true on a host filling up with virtualized guests.
A future feature is transparent reassembly of large pages. If such a page has been split (or simply could not be allocated in the first place), the application will have a number of smaller pages scattered in memory. Should a large page become available, it would be nice if the memory management code would notice and migrate those small pages into one large page. This could, potentially, even happen for applications which have never requested large pages at all; the kernel would just provide them by default whenever it seemed to make sense. That would make large pages truly transparent and, perhaps, decrease system memory fragmentation at the same time.
This is an ambitious patch to the core of the Linux kernel, so it is perhaps amusing that the chief complaint seems to be that it does not go far enough. Modern x86 processors can support a number of page sizes, up to a massive 1GB. Andrea's patch is currently aiming for the use of 2MB pages, though - quite a bit smaller. The reasoning is simple: 1GB pages are an unwieldy unit of memory to work with. No Linux system that has been running for any period of time will have that much contiguous memory lying around, and the latency involved with operations like clearing pages would be severe. But Andi Kleen thinks this approach is short-sighted; today's massive chunk of memory is tomorrow's brief email. Andi would rather that the system not be designed around today's limitations; for the moment, no agreement has been reached on that point.
In any case, this patch is an early RFC; it's not headed toward the mainline in the near future. It's clearly something that Linux needs, though; making full use of the processor's capabilities requires treating large pages as first-class memory-management objects. Eventually we should all be using large pages - though we may not know it.
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Page editor: Jonathan Corbet
News and Editorials
We briefly looked in on the discussion on defining the Fedora project a few weeks back. Since that time, there has been more discussion—not surprising—but also a bit more clarity on exactly what needs to be defined. While it may seem like an unnecessary, abstract exercise to some, it is clear from the discussion that there are some in the community who are directly impacted by the lack of a good shared vision of "what is Fedora?", or, perhaps more accurately: "who are Fedora's target users?".
There are a number of issues that are swirling around in the threads on the fedora-advisory-board mailing list. In general, there is dissatisfaction among users of Fedora, even highly technical users, because of the rapid, often not very exhaustively tested upgrades that are part-and-parcel of the Fedora experience. Fedora has a commitment to providing "leading edge" software to its users, but, to many users, leading edge does not equate to non-functional or hard-to-use. Unfortunately, that is what Fedora is delivering too much of the time.
As an example of technical users who have moved away from Fedora, Máirín Duffy quotes a user who contacted her off-list. The user has multiple clients, most of whom are quite technical as well, but have moved from Fedora to other distributions over the last two years or so. Upgrade instability is a major reason:
"Fedora boasts of an "innovation" target audience but is falling down in the two areas real world (excepting perhaps games and CGI) high-innovation users demand: stable upgrades and consistent usability. I believe if your group can wrestle these back under control the distro numbers would increase dramatically."
In summary, having technical users as a target isn't a good excuse for instability and complexity.
But, there is a tension between the goal of providing the "latest and greatest" and the goal of providing something that is consistently usable. Seth Vidal sums it up this way: "And this is the crux of our problem: fedora is for latest leading-edge pkgs. It's not easy or reasonable to have the latest of things AND have a stable interface for them." The sense from the discussion, though, is that Fedora may have gone too far in the "bleeding edge" direction and that being a bit more cautious with which software versions are delivered is warranted. Bill Nottingham sees the need for a balance:
Mike McGrath brought up a subject that was clearly an undercurrent in the discussion, which he described as "the elephant in the room": Ubuntu. There is a sense that Fedora users, and potential users, are moving to, or starting out with, Ubuntu. There are good reasons for that, he said:
Targeting new users is quite different from targeting new technology, though. There is a real question as to whether Fedora can do both. There are lessons to be learned from Ubuntu, however, as William Jon McCann points out:
Duffy finds something of a middle ground:
There is a fairly clear split in the Fedora community about where to focus the project's efforts. Some would like to see Fedora make the effort to stabilize to the point where attracting new, non-technical users would be possible; others see that as largely incompatible with the "innovation" that has been the hallmark of the distribution.
That split makes life difficult when folks try to determine a direction to take or how to prioritize their work. Duffy, who does much of the design work for Fedora, describes the split and its effect on her work:
- Fedora is a beautiful, usable desktop for everyone (or at least, we're getting there.) Pandas are okay! We're ready to push to the masses.
- Fedora is a menagerie of equal spins for highly-technical folks and FOSS developers. Don't you dare insult our intelligence with pandas. Go back to Sesame street.
[...] The main issue from a design perspective is that if no target is defined, then the target becomes 'everybody' - and I personally feel it's impossible to make a top-notch, beautiful design when trying to please everybody.
Even determining the target user doesn't solve the underlying problems with stability, though, as Christopher Aillon points out:
The discussion, and the perceived need for a more stable system, led McGrath to make a "Desktop proposal". In it, he outlines the problems along with some potential solutions. As part of that, he would like to see a new mission added to the "Fedora Mission": "Produce a usable, general purpose desktop operating system".
Putting "desktop", or even "operating system", into the mission didn't sit well with some, but the ideas in McGrath's proposal were largely met with approval. In many ways, he captured some of the thoughts that had been floating around in the threads. One problem that McGrath mentioned might be helped by Jesse Keating's idea for "No Frozen Rawhide" (as it has come to be called):
The Fedora board took up the question of defining target users for Fedora in its October 22 meeting. Project leader Paul Frields reported on the meeting at some length, noting that the No Frozen Rawhide (or "unfrozen rawhide") proposal was looked at favorably. There was also discussion of how to ensure that updates are smoother for users. But the main point that came out of the meeting was a preliminary definition of Fedora's target users:
Much of what the board discussed will also be hashed out face-to-face at the Fedora Users and Developers Conference (FUDCon) in Toronto in early December.
The Fedora project is at a bit of a crossroads right now, but the project seems to be taking the right steps to determine which direction to take. Unlike other distributions, Fedora tends to have these conversations in public, which allows others to observe and learn from the process. While that may make some uncomfortable, it should make for a healthier community overall. In the end, community is really what Fedora is striving for, and an OS is just a means to that end.
New Releases
The Ubuntu 9.10 release candidate has been announced: "The Ubuntu team is pleased to announce the Release Candidate for Ubuntu 9.10 Desktop and Server editions, Ubuntu 9.10 Server for UEC and EC2, and the Ubuntu Netbook Remix. Codenamed "Karmic Koala", 9.10 continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. We consider this release candidate to be complete, stable, and suitable for testing by any user."
Fedora
The Fedora board's preliminary definition of the target user reads: "Someone who (1) is voluntarily switching to Linux, (2) is familiar with computers, but is not necessarily a hacker or developer, (3) is likely to collaborate in some fashion when something's wrong with Fedora, and (4) wants to use Fedora for general productivity, either using desktop applications or a Web browser." The plan is to use this definition to focus efforts while, hopefully, not restricting developments which have appeal beyond this audience.
Phoronix reports on open source ATI R600/700 3D support under Fedora 12. Using an experimental version of the Mesa drivers, which is available in the F12 repositories, Phoronix tried Compiz as well as several 3D games, reporting on stability and rendering along with some screenshots. "First off, we would like to note that the ATI kernel mode-setting support by default in Fedora 12 has been working quite well from our testing. Even when using a dual-link DVI monitor running at 2560 x 1600, KMS has worked and properly mode-set to the right resolution. With a variety of hardware and different monitors, it has all worked quite well from this beta installation. When installing the mesa-dri-drivers-experimental package, upon rebooting we were able to immediately enable Compiz support without any problems. Compiz was running well with no visual defects and the performance was suitable for the Linux desktop."
SUSE Linux and openSUSE
The openSUSE board election rules have changed: "This means that as of this year's election the openSUSE Board will be made up of equal numbers of Novell and non-Novell employees, 2 seats+Chairperson and 3 seats respectively. Candidates for this election will be voted in for a two (2) year term, ensuring that there is continuity within the Board."
openSUSE is also changing how maintenance updates are handled: "This team will decide over the requests and coordinates the whole updates progress (plan the release time according to the severity, interact with the package maintainer, coordinate QA testing, ...) based on a new update policy. It guarantees the best supply with updates. [...] Only maintenance (tagged as recommended, optional, YOU) updates are affected by this change. Security updates will be provided on the old and approved way by the SUSE security team. This is the fastest and established way to react on security problems."
Ubuntu family
There has been discussion of Scott Moser's draft proposal for providing updated EC2 kernel (AKI), ramdisk (ARI), and filesystem (AMI) images on a regular basis throughout the cycle. "It was agreed that an update to the kernel requires all three images to be updated, and that an update to either the ramdisk or filesystem needs those two images to be updated."
Distribution Newsletters
A new issue of CentOS Pulse is available. It covers the release of CentOS 5.4, a Linux hacker diary, an interview with CentOS developer Tim Verhoeven, a review of the cPanel conference, and more.
DistroWatch Weekly for October 26, 2009 is out. "Ah, the excitement of an Ubuntu release! Yes, "Karmic Koala", the distribution's 11th official version will hit the undoubtedly crowded download servers later this week amid the excitement of those who enjoy the popular operating system -- and also to the annoyance of some of the more vocal anti-Ubuntu crowds on Linux blogs and forums. But Ubuntu is not the only Linux distribution that gets attention in this week's DistroWatch Weekly. Our lead article is a review of GNOME SlackBuild for Slackware Linux, a third-party effort to provide quality GNOME packages for the oldest surviving Linux distro. In the news section, Mandriva finally updates the artwork in preparation for the upcoming stable release, openSUSE brings a number of interesting features to challenge the competition, and Funtoo hints at a possible new life as a "fork" of Gentoo Linux. Also not to be missed, an amusing and frightening analysis of a web site that charges US$125 to download Mozilla Firefox. Finally, check out the new section of DistroWatch Weekly where Jesse Smith attempts to answer some of the questions that our readers regularly post in the comments section. Happy reading!"
Fedora Weekly News #199 is out: "Our issue kicks off this week with news from the Fedora Planet community of Fedora developers and users, including thoughts on PHP security, a new tool, rpmguard, continued work on libguestfs, and a great Fedora 12 beta roundup. From Ambassadors we have an event report on ABLEConf in Phoenix, Arizona. Much goodness from the Quality Assurance beat, with updates on this past week's two Test days, detailed weekly meetings notes, and various Fedora 12 beta-related activities. In news from Fedora's Translation team, updates on milestone for Fedora 12 translation tasks, new contributors of a couple Fedora Localization Project language teams, and details on the next FLSCo election. In Art/Design news, some icon emblem work, Fedora 12 final wallpaper polish, and details on post-beta F12 desktop look changes. Security Advisories brings us up to date on a couple security releases for Fedora 10 and 11. Our issue rounds out with the always-interesting Virtualization beat, with discussion on paravirtualization and KVMs in Fedora, installing Virtio drivers in Windows XP, and details on Fedora 12's kernel samepage merging (KSM) feature. We hope you enjoy FWN 199!"
The latest issue of openSUSE Weekly News looks at Network World podcasts with Joe "Zonker" Brockmeier, an update from the openSUSE Boosters, wrong usage of LD_LIBRARY_PATH, a Kernel Log on what's coming in 2.6.32, and more.
The Ubuntu Weekly Newsletter is also out: "In this issue we cover: Release Candidate for Ubuntu 9.10 now available, October 21st America's Membership Board Meeting, Ubuntu IRC Council Elections, Keeping Ubuntu CD's Available, LoCo News, Launchpad: The next six months, Meet Matthew Revell, Launchpad offline 4:00UTC - 4:30UTC October 26th, The Planet, TurnKey: 40 Ubuntu-based virtual appliances released into the cloud, and much, much more!"
Distribution reviews
A review of the Fedora 12 beta looks specifically at virtualization features and PackageKit, but also makes mention of power management, a SystemTap-based tool called "scomes", and Moblin: "A special Moblin spin will be introduced with Fedora 12. This will allow users to install a complete Fedora installation with Intel's custom Moblin user experience. Upstream Moblin is already based on Fedora, so there is a lot of synergy between the two projects. The Fedora 12 Moblin spin isn't available yet, but users who want to get an early look can optionally install the Moblin environment in the desktop version of the Fedora 12 beta."
Another article looks at the upcoming openSUSE 11.2 release. It focuses on the KDE 4.3 experience in the release, and interviews KDE hacker Lubos Lunak. "There were attempts at making Qt ports of Firefox in the past, but as far as I know there has never been one that would be really usable (and with the advances of WebKit and the fact that it's shipping with Qt I don't see that happening in the future). The reason for why we could achieve something in a few days that has been missing for years is down to the fact that I aimed pretty low — this is not a port of Firefox, but it's the same Gtk-based version of Firefox, with 'if running in KDE, call this small helper app' code inserted in desktop-specific places doing most of the job. Even with this approach I think Firefox now integrates into KDE reasonably well."
Page editor: Rebecca Sobol
One interesting feature of Mac OS X is the concept of a Universal Binary, a single binary file that runs natively on both PowerPC and Intel platforms. Professional game porter Ryan Gordon got sick of Mac developers pointing out that Linux doesn't have anything like that, so he did something about it and wrote FatELF. FatELF brings the idea of single binaries supporting multiple architectures to Linux.
Apple introduced the Universal Binary file format in 2005 to ease the transition of the Mac platform from the PowerPC architecture to the Intel architecture. The solution was to include both PowerPC and x86 versions of an application in one "fat binary". If a universal binary is run by Mac OS X, the operating system executes the appropriate section depending on the architecture in use. The big advantage was that Mac developers could distribute one executable of their software, so that end-users wouldn't have to worry about which version to download. Later, Apple went even further and allowed four-architecture binaries: 32 and 64 bit for both Intel and PowerPC.
This was not the first time Apple performed such a trick: in 1994 the company transitioned from Motorola 68k processors to PowerPC and introduced a "fat binary" which included executable code for both platforms. Moreover, NeXTSTEP, the predecessor of Mac OS X, had a fat binary file format (called "Multi-Architecture Binaries") which supported Motorola 68k, Intel x86, Sun SPARC, and HP PA-RISC. So Apple knew what needed to be done when they chose Intel as their new Mac platform. In fact, the Universal Binary format in Mac OS X is essentially the same as NeXTSTEP's Multi-Architecture Binaries. This was possible because Apple uses NeXTSTEP's Mach-O as the native object file format in Mac OS X.
Ryan Gordon is a well-known game porter: he has created ports of commercial games and other software to Linux and Mac OS X. Notable examples of his work are the Linux ports of the Unreal Tournament series, some of the Serious Sam series, the Postal series, Devastation, and Prey, but also non-gaming software such as Google Earth and Second Life. With this experience, he knows a lot about both Mac OS X and Linux, so Ryan is well suited to bring Mac OS X's universal binary functionality to Linux.
His FatELF file format embeds multiple Linux binaries for different architectures in a single file. FatELF is actually a simple container format: it adds some accounting information at the start of the file and then appends all the ELF (Executable and Linking Format) binaries after it, adding padding for alignment. FatELF can be used for both executable files and shared libraries (.so files).
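The container idea can be sketched in a few lines of Python. To be clear, the field names, magic number, and alignment below are illustrative placeholders of my own, not the header layout from Ryan's actual FatELF specification; the point is only to show the "header, descriptors, padded chunks" structure described above.

```python
import struct

# Placeholder constants -- NOT the real FatELF spec values.
FATELF_MAGIC = 0xFA700E1F   # assumed magic number for illustration
ALIGN = 4096                # assumed page-size alignment for each chunk
HDR = "<IHH"                # magic, format version, record count
REC = "<HQQ"                # machine id, file offset, chunk size

def glue(records):
    """Pack several ELF images (machine_id, elf_bytes) into one container:
    a small header, one descriptor per embedded binary, then the padded
    binaries themselves -- roughly what a fatelf-glue-style tool does."""
    header = struct.pack(HDR, FATELF_MAGIC, 1, len(records))
    offset = len(header) + len(records) * struct.calcsize(REC)
    descs, body = b"", b""
    for machine, elf in records:
        start = -(-offset // ALIGN) * ALIGN      # round up to alignment
        body += b"\0" * (start - offset) + elf   # pad, then append chunk
        descs += struct.pack(REC, machine, start, len(elf))
        offset = start + len(elf)
    return header + descs + body

def select(blob, machine):
    """What a FatELF-aware loader would do: scan the descriptors and
    return only the chunk matching the running machine."""
    magic, _version, count = struct.unpack_from(HDR, blob)
    if magic != FATELF_MAGIC:
        return None                              # plain ELF or junk
    pos = struct.calcsize(HDR)
    for _ in range(count):
        m, off, size = struct.unpack_from(REC, blob, pos)
        if m == machine:
            return blob[off:off + size]
        pos += struct.calcsize(REC)
    return None
```

Note that select() never touches the chunks for other architectures, which is the same property that keeps a real FatELF loader's RAM and disk-bandwidth overhead low.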
An obvious downside of FatELF is that an executable's size is multiplied by the number of embedded ELF architectures. However, this only holds for executable files and libraries; common non-executable resources such as images and data files are shipped as-is, outside of FatELF. For example, a game that ships with hundreds of megabytes of data will grow only slightly in relative terms.
Moreover, a FatELF binary doesn't require more RAM to run than a regular ELF binary, because the operating system decides which chunk of the file is needed to run on the current system and ignores the ELF objects of the other architectures. This also means that the entire FatELF file does not have to be read (except for kernel modules), so the disk bandwidth overhead is minimal.
On the project's website, Ryan lists a lot of reasons why someone would use FatELF. Some of them are rather far-fetched, such as:
Another benefit in the same vein is that third party packages no longer have to publish multiple packages for different architectures. An obvious critique is that this multiplies the needed disk space and bandwidth if FatELF is used systematically.
However, there is something to be said for FatELF as a means to abstract away architecture differences for end users. For example, install scripts for proprietary Linux software, such as the scripts for the graphics drivers from AMD and Nvidia that select which driver to install based on the detected architecture, could be implemented as FatELF binaries. This seems like a cleaner solution than each software vendor implementing its own scripts and flaky logic to detect the right version. Web browser plugins are another type of binary that could be an interesting match for FatELF. In support of this idea, Ryan admits that he has made flaky shell-script errors himself in the past:
Another use for FatELF is what Apple used its universal binary for: a transition to a new architecture. The 32-bit to 64-bit transition comes to mind, where FatELF makes it possible to no longer need separate /lib, /lib32 and /lib64 trees. It also makes it possible to get rid of IA-32 compatibility libraries: if you want to run a couple of 32-bit applications on a 64-bit system, you only need FatELF versions of the handful of packages needed by them. But more exotic transitions are also possible, for example when the ELF OSABI (Operating System Application Binary Interface) used by the system changes, or for CPUs that can handle different byte orders.
At the moment, Ryan has written a file format specification and documentation for FatELF. To make the fat binary concept possible on Linux, he created patches for the Linux kernel to support FatELF, and he also adapted the file command to recognize FatELF files, the binutils commands to allow GCC to link against a FatELF shared library, and gdb to be able to debug FatELF binaries. The patches are stored in a Mercurial repository "until they have been merged into the upstream project". The repository also hosts some tools to manipulate FatELF binaries, which are zlib-licensed.
One of the FatELF tools is fatelf-extract, which lets the user extract a specific ELF binary from a FatELF file, e.g. the x86_64 one. The fatelf-split command extracts all embedded ELF binaries, ending up with files like my_fatelf_binary-i386 and my_fatelf_binary-x86_64. The fatelf-info command reports interesting information about a FatELF file. A tool for developers is fatelf-glue, which glues ELF binaries together, because GCC currently can't build FatELF binaries: you build each ELF binary separately and then create a FatELF file from them.
As a proof-of-concept, Ryan created a VMware virtual machine image of Ubuntu 9.04 where almost every binary and library is a FatELF file with x86 and x86_64 support. The image can be downloaded and run in VMware Workstation or VMware Player to try the FatELF functionality. But this is not the regular use case. When FatELF is used, it's probably only for a handful of applications. FatELF files also coexist fine with ELF binaries: a FatELF binary can load ELF shared libraries and vice versa.
Ryan recalls the real point of inspiration for FatELF: a thread on the mailing list of the installer program MojoSetup. On May 20, 2007, he wrote on that list:
Two years later, Ryan has implemented this idea:
So after a few weeks of work in his spare time, Ryan got a working fat binary implementation for Linux. In contrast, building the virtual machine proof-of-concept literally took days, because it took a lot of work to automate. Ryan also spent a lot of time preparing to post the kernel patches:
Later in the discussion, Jeremy adds that a generic approach would allow the last executable in the file to be a shell script. If no other format was supported, this shell script would then be executed, doing something like displaying a useful message. Ryan seems unsure that the added flexibility is worth the extra complications, although he admitted that he would have chosen this route if other executable formats like a.out files "were still in widespread use and actively competed with ELF for mindshare." He also thinks it should be possible to support other executable formats in the existing FatELF format.
Some reactions to the patch that allows kernel modules to be FatELF binaries are less positive. For example, Jeremy objected to this because it would only encourage more binary modules. Ryan understands his concern, but answered: "I worry about refusing to take steps that would aid free software developers in case it might help the closed-source people, too." However, Jeremy didn't see it that way, casting doubt on the use case of FatELF kernel modules:
I don't see much upside in making it "easier" to distribute binary-only open source drivers separately. (It wouldn't help that much, in the end; the modules would still be compiled for some finite set of kernels, and if the user wants to use something else they're still stuck.)
Moreover, even for proprietary kernel modules the use case is not that compelling. Companies like Nvidia have to distribute modules for multiple kernel versions. If the OSABI version doesn't change, they can't use FatELF to pack together multiple drivers for this purpose. So, all in all, FatELF support for kernel modules seems a bit dubious.
In another discussion, Rayson Ho found that Apple (NeXT, actually) has patented the technology behind universal binaries, as a "method and apparatus for architecture independent executable files" (#5432937 and #5604905). Rayson thinks the mix of 32-bit and 64-bit object files in a single archive on AIX may be considered prior art. David Miller adds another possibility: TILO, a variant of the SPARC SILO boot loader, which packs a 32-bit and a 64-bit Linux kernel into one file and figures out which one to actually boot depending on the machine it is running on. Rayson doubts this counts, though, because that project was started in 1995 or 1996, while NeXT's patent filing is from 1993. Ryan also entered the discussion and clarified that FatELF has a few fields that Apple's format doesn't, so the flow chart in the patent isn't the same. However, it's not clear yet whether Ryan should be concerned and, if so, which changes he should make to work around the patents.
There are still a lot of things to do. Patches for module-init-tools, glibc (for loading shared FatELF libraries), and elfutils still have to be written. And the patches for binutils and gdb still have to be submitted, Ryan said:
Ryan even thinks about embedding binaries from other UNIXes into a FatELF file. He mentions FreeBSD, OpenBSD, NetBSD and OpenSolaris. In principle, each operating system using ELF files for its binaries could be supported. In addition to the ones mentioned, this also includes DragonFly BSD, IRIX, HP-UX, Haiku, and Syllable. The implementations should not be difficult, according to Ryan:
The support for other operating systems will make it possible to ship one file that works across Linux and FreeBSD, for example, without a platform compatibility layer. This could also be an interesting feature for hybrid Debian GNU/Linux and Debian GNU/kFreeBSD binaries.
The biggest hurdle that FatELF faces now is adoption, Ryan explains:
Another disadvantage is the difficulty of creating fat binaries in build systems; Erik de Castro Lopo writes about this on his blog, for example. According to Ryan, making build systems handle this situation cleanly still needs some work. He expects the most popular way to build FatELF files will be to do two totally independent builds and glue them together, rather than rethinking autoconf and friends.
While a universal binary seems much less interesting for Linux than for Mac OS X, because most Linux software is installed through a package manager that knows the architecture, the concept is interesting for proprietary Linux software such as games. For a non-expert user, it is not evident whether their processor is 32-bit or 64-bit; a FatELF download embedding both the x86 and x86_64 binaries may be a good solution to that problem. And if ARM-based smartbooks become more popular, an x86/x86_64/ARM FatELF binary may be the perfect way to distribute a binary that works on 32-bit Intel Atom netbooks, 64-bit Intel computers, and ARM smartbooks.
Database Software
MySQL Community Server 5.1.40, a new version of the popular open source database management system, has been released; "MySQL 5.1.40 is recommended for use on production systems."
PostgreSQL 8.5alpha2 has been announced: "The second alpha release for PostgreSQL version 8.5, 8.5alpha2, is now available. This alpha contains several new major features added since the previous alpha. Please download, install, and test it to give us early feedback on the features being developed for the next version of PostgreSQL."
Web Site Development
A new release of the lighttpd web server has been announced: "Update: There is a small regression in mod_magnet, see #1307. We finally added TLS SNI, and many other small improvements. We also fixed pipelining (that should fix problem with lighty as debian mirror) and some mod_fastcgi bugs this should result in improved handling of overloaded and crashed backends (you know which one :D)."
Luban has been announced: "The luban package is a python-based, cross-platform user interface builder. It provides UI developers a generic language to describe a user interface, and the description can be rendered as web or native interfaces. Gongshuzi, an application built by using luban, can help users visually develop UIs and run the UIs as web or native applications."
A new version of the nginx web server has been announced. See the CHANGES document for more information.
Miscellaneous
Symbian has announced the release of the platform microkernel (EKA2) and supporting development kit under the Eclipse Public License (EPL). "To enable the community to fully utilise the open source kernel, Symbian is providing a complete development kit, free of charge, including ARM's high performance RVCT compiler toolchain. The provision of the kit demonstrates Symbian's commitment to lowering access barriers to encourage the wider development community - such as research institutions, enthusiast groups and individual developers - to get creative with the code."
Business Applications
A new release of Tryton has been announced: "Tryton is a three-tiers high-level general purpose application platform under the license GPL-3 written in Python and using PostgreSQL as database engine. It is the core base of a complete business solution providing modularity, scalability and security. This new series comes up with new modules, security and performance improvements as well as the SQLite support and welcomes the arrival of Neso, the standalone version of Tryton."
Desktop Environments
GNOME 2.28.1 has been released: "This is the first update to GNOME 2.28. It contains the usual mixture of bug fixes, translations updates and documentation improvements that are the hallmark of stable GNOME releases, thanks to our wonderful team of GNOME contributors! The next stable version of GNOME will be GNOME 2.28.2, which is due on December 16. Meanwhile, the GNOME community is actively working on the development branch of GNOME that will lead to the next major release in March 2010."
Minutes of a GNOME Foundation board meeting are also available. "Attendance * Diego Escalante * Germán Póo-Caamaño * Lucas Rocha * Srinivasa Ragavan * Stormy Peters".
Financial Applications
Version 2.8.26 of SQL-Ledger, a web-based double-entry accounting/ERP system, has been announced. Changes include: "1. Version 2.8.26 2. fixed AR aging duplicates in report 3. DST duedate and terms calculation".
Games
A report from the Gluon sprint recently held in Munich is available. "Gluon was conceived when the project's creator, Sacha Schutz, looked around the internet and saw how popular casual games based on Flash were. He saw the need for something which would make it possible to create similar games in a simple manner using technologies unrestricted by the closed world of proprietary software."
Graphics
The fourth prerelease of the Inkscape vector graphics editor has been announced: "Hopefully pre4 is the final prerelease. Please download the files and let us know if you stumble upon any serious bugs except the infamous crash when undoing changes in live path effects. We probably won't release the final version within next couple of weeks, because we really need the LPE bug fixed."
Interoperability
A new release has been announced. Changes include: "- Many crypto fixes, particularly on 64-bit. - Improved DVD access on Mac OS. - Several common controls improvements. - Various HTML support improvements. - More DIB optimizations. - Various bug fixes."
Medical Applications
The openSUSE_Medical subproject has been announced: "I'm pleased to announce a new Subproject from openSUSE: openSUSE_Medical. This new Project tries to package more Software for doctors's practice or clinical needs. With our work we try to bridge a gap in the market."
Music Applications
Guitarix has been announced: "guitarix is a simple Linux Rock Guitar amplifier and is designed to achieve nice thrash/metal/rock/blues guitar sounds. Guitarix uses the Jack Audio Connection Kit as its audio backend and brings in one input and two output ports to the jack graph."
The probability sequencing language 1.02 has been released: "psl is a text based piano roll language that is inspired by the probability in jeskola buzz, but with more control than you can get in a midi based envir[on]ment. every note has a percentage chance of hitting or it is marked with an x. The frequency on the roll is entirely up to the user. support for decimals has been added to 1.02."
Miscellaneous
XYZCommander 0.0.2 has been released: "I'm pleased to announce the XYZCommander version 0.0.2! XYZCommander is a pure console visual file manager."
Languages and Tools
C
The GNU Compiler Collection (GCC) now has support for the Renesas RX processor. "Support has been added for the Renesas RX processor (RX) target by Red Hat, Inc."
LLVM 2.6 has been released, featuring the Clang compiler, DragonEgg (using LLVM for GCC code generation), and more. "A major highlight of the LLVM 2.6 release is the first public release of the Clang compiler, which is now considered to be production quality for C and Objective-C code on X86 targets. Clang produces much better error and warning messages than GCC and can compile Objective-C code 3x faster than GCC 4.2, among other major features."
Python
Python 2.6.4 has been released: "This is the latest production-ready version in the Python 2.6 series. We had a little trouble with the Python 2.6.3 release; a number of unfortunate regressions were introduced. I take responsibility for rushing it out, but the good news is that Python 2.6.4 fixes the known regressions in 2.6.3. We've had a lengthy release candidate cycle this time, and are confident that 2.6.4 is a solid release. We highly recommend you upgrade to Python 2.6.4."
Guido van Rossum has proposed a moratorium on changes to the Python language: "The reason is that frequent changes to the language cause pain for implementors of alternate implementations (Jython, IronPython, PyPy, and others probably already in the wings) at little or no benefit to the average user (who won't see the changes for years to come and might not be in a position to upgrade to the latest version for years after)." Besides, he would really like to see the community working on building acceptance for Python 3.
A new release of ffnet has been announced: "This release contains minor enhancements and compatibility improvements: - ffnet works now with >=networkx-0.99; - neural network can be called now with 2D array of inputs, it also returns numpy array instead of python list; - readdata function is now alias to numpy.loadtxt; - docstrings are improved."
Also announced: "mds-utils is a library intended to become a collection of several C++ utilities. It makes heavy usage of the Boost C++ libraries."
A new version of the python-daemon library is out: "Since version 1.5 the following significant improvements have been made: * The documented option 'prevent_core', which defaults to True allowing control over whether core dumps are prevented in the daemon process, is now implemented (it is specified in PEP 3143 but was accidentally omitted until now). * A document answering Frequently Asked Questions is now added."
Version Control
A new release of StGit is out: "StGit is a Python application providing functionality similar to Quilt (i.e. pushing/popping patches to/from a stack) on top of Git. These operations are performed using Git commands, and the patches are stored as Git commit objects, allowing easy merging of the StGit patches into other repositories using standard Git functionality."
Page editor: Forrest Cook
Non-Commercial announcements
The Electronic Frontier Foundation has launched its Takedown Hall of Shame site. "Websites like YouTube have ushered in a new era of creativity and free speech on the Internet, but not everyone is celebrating. Some of the web's most interesting content has been yanked from popular websites with bogus copyright claims or other spurious legal threats. So today the Electronic Frontier Foundation (EFF) is launching its "Takedown Hall of Shame" to call attention to particularly bogus takedowns -- and showcase the amazing online videos and other creative works that someone doesn't want you to see."
The Free Software Foundation Europe has put out a press release covering its thoughts on how to resolve the Oracle-Sun merger issues regarding MySQL. The European Commission is currently looking at the merger, and the disposition of MySQL is seen as one of the biggest stumbling blocks to its approval. The press release refers to FSFE President Karsten Gerloff's lengthy blog posting, which lays out the case for making an independent organization for MySQL: "The dual-licensing approach, and the reliance on proprietary licenses as a source of revenue, has severely hampered the growth of what could have turned by now into a much bigger ecosystem. The strategy has led to a huge gap between the original developer (MySQL as a company) and second-tier firms providing support and development services. It also forced developers who wanted to contribute to MySQL to sign unequal copyright agreements. Some did, some didn't. As a consequence, MySQL's development community is not as strong as it could be."
Also announced: l2ork, the Digital Interactive Sound and Intermedia Studio (DISIS) Linux Laptop Orchestra project. "I wanted to share with you my latest Linux-based and Linuxaudio.org-related project that has been sucking up most of my time over the past year or so to the point it seemed as if I have disappeared off the face of the Earth."
The United States PostgreSQL Association has posted a call for nominations: "We are now accepting nominations for the United States PostgreSQL Association (PgUS) board for the Fall 2009 elections; please submit nominations."

An article reports that the White House has changed its web content management system from Microsoft IIS 6.0 to Drupal. "The White House launched a new version of its website on Saturday. While little has changed on the surface, the underlying technology is now powered by the open source Drupal content management system."
Commercial announcements

EnterpriseDB has announced a partnership with Red Hat. "EnterpriseDB, the enterprise Postgres company today announced that Red Hat, the world's leading provider of open source solutions, has made a financial investment in EnterpriseDB as part of a partnership aimed at increasing enterprise adoption of open source IT infrastructure. "EnterpriseDB has clearly established itself as a leading enterprise Postgres company, which is why Red Hat has chosen to partner with and invest in the company. EnterpriseDB is also working to create customer value through a subscription support model. Clearly, this is a model we see as beneficial," said Jim Whitehurst, CEO of Red Hat."

MontaVista has announced new Market Specific Distributions for MontaVista Linux 6. "MontaVista® Software, Inc., the leader in embedded Linux® commercialization, today announced more new Market Specific Distributions (MSDs) for MontaVista Linux 6. The new MSDs continue to expand the market specific focus of MVL6, delivering support for industrial automation, automotive, Android, portable multimedia devices, and multicore networking applications. All the new MSDs will be available this quarter and support processors from Cavium, Freescale, Intel, and Texas Instruments."

Qualcomm has announced the launch of a new subsidiary with a focus on open-source mobile development. "Qualcomm Incorporated, a leading developer and innovator of advanced wireless technologies, products and services, today announced that it has established a separate wholly-owned subsidiary, Qualcomm Innovation Center, Inc. (QuIC), focused on mobile open source platforms. QuIC has brought together a dedicated group of engineers to optimize open source software with Qualcomm technology. The QuIC board of directors has named Rob Chandhok, senior vice president of software strategy for Qualcomm CDMA Technologies, as president of QuIC." (Thanks to Lasse Bigum).

An article covers Raytheon's use of Linux in its insider threat management products.
"US armstech mammoth Raytheon has announced that its "government insider threat management solution" for information security will be powered by Linux. Penguin-inside crypto modules to be used in Raytheon's mole-buster tech have now passed tough federal security validation, apparently. The insider-threat detector gear in question is Raytheon's SureView, designed to root out the whole spectrum of security no-nos from "accidental data leaks" through "well-intentioned but inappropriate policy violations" to "deliberate theft of data"." announced that it will release the source for its "Frontier Election System" offering in November. "Fully disclosed source code is the path to true transparency and confidence in the voting process for all involved. Sequoia is proud to be the leader in providing the first publicly disclosed source code for a complete end-to-end election system from a leading supplier of voting systems and software." This release is carefully not described as "open source," and, in any case, source availability is not a full solution to the problem. But it still looks like a step in the right direction.
Articles of interest

Moblin Zone has posted a lengthy justification for Intel's GMA500 (aka "Poulsbo") graphics hardware. The post is in response to a Linux Journal article that lambasted Intel for "kicking its friends in the face" by using hardware that requires closed drivers. Essentially, Moblin Zone argues that Intel was targeting the device, not computer, market with "Menlow" (which includes the Poulsbo hardware). "Not only is there no significant penalty for closed drivers in the device world, sometimes, they work out better. There's a business advantage, in terms of vendor lock-in. If I'm a chip maker, my customer has to come back to me for a new driver or source-level license (with non-disclosure agreement) when they begin working on a new product model, or a firmware upgrade. In the thin-margin world of device parts, that kind of ongoing revenue stream might make the difference between getting by or having to lay off engineers."

An article covers Tilera's latest releases in its TILE-Gx multi-core processor family. "Tilera on Monday introduced a series of general purpose processors ranging from 16 to 100 cores for use in servers. The processors would replace multiple processors and lower system costs. While it is too soon to tell whether Tilera's TILE-Gx family will one day challenge Xeon and Opteron server chips from Intel and Advanced Micro Devices, respectively, the announcement points to the ongoing industry trend of adding cores to boost performance."
Resources

An article on Linux Journal discusses open-source software and cloud computing. "Cloud computing: you may have heard of it. It seems to be everywhere these days, and if you believe the hype, there's a near-unanimous consensus that it's the future. Actually, a few of us have our doubts, but leaving that aside, I think it's important to ask where does open source stand if the cloud computing vision *does* come to fruition? Would that be a good or bad thing for free software?"

A blog posting investigates a new clarifying statement [PDF] for an old Department of Defense policy on the use of open-source software. "This 2009 memo is important for anyone who works with the DoD (including contractors) on software and systems that include software... and I suspect it will influence many other organizations as well. Let me explain why this new memo exists, and what it says. Back in 2003 the DoD released a formal memo titled Open Source Software (OSS) in the Department of Defense. This older memo was supposed to make it clear that it was fine to use and develop OSS in the DoD. Unfortunately, as the new 2009 memo states, "there have been misconceptions and misinterpretations of the existing laws, policies and regulations that deal with software and apply to OSS that have hampered effective DoD use and development of OSS"."

A review looks at three educational programs, all featuring Tux the penguin. Programs to teach typing and practice math skills are two of those covered, in addition to TuxPaint, which didn't, at first, strike the reviewer as particularly educational: "So how is this educational? At the lower ages, this might simply be a first introduction to using the mouse. In this case, the parent or educator would help the student select colors and draw lines and shapes. Older, pre-readers, could use this program to tell a story in storyboard fashion. Still older children could use this program to create their own comic strips complete with text.
Of course, you could also use Tux Paint to teach students art concepts like color, line, and texture. It doesn't matter how you use it though. Tux Paint is a lot of fun."
Education and Certification

LPI has announced its participation in Software Freedom Day events: "Software Freedom Day (SFD) is an annual worldwide celebration of Free/Open Source software. LPI's affiliate for the region, LPI-Maghreb (Tunisia, Algeria, Morocco and Libya) has participated in the event for the last four years."

A course for scientists who program has been announced: "Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists actually use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques with theoretical lectures and practical exercises tailored to the needs of a programming scientist."
Event Reports

An article covers comments by Google's VP of product management, Sundar Pichai, at the Web 2.0 Summit. "To the suggestion that Chrome OS -- the operating system that Google is developing around its Chrome browser -- is on a collision course with Windows, Pichai responded that the world is entering a period of tremendous innovation in personal computing. "Browsers are suddenly hot again and I think operating systems are too," he said, referring both to Chrome OS and Android, Google's operating system for mobile devices. "There haven't been other choices for a long time," he said. "Most operating systems today were designed before the Web existed." "The goal with both our efforts is to get great free open source software stacks out there," he said. In the case of Chrome OS, everything is built around the browser."
Calls for Presentations

The Call for Projects has gone out for CeBIT Open Source 2010. "The largest IT trade show on earth will take place from March 2 through 6 in Hannover, Germany. The Deutsche Messe organization that runs the trade show initiated Open Source as a theme focus for the first time in 2009, and the surge of visitors into a constantly packed hall exceeded all expectations. It's clear that Open Source will play a major role again at CeBIT in 2010. As an incentive, the theme will get a prominent new location in Hall 2, where exhibitors, the Open Source Forum and the Open Source Project Lounge will find a new home."

A call for participation has gone out for RailsConf 2010: "The Call for Participation has opened for RailsConf 2010, when the Ruby on Rails community will gather June 7-10, 2010, at the Baltimore Convention Center in Baltimore, MD. RailsConf, co-produced by Ruby Central, Inc. and O'Reilly Media, Inc., is the largest official conference dedicated to everything Rails. Program chair Chad Fowler invites proposals for conference sessions, workshops, and panels from Rubyists, hackers, web developers, system administrators, and anyone else with a passion for Rails."
Upcoming Events

A Linux video hackfest has been announced. "When it comes to video playback on Linux, the premiere choice for video acceleration is currently using VDPAU with its CPU-efficient, GPU-accelerated capabilities that even has no problems playing 1080p video files with extremely low-end hardware. However, VDPAU is not yet widespread in all Linux video drivers, and other free software developers have been working on improving other areas of the Linux video stack too. One of these developers is GNOME's Benjamin Otte who has been working on using Cairo/Pixman for raw video in GStreamer. Additionally, he has organized a Linux video "hackfest" that will take place next month in Barcelona, Spain to further this Linux video playback work." (Thanks to James).

The JPUG 10th Anniversary Conference has been announced: "The PostgreSQL Conference 2009 Tokyo Executive Committee are proud to announce that the two days programme sessions, JPUG 10th Anniversary Conference, are going to be held on 20th and 21st November, 2009, at AM Hamamatsucho, Tokyo."
Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds