
LWN.net Weekly Edition for May 28, 2009

New rules for software contracts

By Jonathan Corbet
May 22, 2009
On May 18, the Linux Foundation announced that it had sent a joint letter to the American Law Institute protesting some provisions in the ALI's proposed principles to be applied to the law of software contracts. That was likely the first that many LWN readers had heard of this particular initiative - or, indeed, of the ALI in general. Your editor, being a masochistic sort of person, has plowed through all 305 pages of the principles (which were made official by the ALI on May 20) with an eye toward their effect on free software. What follows is a non-lawyerly summary of what he found.

In the U.S. justice system, the written law which emerges from the legislative branch is just a starting point. The actual law that one must follow is determined by the rulings of a wide variety of courts across the country. At times, divergent rulings can result in different interpretations of the same law in different regions. The process of creating a coherent set of high-court precedents can be long and messy, and it can create a great deal of uncertainty while it is happening. Uncertainty about the law is never a good thing.

The ALI sees its purpose as improving that process through the creation of documents stating its view of how laws should be interpreted. The process is long and detailed; in some ways, it resembles how technical standards are created. But, certainly, the ALI believes it carries a strong influence in how the courts work:

The final product, the work of highly competent group scholarship, thus reflects the searching review and criticism of learned and experienced members of the bench and bar. Many Institute publications have been accorded an authority greater than that imparted to any legal treatise, an authority more nearly comparable to that accorded to judicial decisions.

So if this group sets itself to the task of directing the interpretation of the law around software, it's probably worth paying attention to what they are saying. Unfortunately, the draft principles must be purchased ($40 for the PDF version) and cannot be redistributed, so your editor will be restricted to an exercise of his fair-use rights in talking about this text.

The ALI has set itself the task of clarifying the law around software, and software contracts in particular. So the principles apply to "agreements for the transfer of software for a consideration. Software agreements include agreements to sell, lease, license, access, or otherwise transfer or share software." One might think that this restriction would move much free software activity out of the scope of the principles, but the ALI makes it clear that the source-release requirements found in licenses like the GPL are "consideration." While not taking a firm stance, the principles also suggest that a court would treat the GPL as a contract. The result is a clear statement that GPL-licensed software falls within the scope of this document. Whether software distributed under more permissive licenses would be covered is not addressed.

Much text is expended toward the question of when a contract is formed. In particular, the ALI takes a clear stance that shrinkwrap (or "clickwrap") agreements do, indeed, form a contract which can be enforced. It goes beyond that, though, in that even the "I agree" button click may not be necessary. The reasoning is interesting:

For several reasons, § 2.02(b) does not establish a bright-line rule for enforcement requiring, for example, clicking "I agree" at the end of an electronic standard form. First, as already mentioned, case law already presents a wide variety of formation types that are not easily captured by a narrow rule and, for the most part, handle the issues in an effective manner. These include situations in which the transferee is aware of the terms because of a course of dealing or because the transferor delivered an update of previously downloaded software. The safeguard of requiring a click at the end of the form does not seem necessary in either case. Second, open-source transfers rarely follow the current click-wrap model, and these Principles should not upset an established custom unless problematic.

So the ALI says that it's pretty easy to be bound under a software contract. Doing something which requires the permissions in the GPL (redistributing software, for example) is enough. This is important: if the GPL is interpreted as a contract by some court, it will be relatively easy to demonstrate that somebody who is (say) distributing software in violation of the GPL's terms did, in fact, agree to be bound by that contract.

Warranties are treated in detail; this is the section that the Linux Foundation and Microsoft dislike. There are several types of warranties which are discussed:

  • Warranties against infringement of somebody else's patents, trademarks, or copyrights, and associated indemnification.

  • Guarantees of the quality of the software.

  • Warranties of merchantability. Essentially, this says that the software is fit to be sold; it's properly packaged, does vaguely what it says, etc.

  • Fitness for purpose: will the software actually perform the function for which it is sold?

  • Implied quality warranties: the software has no hidden defects.

Anybody who has actually read a software license agreement knows that software vendors routinely disclaim all of those warranties. And, in fact, the ALI principles allow them to do that - except for the last one. The text reads:

A transferor that receives money or a right to payment of a monetary obligation in exchange for the software warrants to any party in the normal chain of distribution that the software contains no material hidden defects of which the transferor was aware at the time of the transfer. This warranty may not be excluded.

So, if you are distributing free software under free-beer terms, you need not provide a warranty against "material hidden defects." The language is clear here: "consideration" is not enough to force the warranty; there must be actual money involved. But if you are, say, an enterprise Linux distributor, there could be a real problem here. Distributors and others shipping or supporting Linux in exchange for real money will, under these principles, be forced to provide a warranty against hidden defects in that software.

One might think that this is not a huge problem. Linux distributions do not, as a rule, have hidden defects. The list of defects that the distributor knows about is, almost by definition, found in the distributor's bug tracking system, and that information is usually widely available. The problem is that simply maintaining a publicly-available bug tracker is not, in the ALI's view, good enough:

Disclosure of a material hidden defect occurs when a reasonable transferee would understand the existence and basic nature of a defect. Disclosure ordinarily should involve a direct communication to the transferee, if feasible. A mere posting of defects on the transferor's website generally should be insufficient.

What a distributor will have to do is not exactly clear. Printing out the entire bug tracker seems like an unsatisfactory solution. Perhaps getting the customer to sign a piece of paper acknowledging awareness of the bug tracker would be sufficient. There is an unpleasant vagueness here, right in the portion of the principles which has proved to be most controversial.

The remainder of the document is concerned with breach of contract and remedies. One term states that software providers must accept a recipient's cure of a specific breach. In the free software realm, that means that one cannot terminate somebody's right to use GPL-licensed software if that somebody acts in good faith to fix a failure to distribute source. In general, nobody wants to do that anyway, so this term really just goes along with existing practice.

The remedies section is mostly straightforward contract-enforcement stuff. There is one interesting paragraph in the discussion:

For example, software "hobbyists" (who do not "deal in software of the kind transferred" or "hold [themselves] out by occupation as having knowledge or skill peculiar to the software") may provide open-source software without charge under terms that disclaim all responsibility for performance and provenance. Under the circumstances, no remedy at all may be the appropriate result when the software does not perform or infringes a third party's intellectual property right. In other words, the transfer of open-source software in this context may be a case in which, essentially, there can be no breach by the transferor that supports the grant of a remedy to the transferee.

One might interpret this as saying that, say, patent holders cannot go after free software developers who distribute code held to be infringing. That would all depend on the interpretation of "hobbyist," though. Developers working in a corporate setting would certainly not qualify.

The principles also take on the topic of remote disablement of software - an interesting issue, even if it does not really apply to free software. In summary, remote disablement is allowed, but only in tightly constrained circumstances. It cannot be used with mass-market software, and, in any other situation, it requires a court order first. So, while remote disablement is allowed in principle, it is made difficult in practice.

The meeting of the ALI which approved these principles debated the topic for all of 30 minutes. Clearly the participants do not see much here which strikes them as controversial. For the most part, they may be right; this exercise would seem to make sense. If the courts adhere to these principles, the result should be increased clarity and predictability around software contract law. Beyond that, the proposed principles are generally friendly to free software, acknowledging that it operates under different circumstances and that our licenses are valid. These are good things.

The one real sticking point is the issue pointed out by Microsoft and the Linux Foundation: mandatory warranties. Even that could turn out to not be a huge problem in practice for free software; it relates to hidden defects, and we do not hide our defects. Proprietary vendors - who do tend to hide defects - will have a harder time with this provision. As long as some sort of reasonable interpretation of "disclosure" is reached, Linux distributors should be in reasonable shape.

So, while it would have been nice to have a wider, more public debate about this document, the end result does not appear (to your most certainly not-a-lawyer editor) to be all that bad. We can probably live with those terms.

Comments (38 posted)

Activities and the move to context-oriented desktops

May 27, 2009

This article was contributed by Bruce Byfield

The next buzzword on the desktop is likely to be "Activities." Today, the chances are high that you have never heard of them, or, if you have, that you have not identified the different uses of the term as having anything in common. However, what all the usages do have in common is that they signal a move away from a static desktop towards one that changes with the tasks being performed.

Any more exact definition is elusive. All the same, Activities are already part of the KDE 4 series, and scheduled to become more prominent in the upcoming 4.3 release. Similarly, GNOME 3.0, which is due out next year, will include its own, more limited concept of the term. But, under any implementation, the term signals a shift in the desktop, with free software developers leading the way.

The concept of Activities originates in Sugar, the desktop designed for the One Laptop Per Child (OLPC) project. In Sugar, "Activities" is used as a synonym for "application." However, Gary C. Martin, one of the coordinators for Sugar's Activity Team, explains that the change is more than semantics or marketing. Because Activities run within the general collaborative frame of Sugar, using them is intended as a very different experience than running a standalone application on a traditional desktop.

For me, the key parts of Activities are that they combine concepts of document, executable, and collaboration state into a single, simple to use user interface. With the Activity state automatically kept in the Journal, it's easy to resume or reflect on past work, and, with realtime collaboration as a first class feature, peer sharing and group work is strongly encouraged.

In other words, Sugar's Activities are not just about running an application, or learning how to produce a spreadsheet or a presentation. Instead, they are conceived as part of the total learning experience that Sugar is designed to provide.

"It's not about producing documents in applications", Martin explained. "It's the learning that happens while doing Activities. Activities are at the heart of learning in Sugar. They support a class working together, seeing what others are doing, sharing, learning, never losing their work, and [being] able to reflect on that work with their parents and teachers." Although the mechanics of running an Activity may not be all that different from running an application in GNOME or KDE, what matters is the context in which it is used.

The KDE implementation

Part of the design philosophy of KDE 4 is to accommodate the growing sophistication of users, according to long-time KDE developer Aaron Seigo. Thanks to mobile devices and gaming consoles, many users — particularly younger ones — find a static desktop confining. Nor is a traditional desktop especially suited for multiple, specialized uses that range from office productivity to social networking. Depending on whether people are working, attending classes, or socializing, their ideal desktop could vary considerably.

Even within a single activity, desktop needs can change, Seigo said:

Watching people use their computers, we noticed that a lot of people who work on more than one project at a time were manually arranging their icons between projects. Graphic designers, for instance, would have two or three projects they're working on. When they worked on one project, they would take all the icons and files they're working on, and they'd put those icons on the desktop. Then, when they were done with that project, they'd put those icons back in a folder that wasn't showing on the desktop, and move all the icons for a second project out on to the desktop.

To simplify computing for these more sophisticated users, the KDE concept of Activities was born: desktops with their own custom sets of widgets, icons, and applications, that could be switched by keyboard shortcuts or by zooming out via the Desktop Toolkit, the cashew-shaped icon on the upper right of the desktop.

Originally, these desktops were called "containments" by Chani Armitage, the developer who first implemented them:

But I didn't really like the word 'containment' because it was pretty technical. I'd been using an OLPC for a while at the time, so I actually got the inspiration from there. We've kind of co-opted the term and put it to a slightly different meaning than what they've been using.

In some senses, Activities resemble virtual desktops, which KDE and other desktops have had for years. And Seigo acknowledged that "they're complementary ideas." However, he suggested that the resemblance depends on how you use virtual desktops. If you use virtual desktops mainly to separate out windows — for example, to keep a virtual terminal always ready, or to run a full-screen browser — then there is little to choose between virtual desktops and Activities.

By contrast, if you use separate virtual desktops for different tasks, then Seigo suggested that Activities provide a superior experience, one closer to that of mobile devices and more suited to some of the functionality planned for the KDE 4.4 release (see below). Still, because of user demand, KDE 4.2 allows an Activity to be tied to a particular virtual desktop by changing a configuration setting, and the final version of KDE 4.3 will include a setting for the same task.

Seigo emphasized that the increased visibility of Activities in the 4.3 release is not intended to pressure people to use them. "What's really nice about this concept is that you can completely ignore it," he said. "It's completely unobtrusive." After all, he added, "for certain people, the current metaphors work well," especially those who do not carry their computers about or those who use them for basic productivity.

The main change heralded by Activities, according to Seigo, is that, unlike on the traditional desktop, they do not enforce one particular way of working for every task:

This is no longer about forcing people into a mode of work or behavior. Rather, we're trying to build interfaces that are relevant to the device you're using them on, and also relevant to the user — which means where are you and who are you? That's something that hasn't started to sink in with a lot of people.

People are demanding more flexibility, and, with the current state of hardware, it can be provided today without any undue strain on system resources.

GNOME 3.0's workspaces

Possibly because of KDE's use of the term, the implementation of multiple workspaces (aka virtual desktops) is causing some confusion about GNOME-Shell, which is scheduled to become the basis for GNOME 3.0.

As implemented so far, GNOME-Shell's Activities is an overlay mode for organizing workspaces and arranging groups of windows on them. To a large extent, it resembles the Zoom view in KDE. However, some people are incorrectly referring to the workspaces themselves as Activities, a change in reference that might just stick, and make GNOME 3 more closely resemble KDE 4.

So far, these workspaces seem to function the same as those in recent versions of GNOME and KDE, with no capacity for separate customization. Nor have any plans to extend their functionality been announced. But, with ten months before GNOME 3.0's estimated release, that could change, especially if KDE's Activities become widely-used. At this point, though, even if the reference is different, the use of the term does suggest that GNOME developers are also thinking about contextualized computing. And even if the implementation remains what it is today, the overlay mode remains a de-emphasis of the single, static desktop.

Coming attractions

Exactly how GNOME 3.0 will implement Activities remains uncertain. Meanwhile, KDE developers are already contemplating future developments for contextualized computing. Armitage talked about allowing Activities to be de-activated and stored. Perhaps, too, she mused, applications could become more contextualized; for example, KMail might be set so that it used a particular address book when opened on a certain Activity.

A change already planned for KDE 4.4 is to associate an Activity with a certain location via the new geo-location layer in KDE. University instructors, for example, could have one Activity with their notes and slides that would automatically open when they started KDE in the classroom for a particular class, and another Activity with their research that started in their offices. "You would basically train the computer," Seigo said. "As you move around, the interface comes to you."

Meanwhile, the different uses of the same term can obscure exactly what each project means by it. But, as Seigo said:

The idea that binds them all is a movement towards task-oriented computing. Our viewpoint is that tasks are highly-contextual: What are you doing? Where are you doing it? And who are you?

Whether Activities in any form will come to dominate the desktop is still uncertain. Possibly, they will remain the interest of a relatively small set of users. However, regardless of their ultimate success, the fact that context-based computing is being emphasized more strongly is a shift in thinking about the desktop, and one that free software is leading. Virtual workspaces -- let alone KDE's Activities -- remain non-standard on Windows, while OS X's Spaces are turned off by default. On both, the static desktop remains the norm.

"Nobody else is looking at these things," Seigo said. "You don't see it on Windows and you don't see it on Mac. This is very much an innovation that free software pretty much owns. And I'm really happy to see that in GNOME and KDE we're right at the front of this development."

Comments (16 posted)

Coming soon: OpenSMTPD

By Jonathan Corbet
May 27, 2009
Back in November, the OpenBSD development community first heard about the OpenSMTPD project. OpenSMTPD is an all-new mail transfer agent implementation for OpenBSD; it is getting ready for release sometime soon. It is an interesting exercise in wheel reinvention which may well prove to be a useful project.

OpenSMTPD is developing most of the features that one would expect from an SMTP daemon. It can speak the SMTP protocol, including the SSL-based versions for added security. Virtual domains are supported, as are forward files and external delivery agents like procmail. There are plans to add a sendmail-like "milter" capability for mail-filtering extensions. In summary, it is growing to the point that it can do most of the basic things that the other MTA implementations do.

Given that those implementations represent a great deal of development and debugging time, and that a new mail daemon will surely bring new bugs and even security problems, one might well wonder why the OpenSMTPD developers are doing it. It appears to come down to a combination of licensing issues and a desire for a simpler and more OpenBSD-like tool.

The OpenBSD Journal article which brought OpenSMTPD to the community's attention includes this quote from Gilles Chehade, who started this project:

A few months ago, I had to dive into the configuration of sendmail to make a very small change. It turns out I spent almost an hour trying to make sense out of a maze of files that were plain unreadable. Even the slightest changes would cause me to stand a couple minutes thinking, just trying to make sure I really wanted to make that change.

It is a rare mail system administrator who has not had a moment like this; the lowest levels of sendmail configuration are a thing which must be seen to be believed. The higher-level "language" implemented with a set of M4 macros has helped to keep an entire generation of administrators sane, but it still presents its challenges. The end result is that, even though sendmail seems to be long past its period where new remote root exploits were a weekly experience, it is still a program with roots in the 1980's that many administrators prefer to avoid.

So what about Postfix? It turns out that Gilles likes Postfix reasonably well, but there is a fundamental problem with it: the IBM Public License under which Postfix is distributed includes copyleft-style source availability requirements. Copyleft is not particularly popular in OpenBSD circles, so that license ensures that Postfix will never be a part of the OpenBSD source tree. For Gilles, that meant that he needed to install Postfix separately after each OpenBSD installation; it also means that Postfix does not receive the same level of attention from OpenBSD's code auditors. So it seems that OpenSMTPD is being developed, at least partially, out of a desire to have an MTA under a permissive license which is less intimidating than sendmail.

Needless to say, the licensing issue is enough to exclude GPL-licensed solutions like exim as well.

Beyond licensing, though, it seems that the OpenSMTPD developers want to have an MTA which has more of an OpenBSD-like feel. The configuration file will be simple, with a syntax very similar to that of the "pf" packet filter. Techniques like privilege separation have been designed into the program almost since the beginning. And, of course, it will be a part of the unified OpenBSD source tree; it has been in the OpenBSD CVS repository since November.

Some people within the OpenBSD community have questioned the need for this kind of project, given the number of mail transfer agents already available. Certainly there are projects which are not worth the effort which goes into them, but, that said, it is usually a mistake to criticize the work of people who have decided to scratch a particular itch. Interesting things can come from such developments. From OpenSMTPD we may get an MTA which sheds a lot of legacy requirements (sendmail still has features that come from a time when one had to worry about routing a message via two DECnet hops, over the NSFnet, then into a CSNet node) and which, presumably, will offer a high degree of security.

Once it's stable, it would not be entirely surprising to see a Linux port of OpenSMTPD as well. Whether it will take off in the Linux world remains to be seen. Tools like OpenSSH are nearly universal on Linux systems; OpenCVS is ... less so. But options are usually good, and the OpenSMTPD developers are busily working toward the creation of another option for a crucial system component. It will be interesting to see how it turns out.

Comments (17 posted)

Page editor: Jonathan Corbet

Security

Sanitizing kernel memory

By Jake Edge
May 27, 2009

The contents of memory consist of vast quantities of useless—to an attacker at least—data, along with a small amount that would be of interest. Cryptographic keys, passwords, and the like are probable targets of those with malicious intent. Normally, the kernel guards memory from access by unprivileged processes, but various kernel bugs have sometimes allowed memory contents to leak. A recently proposed patch would eliminate a specific subset of those kinds of leaks by "sanitizing" pages as they are freed.

Larry Highsmith adapted code from the PaX project to add a flag to kernel pages marking them as "sensitive" pages. The pages would then be cleared as they were freed, so that any information leak from those pages would be useless. As part of the justification for the change, Highsmith noted a Stanford University paper entitled "Shredding Your Garbage: Reducing Data Lifetime Through Secure Deallocation" as well as the "cold boot" attacks used to recover memory from powered-down systems.

Highsmith's patch would eliminate cases where freed memory contents leak, either via a kernel bug or some other means, by clearing the page as it is freed, but only for memory marked as sensitive. The four additional patches in his original series then applied the sensitive flag to various kernel subsystems (crypto, audit, and key handling).

While the kernel hackers were generally agreeable to the idea of sanitizing memory, there were a number of objections to Highsmith's first attempt. A trivial one, which was fixed in later patches, was a Signed-off-by line that didn't give his full name (just "Larry H."). As the PaX project is developed by the pseudonymous "PaX Team"—thus not able to fulfill the requirements for a kernel sign off—several folks were quick to point out that a full name was required. More substantive objections were heard about using up a scarce resource in the form of a page flag. Alan Cox pointed out that a virtual memory area (VMA) flag would work as well, or that places in the patch that set the flag could just clear the memory instead:

[...] page flags are very precious, very few and if we run out will cost us a vast amount of extra kernel memory. If page flags were free the question would be trivial - but they are not. Thus it is worth asking whether its actually harder to remember to zap the buffer or set and clear the flag correctly.

There was a bit of a digression into the security issues surrounding suspend and hibernate, with Highsmith claiming that security conscious users just disabled that functionality altogether. Cox and Pavel Machek disagreed, noting the ability to encrypt the images that get written to disk with today's hibernate code. Cox was also concerned that marking things as sensitive makes an attacker's job easier:

If you've got a rogue module you already lost, except that by marking what is sensitive you made the bad guys job easier. Bit like the way people visually overlay maps and overhead shots from multiple sources and the 'scrubbed' secret locations stand out for you and are easier to find than if they were left.

In the end, any memory the kernel handles is potentially sensitive. Some applications—notably GPG—take great pains to try to ensure that their memory is not swapped and is cleared of keys and other sensitive data when they are no longer needed. As Ingo Molnar put it: "The whole kernel contains data that 'should not be leaked'." This led to a new approach: for users who want sanitized pages—based on the sanitize_mem boot time parameter—simply clear all pages when they are freed. A much smaller patch that implemented that scheme was then posted by Highsmith.
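The core of that approach is simple enough to sketch. What follows is not Highsmith's actual patch; only the sanitize_mem parameter name comes from the discussion, while the hook name and its placement in the free path are assumptions made for illustration:

    #include <linux/init.h>
    #include <linux/mm.h>
    #include <linux/highmem.h>

    /* Illustrative sketch only: the real patch modifies the page
     * allocator's free path directly. */
    static int sanitize_mem __read_mostly;

    static int __init sanitize_mem_setup(char *str)
    {
            sanitize_mem = 1;       /* booting with "sanitize_mem" enables clearing */
            return 1;
    }
    __setup("sanitize_mem", sanitize_mem_setup);

    /* Called for every page about to be returned to the free lists. */
    static inline void sanitize_free_page(struct page *page)
    {
            if (sanitize_mem)
                    clear_highpage(page);   /* zero the page, highmem-safe */
    }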

In addition, there are kernel allocations that are for objects smaller than a page which could contain sensitive data. Highsmith has also submitted changes to kfree() and kmem_cache_free() that would clear these objects as they are freed. In the end, with both of these patches applied in a kernel with sanitize_mem enabled, all free kernel memory will be cleared. But, of course, as several folks pointed out, in many cases the memory of interest will still be in use.

Certainly a kernel with sanitized memory is more resistant to leaking memory contents, but depending on the threat one is trying to defend against, it may not be enough. The physical attacks against memory contents (i.e. "cold boot") are still likely to be effective—though free memory won't be recoverable—and other kinds of bugs could still leak memory in use. Highsmith presented an analysis of kernel information leaks, which was partially based on this interesting list of CVEs and git commits that fixed them. In it, there were a half-dozen examples of information leaks that would have been prevented by his changes.

No further objections have been noted, and the patches are not terribly intrusive, so it would seem there is some chance they might make their way into 2.6.31.

Comments (13 posted)

Brief items

Walsh: Introducing the SELinux Sandbox

Dan Walsh and Eric Paris have been working on an SELinux "sandbox" which Walsh describes on his weblog. The basic idea is to use SELinux to restrict the kinds of actions a user application can perform. This would allow users to run untrusted programs or handle untrusted input in a more secure manner. "The discussions brought up an old Bug report of [mine] about writing policy for the 'little things'. SELinux does a great job of confining System Services, but what about applications executed by users. The bug report talked about confining grep, awk, ls ... The idea was couldn't we stop the grep or the mv command from suddenly opening up a network connection and copying off my /etc/shadow file to parts unknown." Paris also posted an introduction to the sandbox on linux-kernel.

Comments (84 posted)

New vulnerabilities

apache: local privilege escalation

Package(s): httpd apache    CVE #(s): CVE-2009-1195
Created: May 27, 2009    Updated: September 14, 2010
Description: Apache has a flaw in its handling of the Options and AllowOverride directives. In certain specific configurations, local users may be allowed to execute commands from server-side-include scripts despite configuration to the contrary.
Alerts:
rPath rPSA-2010-0056-1 httpd 2010-09-13
Mandriva MDVSA-2009:323 apache 2009-12-07
Fedora FEDORA-2009-8812 httpd 2009-08-20
Slackware SSA:2009-214-01 httpd 2009-08-03
Red Hat RHSA-2009:1156-01 httpd 2009-07-14
Gentoo 200907-04 apache 2009-07-12
Mandriva MDVSA-2009:124-1 apache 2009-07-08
Debian DSA-1816-1 apache2 2009-06-16
SuSE SUSE-SA:2009:050 apache2,libapr1 2009-10-26
Ubuntu USN-787-1 apache2 2009-06-12
rPath rPSA-2009-0142-1 httpd 2009-11-12
rPath rPSA-2009-0142-2 httpd 2009-11-12
Mandriva MDVSA-2009:124 apache 2009-05-31
CentOS CESA-2009:1075 httpd 2009-05-28
Red Hat RHSA-2009:1075-01 httpd 2009-05-27

Comments (none posted)

cscope: arbitrary code execution

Package(s): cscope    CVE #(s): CVE-2009-0148
Created: May 25, 2009    Updated: June 19, 2009
Description:

From the Debian advisory:

Matt Murphy discovered that cscope, a source code browsing tool, does not verify the length of file names sourced in include statements, which may potentially lead to the execution of arbitrary code through specially crafted source code files.

Alerts:
CentOS CESA-2009:1102 cscope 2009-06-19
CentOS CESA-2009:1101 cscope 2009-06-16
Red Hat RHSA-2009:1102-01 cscope 2009-06-15
Red Hat RHSA-2009:1101-01 cscope 2009-06-15
Gentoo 200905-02 cscope 2009-05-24
Debian DSA-1806-1 cscope 2009-05-24

Comments (none posted)

cscope: arbitrary code execution

Package(s): cscope    CVE #(s): CVE-2009-1577
Created: May 25, 2009    Updated: June 16, 2009
Description:

From the Gentoo advisory:

Multiple stack-based buffer overflows were reported in the putstring function when processing an overly long function name or symbol in a source code file (CVE-2009-1577).

Alerts:
CentOS CESA-2009:1101 cscope 2009-06-16
Red Hat RHSA-2009:1101-01 cscope 2009-06-15
Gentoo 200905-02 cscope 2009-05-24

Comments (none posted)

Jetty: directory traversal, cross-site scripting

Package(s): jetty    CVE #(s): CVE-2009-1523 CVE-2009-1524
Created: May 26, 2009    Updated: November 24, 2009
Description: From the CVE entries:

Directory traversal vulnerability in the HTTP server in Mort Bay Jetty before 6.1.17, and 7.0.0.M2 and earlier 7.x versions, allows remote attackers to access arbitrary files via directory traversal sequences in the URI. (CVE-2009-1523)

Cross-site scripting (XSS) vulnerability in Mort Bay Jetty before 6.1.17 allows remote attackers to inject arbitrary web script or HTML via a directory listing request containing a ; (semicolon) character. (CVE-2009-1524)

Alerts:
Mandriva MDVSA-2009:291 jetty5 2009-10-29
SuSE SUSE-SR:2009:019 cups, jetty5, libqt4/dbus-1-qt, opera, puretls/jessie, kdegraphics3-pdf, qemu 2009-11-24
Fedora FEDORA-2009-5509 jetty 2009-05-26
Fedora FEDORA-2009-5513 jetty 2009-05-26
Fedora FEDORA-2009-5500 jetty 2009-05-26

Comments (none posted)

openssl: multiple vulnerabilities

Package(s): openssl    CVE #(s): CVE-2009-1377 CVE-2009-1378
Created: May 21, 2009    Updated: March 2, 2010
Description: Openssl has two vulnerabilities, from the Mandriva alert:

The dtls1_buffer_record function in ssl/d1_pkt.c in OpenSSL 0.9.8k and earlier 0.9.8 versions allows remote attackers to cause a denial of service (memory consumption) via a large series of future epoch DTLS records that are buffered in a queue, aka DTLS record buffer limitation bug. (CVE-2009-1377)

Multiple memory leaks in the dtls1_process_out_of_seq_message function in ssl/d1_both.c in OpenSSL 0.9.8k and earlier 0.9.8 versions allow remote attackers to cause a denial of service (memory consumption) via DTLS records that (1) are duplicates or (2) have sequence numbers much greater than current sequence numbers, aka DTLS fragment handling memory leak. (CVE-2009-1378)

Alerts:
Slackware SSA:2010-060-02 openssl 2010-03-02
Mandriva MDVSA-2009:310 openssl 2009-12-03
Gentoo 200912-01 openssl 2009-12-01
Debian DSA-1888-1 openssl 2009-09-15
CentOS CESA-2009:1335 openssl 2009-09-15
Red Hat RHSA-2009:1335-02 openssl 2009-09-02
Ubuntu USN-792-1 openssl 2009-06-25
Fedora FEDORA-2009-5423 openssl 2009-05-25
Fedora FEDORA-2009-5412 openssl 2009-05-25
Fedora FEDORA-2009-5452 openssl 2009-05-25
SuSE SUSE-SR:2009:011 java, realplayer, acroread, apache2-mod_security2, cyrus-sasl, wireshark, ganglia-monitor-core, ghostscript-devel, libwmf, libxine1, net-snmp, ntp, openssl 2009-06-09
Mandriva MDVSA-2009:120 openssl 2009-05-21

Comments (none posted)

pidgin: buffer/integer overflows

Package(s): pidgin    CVE #(s): CVE-2009-1373 CVE-2009-1376
Created: May 22, 2009    Updated: January 18, 2010
Description: From the Red Hat advisory:

A buffer overflow flaw was found in the way Pidgin initiates file transfers when using the Extensible Messaging and Presence Protocol (XMPP). If a Pidgin client initiates a file transfer, and the remote target sends a malformed response, it could cause Pidgin to crash or, potentially, execute arbitrary code with the permissions of the user running Pidgin. This flaw only affects accounts using XMPP, such as Jabber and Google Talk. (CVE-2009-1373)

It was discovered that on 32-bit platforms, the Red Hat Security Advisory RHSA-2008:0584 provided an incomplete fix for the integer overflow flaw affecting Pidgin's MSN protocol handler. If a Pidgin client receives a specially-crafted MSN message, it may be possible to execute arbitrary code with the permissions of the user running Pidgin. (CVE-2009-1376)

Alerts:
Ubuntu USN-886-1 pidgin 2010-01-18
Mandriva MDVSA-2009:321 pidgin 2009-12-06
Mandriva MDVSA-2009:230 pidgin 2009-09-11
Debian DSA-1870-1 pidgin 2009-08-19
SuSE SUSE-SR:2009:013 memcached, libtiff/libtiff3, nagios, libsndfile, gaim/finch, open-, strong, freeswan, libapr-util1, websphere-as_ce, libxml2 2009-08-11
Mandriva MDVSA-2009:173 pidgin 2009-07-29
Gentoo 200910-02 pidgin 2009-10-22
Mandriva MDVSA-2009:147 pidgin 2009-06-30
Mandriva MDVSA-2009:140 gaim 2009-06-25
Ubuntu USN-781-2 gaim 2009-06-03
Ubuntu USN-781-1 pidgin 2009-06-03
Fedora FEDORA-2009-5583 pidgin 2009-05-28
Fedora FEDORA-2009-5597 pidgin 2009-05-28
Fedora FEDORA-2009-5552 pidgin 2009-05-28
Slackware SSA:2009-146-01 pidgin 2009-05-27
Gentoo 200905-07 pidgin 2009-05-25
Debian DSA-1805-1 pidgin 2009-05-22
CentOS CESA-2009:1060 pidgin 2009-05-22
CentOS CESA-2009:1059 pidgin 2009-05-22
Red Hat RHSA-2009:1060-02 pidgin 2009-05-22
Red Hat RHSA-2009:1059-02 pidgin 2009-05-22

Comments (none posted)

pidgin: data corruption

Package(s): pidgin    CVE #(s): CVE-2009-1374 CVE-2009-1375
Created: May 22, 2009    Updated: December 7, 2009
Description: From the Red Hat advisory:

A denial of service flaw was found in Pidgin's QQ protocol decryption handler. When the QQ protocol decrypts packet information, heap data can be overwritten, possibly causing Pidgin to crash. (CVE-2009-1374)

A flaw was found in the way Pidgin's PurpleCircBuffer object is expanded. If the buffer is full when more data arrives, the data stored in this buffer becomes corrupted. This corrupted data could result in confusing or misleading data being presented to the user, or possibly crash Pidgin. (CVE-2009-1375)

Alerts:
Mandriva MDVSA-2009:321 pidgin 2009-12-06
SuSE SUSE-SR:2009:013 memcached, libtiff/libtiff3, nagios, libsndfile, gaim/finch, open-, strong, freeswan, libapr-util1, websphere-as_ce, libxml2 2009-08-11
Mandriva MDVSA-2009:173 pidgin 2009-07-29
Mandriva MDVSA-2009:147 pidgin 2009-06-30
Ubuntu USN-781-1 pidgin 2009-06-03
Fedora FEDORA-2009-5583 pidgin 2009-05-28
Fedora FEDORA-2009-5597 pidgin 2009-05-28
Fedora FEDORA-2009-5552 pidgin 2009-05-28
Slackware SSA:2009-146-01 pidgin 2009-05-27
Gentoo 200905-07 pidgin 2009-05-25
Debian DSA-1805-1 pidgin 2009-05-22
CentOS CESA-2009:1060 pidgin 2009-05-22
Red Hat RHSA-2009:1060-02 pidgin 2009-05-22

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current 2.6 development kernel is 2.6.30-rc7, released on May 23. "So go wild. I suspect I'll do an -rc8, but we're definitely getting closer to release-time - it would be good to get as much testing as possible, and it should generally be pretty safe to try this all out." The long-format changelog has the details.

The current stable 2.6 kernel remains 2.6.29.4; there have been no stable releases over the last week.

Comments (none posted)

Kernel development news

Quotes of the week

Interesting, how telling somebody that they need to learn C is considered an unacceptable thing to do. Hostile to newbies, or some such. Introducing more magic that has to be learnt if one wants to read the kernel source, OTOH, is just fine...
-- Al Viro

Sorry but are you really suggesting every program in the world that uses write() anywhere should put it into a loop? That seems just like really bad API design to me, requiring such contortions in a fundamental system call just to work around kernel deficiencies.

I can just imagine the programmers putting nasty comments about the Linux kernel on top of those loops and they would be fully deserved.

-- Andi Kleen discovers POSIX

Hey, don't look at me - blame Brian Kernighan or George Bush or someone.
-- Andrew Morton disclaims responsibility

Comments (5 posted)

In brief

By Jonathan Corbet
May 27, 2009
Union directories. While a number of developers are working on the full union mount problem, Miklos Szeredi has taken a simpler approach: union directories. Only top-level directory unification is provided, and changes can only be made to the top-level filesystem. That eliminates the need for a lot of complex code doing directory copy-up, whiteouts, and such, but also reduces the functionality significantly.

Optimizing writeback timers: on a normal Linux system, the pdflush process wakes up every five seconds to force dirty page cache pages back to their backing store on disk. This wakeup happens whether or not there is anything needing to be written back. Unnecessary wakeups are increasingly unwelcome, especially on systems where power consumption matters, so it would be nice to let pdflush sleep when there is nothing for it to do.

Artem Bityutskiy has put together a patch set to do just that. It changes the filesystem API to make it easier for the core VFS to know when a specific filesystem has dirty data. That information is then used to decide whether pdflush needs to be roused from its slumber. The idea seems good, but there's one little problem: this work conflicts with the per-BDI flusher threads patches by Jens Axboe. Jens's patches get rid of the pdflush timer and make a lot of other changes, so these two projects do not currently play well together. So Artem is headed back to the drawing board to base his work on top of Jens's patches instead of the mainline.

recvmmsg(). Arnaldo Carvalho de Melo has proposed a new system call for the socket API:

    struct mmsghdr {
	struct msghdr	msg_hdr;
	unsigned	msg_len;
    };

    ssize_t recvmmsg(int socket, struct mmsghdr *mmsg, int vlen, int flags);

The difference between this system call and recvmsg() is that it is able to accept multiple messages with a single call. That, in turn, reduces system call overhead in high-bandwidth network applications. The comments in the patch suggest that sendmmsg() is in the plans, but no implementation has been posted yet.

There was a suggestion that this functionality could be obtained by extending recvmsg() with a new message flag, rather than adding a new system call. But, as David Miller pointed out, that won't work. The kernel currently ignores unrecognized flags; that will make it impossible for user space to determine whether a specific kernel supports multiple-message receives or not. So the new system call is probably how this feature will be added.
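To see why batching matters, consider this hypothetical user-space sketch built only on the prototype shown above. There is no glibc wrapper or assigned system call number yet, so the structure and function declaration below merely restate the proposed interface, and the return value is assumed to be the number of messages received:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Proposed interface, restated from the patch. */
    struct mmsghdr {
            struct msghdr   msg_hdr;
            unsigned        msg_len;        /* bytes received in this message */
    };
    ssize_t recvmmsg(int socket, struct mmsghdr *mmsg, int vlen, int flags);

    #define NR_MSGS 8
    #define BUF_LEN 1500

    static void drain_socket(int sock)
    {
            static char bufs[NR_MSGS][BUF_LEN];
            struct iovec iov[NR_MSGS];
            struct mmsghdr msgs[NR_MSGS];
            int i, n;

            memset(msgs, 0, sizeof(msgs));
            for (i = 0; i < NR_MSGS; i++) {
                    iov[i].iov_base = bufs[i];
                    iov[i].iov_len = BUF_LEN;
                    msgs[i].msg_hdr.msg_iov = &iov[i];
                    msgs[i].msg_hdr.msg_iovlen = 1;
            }

            /* One trip into the kernel can return up to NR_MSGS datagrams,
             * where a recvmsg() loop would need NR_MSGS system calls. */
            n = recvmmsg(sock, msgs, NR_MSGS, 0);
            for (i = 0; i < n; i++)
                    printf("message %d: %u bytes\n", i, msgs[i].msg_len);
    }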

Comments (6 posted)

Developer statistics for 2.6.30

By Jonathan Corbet
May 27, 2009
As the 2.6.30 development cycle heads toward a close, it is natural to look back at what has been merged and where it came from. So here is LWN's traditional look at who wrote the code which went into the mainline this time around.

Once again, 2.6.30 was a large development cycle; it saw the incorporation (through just after 2.6.30-rc7) of 11,733 non-merge changesets from 1125 developers. The number of changesets exceeds 2.6.29, but the number of developers falls just short of the 1166 seen last time around. Those developers added 1.14 million lines of code this time around, while taking out 513,000, for a net growth of 624,000 lines.

The individual developer statistics for 2.6.30 look like:

Most active 2.6.30 developers

By changesets:

    Ingo Molnar                    324    2.8%
    Bill Pemberton                 227    1.9%
    Stephen Hemminger              204    1.7%
    Hans Verkuil                   199    1.7%
    Takashi Iwai                   188    1.6%
    Bartlomiej Zolnierkiewicz      186    1.6%
    Steven Rostedt                 179    1.5%
    Greg Kroah-Hartman             150    1.3%
    Jeremy Fitzhardinge            125    1.1%
    Mark Brown                     107    0.9%
    Jaswinder Singh Rajput         105    0.9%
    Rusty Russell                  100    0.9%
    Tejun Heo                       98    0.8%
    Johannes Berg                   98    0.8%
    Hannes Eder                     88    0.8%
    Michal Simek                    85    0.7%
    Luis R. Rodriguez               85    0.7%
    Sujith                          85    0.7%
    David Howells                   80    0.7%
    Yinghai Lu                      78    0.7%

By changed lines:

    Greg Kroah-Hartman          120353    9.0%
    ADDI-DATA GmbH               43420    3.3%
    Mithlesh Thukral             42424    3.2%
    Alex Deucher                 26576    2.0%
    David Schleef                25905    1.9%
    David Woodhouse              24636    1.8%
    Ramkrishna Vepa              23495    1.8%
    Lior Dotan                   22506    1.7%
    Eric Moore                   22266    1.7%
    Eilon Greenstein             18399    1.4%
    Jaswinder Singh Rajput       18168    1.4%
    Hans Verkuil                 18048    1.4%
    David Howells                17941    1.3%
    Andy Grover                  16355    1.2%
    Michal Simek                 15827    1.2%
    Sri Deevi                    15514    1.2%
    Frank Mori Hess              15450    1.2%
    Ben Hutchings                15031    1.1%
    Ingo Molnar                  13876    1.0%
    Bill Pemberton               13817    1.0%

On the changesets side, Ingo Molnar is at the top of the list this time around; as usual, he created a vast number of patches - about five per day - in the x86 architecture code, ftrace, and beyond. Bill Pemberton is perhaps better known as the maintainer of the Elm mail client; he did a lot of cleanup work with the COMEDI drivers in the -staging tree. The bulk of Stephen Hemminger's work involved converting network drivers to the new net_device_ops API. Hans Verkuil continues to improve the Video4Linux2 framework and associated drivers, and Takashi Iwai continues to generate a lot of patches as the ALSA maintainer.

Linus kicked off the 2.6.30 development cycle by noting that about one third of the changes in 2.6.30-rc1 were "crap." So, unsurprisingly, the top three entries in the "by changed lines" column all got there through the addition of -staging drivers. Alex Deucher added Radeon R6xx/R7xx support; many of his "changed lines" were the associated microcode firmware. And David Schleef added another set of drivers to the -staging tree.

Contributions to 2.6.30 could be traced back to some 190 employers. Looking at the most-active employer information, we see:

Most active 2.6.30 employers

By changesets:

    (None)                        1970   16.8%
    Red Hat                       1305   11.1%
    (Unknown)                     1184   10.1%
    Intel                          855    7.3%
    Novell                         832    7.1%
    IBM                            630    5.4%
    (Consultant)                   293    2.5%
    Atheros Communications         262    2.2%
    Oracle                         252    2.1%
    University of Virginia         227    1.9%
    Fujitsu                        217    1.8%
    Vyatta                         204    1.7%
    Renesas Technology             152    1.3%
    NTT                            121    1.0%
    MontaVista                     115    1.0%
    HP                             107    0.9%
    Wolfson Microelectronics       105    0.9%
    (Academia)                     102    0.9%
    Nokia                           98    0.8%
    XenSource                       91    0.8%

By lines changed:

    (Unknown)                   181413   13.6%
    Novell                      164229   12.3%
    (None)                      118095    8.9%
    Intel                        86060    6.5%
    Red Hat                      73954    5.5%
    LinSysSoft Technologies      64798    4.9%
    ADDI-DATA GmbH               43420    3.3%
    SofaWare                     39245    2.9%
    Broadcom                     31956    2.4%
    AMD                          28364    2.1%
    Entropy Wave                 25905    1.9%
    IBM                          25702    1.9%
    Oracle                       25588    1.9%
    NTT                          25235    1.9%
    Neterion                     23495    1.8%
    LSI Logic                    22304    1.7%
    Atheros Communications       21627    1.6%
    (Consultant)                 19209    1.4%
    Freescale                    16139    1.2%
    PetaLogix                    15846    1.2%

These numbers are somewhat similar to those seen in previous development cycles. There are a few unfamiliar companies here; they are pretty much all present as a result of contributions to -staging. It is interesting to note that Atheros and Broadcom, once known as uncooperative companies, are increasing their contributions over time.

Your editor has not looked at signoff statistics for the last few cycles. The interesting thing to be found in Signed-off-by tags is an indication of who the gatekeepers to the kernel are. Especially if one disregards signoffs by the author of each patch, what remains is (mostly) the signoffs of subsystem maintainers who approved the patches for merging. For 2.6.30, these numbers look like this:

Top non-author signoffs in 2.6.30

Individuals:

    David S. Miller               1216   12.1%
    John W. Linville               865    8.6%
    Ingo Molnar                    836    8.3%
    Greg Kroah-Hartman             797    7.9%
    Mauro Carvalho Chehab          784    7.8%
    Andrew Morton                  660    6.6%
    James Bottomley                250    2.5%
    Linus Torvalds                 219    2.2%
    Len Brown                      189    1.9%
    Takashi Iwai                   165    1.6%
    Jeff Kirsher                   145    1.4%
    Russell King                   127    1.3%
    H. Peter Anvin                 120    1.2%
    Mark Brown                     115    1.1%
    Jesse Barnes                   111    1.1%
    Benjamin Herrenschmidt         111    1.1%
    Reinette Chatre                104    1.0%
    Martin Schwidefsky              95    0.9%
    Avi Kivity                      91    0.9%
    Paul Mundt                      89    0.9%

Employers:

    Red Hat                       4264   42.4%
    Novell                        1386   13.8%
    Intel                          951    9.5%
    Google                         660    6.6%
    (None)                         408    4.1%
    IBM                            378    3.8%
    Linux Foundation               219    2.2%
    (Consultant)                   166    1.6%
    (Unknown)                      127    1.3%
    Wolfson Microelectronics       115    1.1%
    Renesas Technology              92    0.9%
    Marvell                         91    0.9%
    Atomide                         81    0.8%
    Oracle                          80    0.8%
    Astaro                          65    0.6%
    Freescale                       63    0.6%
    Cisco                           61    0.6%
    Analog Devices                  60    0.6%
    Univ. of Michigan CITI          59    0.6%
    Panasas                         58    0.6%

Signoffs have always been more concentrated than contributions in general. Still, one wonders how David Miller manages to approve a solid twenty patches every day. On the employer side, things are more concentrated than ever; over half of the patches going into the kernel go through the hands of a developer at Red Hat or Novell. Developers, it seems, work for a great many companies, but subsystem maintainers gravitate toward a small handful of firms.

All told, the picture remains one of a well-oiled, fast-moving development process. We also see a picture of a -staging tree which is growing at a tremendous rate; your editor is tempted to exclude -staging patches from future reports if the rate does not slow somewhat. Even without -staging, though, a lot of work is being done on the kernel, with the participation of a large group of developers, and it doesn't look like it will be slowing down anytime soon.

Postscript: Jan Engelhardt sent your editor a pointer to a short script which, through use of the git blame command, tallies up the "ownership" of every line in the kernel. The top results for 2.6.30-rc7 look like this:

Who last touched kernel code lines

    Lines        Pct      Who
    4063723    35.17%     Linus Torvalds
    464021      4.02%     Greg Kroah-Hartman
    94200       0.82%     David Howells
    86031       0.74%     David S. Miller
    82608       0.71%     Luis R. Rodriguez
    72200       0.62%     Bryan Wu
    70128       0.61%     Takashi Iwai
    66859       0.58%     Ralf Baechle
    55785       0.48%     Hans Verkuil
    54069       0.47%     Paul Mundt
    54007       0.47%     Kumar Gala
    53288       0.46%     David Brownell
    51640       0.45%     Russell King
    50611       0.44%     Paul Mackerras
    49499       0.43%     Andrew Victor
    49347       0.43%     Mauro Carvalho Chehab
    49256       0.43%     Alan Cox
    47305       0.41%     Mikael Starvik
    47040       0.41%     Ben Dooks
    44307       0.38%     Benjamin Herrenschmidt

Linus shows a high ownership because he was the initial committer at the beginning of the git era. To a rough approximation, one can conclude that approximately one third of the code in the kernel has not been touched since that time. There are other interesting things which can be done with line-level statistics; your editor plans to explore this idea some in the future.

Comments (26 posted)

Compcache: in-memory compressed swapping

May 26, 2009

This article was contributed by Nitin Gupta

The idea of memory compression—compress relatively unused pages and store them in memory itself—is simple and has been around for a long time. Compression, through the elimination of expensive disk I/O, is far faster than swapping those pages to secondary storage. When a page is needed again, it is decompressed and given back, which is, again, much faster than going to swap.

An implementation of this idea on Linux is currently under development as the compcache project. It creates a virtual block device (called ramzswap) which acts as a swap disk. Pages swapped to this disk are compressed and stored in memory itself. The project home contains use cases, performance numbers, and other related bits. The whole aim of the project is not just performance — on swapless setups, it allows running applications that would otherwise simply fail due to lack of memory. For example, Edubuntu included compcache to lower the RAM requirements of its installer.

The performance page on the project wiki shows numbers for configurations that closely match netbooks, thin clients, and embedded devices. These initial results look promising. For example, in the benchmark for thin clients, ramzswap gives nearly the same effect as doubling the memory. Another benchmark shows that the average time required to complete swap requests is reduced drastically with ramzswap. With a swap partition located on a 10000 RPM disk, the average times for swap read and write requests were found to be 168ms and 355ms, respectively; with ramzswap, the corresponding numbers were a mere 12µs and 7µs — figures which include the time for checking zero-filled pages and compressing/decompressing all non-zero pages.

The approach of using a virtual block device is a major simplification over earlier attempts. The previous implementation required changes to the swap write path, page fault handler, and page cache lookup functions (find_get_page() and friends). Those patches did not gain widespread acceptance due to their intrusive nature. The new approach is far less intrusive, but at a cost: compcache has lost the ability to compress page cache (filesystem backed) pages. It can now compress swap cache (anonymous) pages only. At the same time, this simplicity and non-intrusiveness got it included in Ubuntu, ALT Linux, LTSP (Linux Terminal Server Project) and maybe other places as well.

It should be noted that, when used at the hypervisor level, compcache can compress any part of the guest memory and for any kind of guest OS (Linux, Windows etc) — this should allow running more virtual machines for a given amount of total host memory. For example, in KVM the guest physical memory is simply anonymous memory for the host (Linux kernel in this case). Also, with the recent MMU notifier support included in the Linux kernel, nearly the entire guest physical memory is now swappable [PDF].

Implementation

All of the individual components are separate kernel modules:

  • LZO compressor: lzo_compress.ko, lzo_decompress.ko (already in mainline)
  • xvMalloc memory allocator: xvmalloc.ko
  • compcache block device driver: ramzswap.ko
Once these modules are loaded, one can just enable the ramzswap swap device:
    swapon /dev/ramzswap0
Note that ramzswap cannot be used as a generic block device. It can only handle page-aligned I/O, which is sufficient for use as a swap device. No use case has yet come to light that would justify the effort to make it a generic compressed read-write block device. Also, to minimize block layer overhead, ramzswap uses the "no queue" mode of operation. Thus, it accepts requests directly from the block layer and avoids all overhead due to request queue logic.
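As a rough illustration of what "no queue" mode means in practice, the sketch below shows the usual pattern for a bio-based block driver of this era: allocate a bare queue and install a make_request function, so bios arrive directly without ever being merged or sorted by an I/O scheduler. The names are simplified stand-ins, not the actual ramzswap code:

    #include <linux/blkdev.h>
    #include <linux/genhd.h>
    #include <linux/bio.h>

    static struct request_queue *rzs_queue;
    static struct gendisk *rzs_disk;

    /* bios arrive here directly; no struct request, no I/O scheduler */
    static int rzs_make_request(struct request_queue *q, struct bio *bio)
    {
            /* compress (write) or decompress (read) the page in this bio,
             * then complete it immediately */
            bio_endio(bio, 0);
            return 0;
    }

    static int __init rzs_init(void)
    {
            rzs_queue = blk_alloc_queue(GFP_KERNEL);
            if (!rzs_queue)
                    return -ENOMEM;
            blk_queue_make_request(rzs_queue, rzs_make_request);

            rzs_disk = alloc_disk(1);
            if (!rzs_disk)
                    return -ENOMEM;         /* (queue cleanup omitted) */
            rzs_disk->queue = rzs_queue;
            /* major number, name, fops, and capacity setup omitted */
            add_disk(rzs_disk);
            return 0;
    }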

The ramzswap module accepts parameters for "disk" size, memory limit, and backing swap partition. The optional backing swap partition parameter is the physical disk swap partition where ramzswap will forward read/write requests for pages that compress to a size larger than PAGE_SIZE/2 — so we keep only highly compressible pages in memory. Additionally, purely zero filled pages are checked and no memory is allocated for such pages. For "generic" desktop workloads (Firefox, email client, editor, media player etc.), we typically see 4000-5000 zero filled pages.
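The zero-page shortcut itself is straightforward; a sketch of the check (the helper name is illustrative) looks like this:

    #include <linux/mm.h>

    /* Return non-zero if the page at "ptr" contains nothing but zeroes.
     * For such pages ramzswap can simply record a "zero page" flag in
     * its per-slot metadata and allocate no memory at all. */
    static int page_is_zero_filled(void *ptr)
    {
            unsigned long *page = ptr;
            unsigned int pos;

            for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++) {
                    if (page[pos])
                            return 0;
            }
            return 1;
    }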

Memory management

One of the biggest challenges in this project is managing variable-sized compressed chunks. For this, ramzswap uses a memory allocator called xvmalloc, developed specifically for this project. It offers O(1) malloc/free, very low fragmentation (within 10% of ideal in all tests), and can use highmem (useful on 32-bit systems with more than 1G of memory). It exports a non-standard allocator interface:

    struct xv_pool *xv_create_pool(void);
    void xv_destroy_pool(struct xv_pool *pool);

    int xv_malloc(struct xv_pool *pool, u32 size, u32 *pagenum, u32 *offset, gfp_t flags);
    void xv_free(struct xv_pool *pool, u32 pagenum, u32 offset);

xv_malloc() returns a <pagenum, offset> pair. It is then up to the caller to map this page (with kmap()) to get a valid kernel-space pointer.
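A hypothetical caller storing one compressed page might therefore look like the sketch below. It assumes that pagenum is a page frame number (so pfn_to_page() recovers the struct page) and that a zero return from xv_malloc() means success; neither detail is spelled out by the prototypes above:

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Store "clen" bytes of compressed data from "buf" into the pool. */
    static int store_compressed(struct xv_pool *pool, void *buf, u32 clen)
    {
            u32 pagenum, offset;
            void *dst;

            if (xv_malloc(pool, clen, &pagenum, &offset, GFP_NOIO))
                    return -ENOMEM;         /* no room for this object */

            dst = (char *)kmap(pfn_to_page(pagenum)) + offset;  /* map the page */
            memcpy(dst, buf, clen);
            kunmap(pfn_to_page(pagenum));
            return 0;
    }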

The justification for the use of a custom memory allocator was provided when the compcache patches were posted to linux-kernel. Both the SLOB and SLUB allocators were found to be unsuitable for use in this project. SLOB targets embedded devices and claims to have good space efficiency. However, it was found to have some major problems: It has O(n) alloc/free behavior and can lead to large amounts of wasted space as detailed in this LKML post.

SLUB, on the other hand, has a different set of problems. The first is the usual fragmentation issue: the data presented here shows that kmalloc uses ~43% more memory than xvmalloc. Another problem is that it depends on allocating higher-order pages to reduce fragmentation. This is not acceptable for ramzswap, which is used in tight-memory situations where higher-order allocations are almost guaranteed to fail. The xvmalloc allocator, by contrast, always allocates zero-order pages when it needs to expand a memory pool.

Also, both SLUB and SLOB are limited to allocating from low memory. This particular limitation is applicable only for 32-bit system with more than 1G of memory. On such systems, neither allocator is able to allocate from the high memory zone. This restriction is not acceptable for the compcache project. Users with such configurations reported memory allocation failures from ramzswap (before xvmalloc was developed) even when plenty of high-memory was available. The xvmalloc allocator, on the other hand, is able to allocate from the high memory region.

Considering the above points, xvmalloc could potentially replace the SLOB allocator. However, that would involve a lot of additional work, since xvmalloc provides a non-standard malloc/free interface. Also, xvmalloc is not scalable in its current state (neither is SLOB) and hence cannot be considered a replacement for SLUB.

The memory needed for compressed pages is not pre-allocated; it grows and shrinks on demand. On initialization, ramzswap creates an xvmalloc memory pool. When the pool does not have enough memory to satisfy an allocation request, it grows by allocating single (0-order) pages from the kernel page allocator. When an object is freed, xvmalloc merges it with adjacent free blocks in the same page. If the resulting free block is PAGE_SIZE in size (i.e. the page no longer contains any objects), the page is released back to the kernel.

This allocation and freeing of objects can lead to fragmentation of the ramzswap memory. Consider the case where a lot of objects are freed in a short period of time and, subsequently, there are very few swap write requests. In that case, the xvmalloc pool can end up with a lot of partially filled pages, each containing only a small number of live objects. To handle this case, some sort of xvmalloc memory defragmentation scheme would need to be implemented; this could be done by relocating objects from almost-empty pages to other pages in the xvmalloc pool. It should be noted, though, that in practice, after months of use on several desktop machines, waste due to xvmalloc memory fragmentation never exceeded 7%.

Swap limitations and tools

Being a block device, ramzswap can never know when a compressed page is no longer required (say, when the owning process has exited). Such stale compressed pages simply waste memory. With the recent "swap discard" support, though, this is no longer as much of a problem. Swap discard sends a BIO_RW_DISCARD bio request when it finds a free swap cluster during swap allocation. Although compcache does not get the callback immediately after a page becomes stale, that is still better than keeping those pages in memory until they are overwritten by other pages. Support for the swap discard mechanism was added in compcache-0.5.

In general, the discard request comes a long time after a page has become stale. Consider a case where a memory-intensive workload terminates and there is no further swapping activity: ramzswap will end up holding lots of stale pages, and no discard requests will arrive since no further swap allocations are being done. Once swapping activity starts again, discard requests should be received for some of these stale pages. So, to make ramzswap more effective, changes are required in the kernel (not yet done) to scan the swap bitmap more aggressively for freed swap clusters, at least in the case of RAM-backed swap devices. An adaptive compressed cache resizing policy would also be useful: monitor accesses to the compressed cache and move relatively unused pages to a physical swap device. Currently, ramzswap can simply forward incompressible pages to a backing swap disk, but it cannot swap out memory allocated by xvmalloc.

Another interesting sub-project is the SwapReplay infrastructure. This tool is meant to make it easy to test memory allocator behavior under actual swapping conditions. It consists of a kernel module and a set of userspace tools to replay swap events in user space. The kernel module stacks a pseudo block device (/dev/sr_relay) over a physical swap device. When the kernel swaps over this pseudo device, it dumps a <sector number, R/W bit, compress length> tuple to user space and then forwards the I/O request to the backing swap device (provided as a swap_replay module parameter). This data can then be parsed using a parser library which provides a callback interface for swap events. Clients of this library can take any action in response to these events: show compressed-length histograms, simulate ramzswap behavior, and so on. No kernel patching is required for this functionality.
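A client of that library might look roughly like the sketch below; the structure and callback names are invented here for illustration, since the article does not show the parser library's actual interface:

    /* one replayed swap event: <sector number, R/W bit, compressed length> */
    struct swap_event {
        unsigned long long sector;
        int write;          /* 1 for a write, 0 for a read */
        unsigned int clen;  /* compressed length of the page */
    };

    /* callback invoked by the (hypothetical) parser for every event */
    static void histogram_cb(const struct swap_event *ev, void *ctx)
    {
        unsigned long *buckets = ctx;

        if (ev->write)
            buckets[ev->clen / 512]++;   /* 512-byte histogram buckets */
    }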

The swap replay infrastructure has been very useful throughout ramzswap development. The ability to replay swap traces allows for easy and consistent simulation of any workload without the need to set it up and run it again and again. So, if a user is suffering from high memory fragmentation under some workload, he can simply send me a swap trace for that workload and I have all the data needed to reproduce the condition on my side, without having to set up the same workload.

Clients for the parser library were written to simulate ramzswap behavior over traces from a variety of workloads, leading to easier evaluation of different memory allocators and, ultimately, to the development and enhancement of the xvmalloc allocator. In the future, it will also help in testing a variety of eviction policies to support adaptive compressed cache resizing.

Conclusion

The compcache project is currently under active development; some of the additional features planned are: adaptive compressed cache resizing, swapping of xvmalloc memory to a physical swap disk, memory defragmentation by relocating compressed chunks within memory, and compressed swapping to disk (4-5 pages swapped out with a single disk I/O). Later, it might be extended to compress page-cache pages too (as earlier patches did); for now, it just includes the ramzswap component to handle anonymous memory compression.

The last time the ramzswap patches were submitted for review, only LTSP performance data was provided as justification for this feature; Andrew Morton was not satisfied with that data. There is now a lot more data on the performance page of the project wiki showing performance improvements with ramzswap. Andrew also pointed out the lack of data for cases where ramzswap can cause a performance loss:

We would also be interested in seeing the performance _loss_ from these patches. There must be some cost somewhere. Find a worstish-case test case and run it and include its results in the changelog too, so we better understand the tradeoffs involved here.

The project still lacks data for such cases. That data should be available by the 2.6.32 time frame, when these patches will be posted again for possible inclusion in the mainline.

Comments (25 posted)

An updated guide to debugfs

By Jonathan Corbet
May 25, 2009
LWN covered the debugfs API back in 2004. Rather more recently, Shen Feng kindly proposed the addition of LWN's debugfs article as a file in the Documentation directory. There was only one little problem with that suggestion: as one might expect, the debugfs API has changed a little since 2004. The following is an attempt to update the original document to cover the full API as it exists in the 2.6.30 kernel.

Debugfs exists as a simple way for kernel developers to make information available to user space. Unlike /proc, which is only meant for information about a process, or sysfs, which has strict one-value-per-file rules, debugfs has no rules at all. Developers can put any information they want there. The debugfs filesystem is also intended to not serve as a stable ABI to user space; in theory, there are no stability constraints placed on files exported there. The real world is not always so simple, though; even debugfs interfaces are best designed with the idea that they will need to be maintained forever.

Debugfs is typically mounted with a command like:

    mount -t debugfs none /sys/kernel/debug

(Or an equivalent /etc/fstab line). There is occasional dissent on the mailing lists regarding the proper mount location for debugfs, and some documentation refers to mount points like /debug instead. For now, user-space code which uses debugfs files will be more portable if it finds the debugfs mount point in /proc/mounts.

Note that the debugfs API is exported GPL-only to modules.

Code using debugfs should include <linux/debugfs.h>. Then, the first order of business will be to create at least one directory to hold a set of debugfs files:

    struct dentry *debugfs_create_dir(const char *name, struct dentry *parent);

This call, if successful, will make a directory called name underneath the indicated parent directory. If parent is NULL, the directory will be created in the debugfs root. On success, the return value is a struct dentry pointer which can be used to create files in the directory (and to clean it up at the end). A NULL return value indicates that something went wrong. If -ENODEV is returned, that is an indication that the kernel has been built without debugfs support and none of the functions described below will work.

The most general way to create a file within a debugfs directory is with:

    struct dentry *debugfs_create_file(const char *name, mode_t mode,
				       struct dentry *parent, void *data,
				       const struct file_operations *fops);

Here, name is the name of the file to create, mode describes the access permissions the file should have, parent indicates the directory which should hold the file, data will be stored in the i_private field of the resulting inode structure, and fops is a set of file operations which implement the file's behavior. At a minimum, the read() and/or write() operations should be provided; others can be included as needed. Again, the return value will be a dentry pointer to the created file, NULL for error, or -ENODEV if debugfs support is missing.
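As a minimal sketch of how that might be used to export a read-only text file (the names here are made up for illustration):

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/init.h>
    #include <linux/module.h>

    static char demo_message[] = "hello from debugfs\n";
    static struct dentry *demo_dir;

    static ssize_t demo_read(struct file *file, char __user *buf,
                             size_t count, loff_t *ppos)
    {
        return simple_read_from_buffer(buf, count, ppos, demo_message,
                                       sizeof(demo_message) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static int __init demo_init(void)
    {
        demo_dir = debugfs_create_dir("demo", NULL);
        debugfs_create_file("message", 0444, demo_dir, NULL, &demo_fops);
        return 0;
    }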

In a number of cases, the creation of a set of file operations is not actually necessary; the debugfs code provides a number of helper functions for simple situations. Files containing a single integer value can be created with any of:

    struct dentry *debugfs_create_u8(const char *name, mode_t mode,
				     struct dentry *parent, u8 *value);
    struct dentry *debugfs_create_u16(const char *name, mode_t mode,
				      struct dentry *parent, u16 *value);
    struct dentry *debugfs_create_u32(const char *name, mode_t mode,
				      struct dentry *parent, u32 *value);
    struct dentry *debugfs_create_u64(const char *name, mode_t mode,
				      struct dentry *parent, u64 *value);

These files support both reading and writing the given value; if a specific file should not be written to, simply set the mode bits accordingly. The values in these files are in decimal; if hexadecimal is more appropriate, the following functions can be used instead:

    struct dentry *debugfs_create_x8(const char *name, mode_t mode,
				     struct dentry *parent, u8 *value);
    struct dentry *debugfs_create_x16(const char *name, mode_t mode,
				      struct dentry *parent, u16 *value);
    struct dentry *debugfs_create_x32(const char *name, mode_t mode,
				      struct dentry *parent, u32 *value);

Note that there is no debugfs_create_x64().

These functions are useful as long as the developer knows the size of the value to be exported. Some types can have different widths on different architectures, though, complicating the situation somewhat. There is a function meant to help out in one special case:

    struct dentry *debugfs_create_size_t(const char *name, mode_t mode,
				         struct dentry *parent, 
					 size_t *value);

As might be expected, this function will create a debugfs file to represent a variable of type size_t.

Boolean values can be placed in debugfs with:

    struct dentry *debugfs_create_bool(const char *name, mode_t mode,
				       struct dentry *parent, u32 *value);

A read on the resulting file will yield either Y (for non-zero values) or N, followed by a newline. If written to, it will accept either upper- or lower-case values, or 1 or 0. Any other input will be silently ignored.

Finally, a block of arbitrary binary data can be exported with:

    struct debugfs_blob_wrapper {
	void *data;
	unsigned long size;
    };

    struct dentry *debugfs_create_blob(const char *name, mode_t mode,
				       struct dentry *parent,
				       struct debugfs_blob_wrapper *blob);

A read of this file will return the data pointed to by the debugfs_blob_wrapper structure. Some drivers use "blobs" as a simple way to return several lines of (static) formatted text output. This function can be used to export binary information, but there does not appear to be any code which does so in the mainline. Note that files created with debugfs_create_blob() are read-only.
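A driver could, for example, expose a few lines of static text this way (a sketch with invented names):

    static char build_info[] = "board rev 3\nfirmware 1.2.3\n";

    static struct debugfs_blob_wrapper build_blob = {
        .data = build_info,
        .size = sizeof(build_info) - 1,
    };

    static void expose_build_info(struct dentry *dir)
    {
        /* creates a read-only file returning the text above */
        debugfs_create_blob("build_info", 0444, dir, &build_blob);
    }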

There are a couple of other directory-oriented helper functions:

    struct dentry *debugfs_rename(struct dentry *old_dir, 
    				  struct dentry *old_dentry,
		                  struct dentry *new_dir, 
				  const char *new_name);

    struct dentry *debugfs_create_symlink(const char *name, 
                                          struct dentry *parent,
				      	  const char *target);

A call to debugfs_rename() will give a new name to an existing debugfs file, possibly in a different directory. The new_name must not exist prior to the call; the return value is old_dentry with updated information. Symbolic links can be created with debugfs_create_symlink().

There is one important thing that all debugfs users must take into account: there is no automatic cleanup of any directories created in debugfs. If a module is unloaded without explicitly removing debugfs entries, the result will be a lot of stale pointers and no end of highly antisocial behavior. So all debugfs users - at least those which can be built as modules - must be prepared to remove all files and directories they create there. A file can be removed with:

    void debugfs_remove(struct dentry *dentry);

The dentry value can be NULL.

Once upon a time, debugfs users were required to remember the dentry pointer for every debugfs file they created so that they could all be cleaned up. We live in more civilized times now, though, and debugfs users can call:

    void debugfs_remove_recursive(struct dentry *dentry);

If this function is passed a pointer for the dentry corresponding to the top-level directory, the entire hierarchy below that directory will be removed.
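Putting the pieces together, the life cycle of a simple debugfs user might look like this sketch (names invented for illustration):

    #include <linux/debugfs.h>
    #include <linux/init.h>
    #include <linux/module.h>

    static struct dentry *stats_dir;
    static u32 error_count;

    static int __init stats_init(void)
    {
        stats_dir = debugfs_create_dir("demo_stats", NULL);
        if (!stats_dir)
            return -ENODEV;
        /* read-write decimal counter at <debugfs>/demo_stats/errors */
        debugfs_create_u32("errors", 0644, stats_dir, &error_count);
        return 0;
    }

    static void __exit stats_exit(void)
    {
        /* remove everything created above in a single call */
        debugfs_remove_recursive(stats_dir);
    }

    module_init(stats_init);
    module_exit(stats_exit);
    MODULE_LICENSE("GPL");  /* debugfs symbols are GPL-only exports */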

Comments (3 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 2.6.30-rc7 ?
Thomas Gleixner 2.6.29.4-rt15 ?
Thomas Gleixner 2.6.29.4-rt16 ?

Architecture-specific

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Arnaldo Carvalho de Melo New socket API: recvmmsg ?
Dmitry Eremin-Solenikov IEEE 802.15.4 implementation for Linux ?

Security-related

Virtualization and containers

Benchmarks and bugs

Page editor: Jonathan Corbet

Distributions

News and Editorials

Nexenta Core Platform 2: OpenSolaris for human beings

May 27, 2009

This article was contributed by Koen Vervloesem

Nexenta Core Platform is an operating system that ties the OpenSolaris kernel to an Ubuntu user space. The project emerged just a couple of months after Sun Microsystems released the OpenSolaris code in June 2005; it was first called Gnusolaris, a direct port of Desktop Ubuntu to the OpenSolaris kernel. The developers soon realized that supporting a desktop distribution was too difficult due to the large number of support requests and the huge list of packages to maintain, so they refocused their effort on a "core" distribution, leaving desktop integration to others.

In February 2008, these developers released Nexenta Core Platform 1.0, a server system based on OpenSolaris build 85 and Ubuntu Dapper Drake (6.06 LTS). Now the next version is available: Nexenta Core Platform 2.0, based on OpenSolaris build 104 and Ubuntu Hardy Heron (8.04 LTS). Over 13,000 packages are in the repository, which is a lot more than OpenSolaris offers and around half the number of packages in Ubuntu. Nexenta Core is also the base of NexentaStor, a NAS storage solution developed by Nexenta Systems.

Installing Nexenta

Nexenta's installer is rather basic. The disk is partitioned automatically; if the user selects two or more equal-sized disks, the installer configures them as mirrored ZFS disks. The automatic partitioning uses the whole disk for a root and a swap partition. At the moment, it is not possible to choose another partitioning layout, e.g. for a dual-boot configuration, although some users have reportedly managed it by hacking the installer script. As more flexible partitioning support is frequently requested by users, Nexenta community leader Anil Gulecha promises that this will be one of the enhancements in the next release.

After the system is installed and the user has logged in, it looks like a normal Ubuntu Server system, using around 800 MB of disk space. After an "apt-get update", the "apt-cache show" command shows over 13,000 packages available to install. Although Nexenta positions itself as a file server and NAS platform, many of these packages are graphical programs: X.Org can be installed, as well as Firefox, Xfce 4, and most of GNOME. All packages are the versions found in Ubuntu 8.04. The Nexenta project has set up its own repository on apt.nexenta.org and there are some mirrors available.

OpenSolaris goodies

The fact that Nexenta uses the OpenSolaris kernel means that hardware support depends on the drivers available in OpenSolaris. This is a domain where Linux distributions clearly have a big advantage over Nexenta. The best way to find out whether specific hardware components are supported by Nexenta is to search the Solaris Hardware Compatibility List, though there are a few drivers in Nexenta that are not part of OpenSolaris. Nexenta's stable device driver interface does arguably provide an advantage over Linux, in that Nexenta users can still run ten-year-old Solaris drivers.

Nexenta inherits other interesting features from Solaris, such as zones, an operating system-level virtualization feature resembling Linux-VServer, but with the advantage that it is not a separate patch set; it is supported in the official kernel. Applications within a zone appear to be running on a standalone system, and processes in different zones are completely isolated from each other. Although each zone appears to be a standalone operating system, in reality a single instance of the OpenSolaris kernel is running, which makes zones lightweight. Nexenta has full support for zones.

Transactional upgrades

Another powerful feature that Nexenta inherits from the OpenSolaris kernel is the ZFS filesystem; the Nexenta developers have built some innovative tools around it. With the apt-clone tool, the user can upgrade or install packages with the ability to roll back easily if the install goes wrong. For example, with a simple "apt-clone dist-upgrade" command, the system will be upgraded, but not before a checkpoint is made. If the upgrade returns an error, the user has the option to roll back the changes to the checkpoint. So, in a few seconds, the system is returned exactly to its previous state, without a reboot. These upgrades are called transactional ZFS upgrades.

Even if an upgrade is successful, the user can decide at any time to revert to the previous state. After a successful upgrade, a new checkpoint is made; the user can either reboot and activate that checkpoint from GRUB, or activate it first and then reboot. If the user later decides that he doesn't like or need the upgrade, he reboots and selects the previous checkpoint in GRUB, returning the system to its earlier state; the new checkpoint is then deleted.

This checkpoint system is not only usable for upgrades, but also for installing applications. Think about an application with rather intrusive effects on the system, such as the Apache web server. A simple command creates a checkpoint and installs Apache 2. If the user later decides that he doesn't want Apache or if he has made a mess of the configuration, he can activate the previous checkpoint and then reboot to revert the system to the state before Apache's install.

GNU and not GNU

The default behavior of Nexenta is to prefer the GNU utilities, which are installed in /usr/bin, /usr/sbin, and so on; the Sun versions of these utilities live in /usr/sun/bin and /usr/sun/sbin. Nexenta uses a trick to switch between a GNU and a Sun personality: if the environment variable SUN_PERSONALITY is set to 1, the search paths /usr/sun/bin and /usr/sun/sbin take precedence, even if the user executes a command explicitly by its absolute path, e.g. /usr/bin/sed. This ensures that Solaris-based scripts work on Nexenta without modification. Nexenta also uses this functionality in its SVR4 package commands, which can be used to install native Solaris packages in SVR4 format, calling alien to convert each package on the fly to a Debian package.

Although most utilities are the GNU ones, a couple of basic commands like ps and top are the OpenSolaris versions, which is confusing but understandable, as they are tied closely to the OpenSolaris kernel. This mix of GNU and Sun shows up in other areas as well. For example, the user space is almost completely built with GCC, while the OpenSolaris side, which consists of a couple of hundred packages, is built with Sun Studio. Gulecha explains this decision:

Sun Studio is what the developers of OpenSolaris use and test. GCC is supported, but untested, and fails occasionally. We want Nexenta to be very stable, and thus the decision to use Sun Studio. Also a small part of OpenSolaris is distributed under a binary license, which Sun cannot provide the source for as they don't own the copyright.

A difference in approach between Debian GNU/kFreeBSD and Nexenta is that the former is using a port of GNU libc, while the latter is using Sun libc. However, there's also a Debian GNU/kOpenSolaris port in the pipeline, which uses a port of GNU libc to the OpenSolaris kernel. The Nexenta developers seem to welcome this development and are even thinking about two architectures: NexentaCore Sun/OpenSolaris and NexentaCore GNU/kOpenSolaris. The Nexenta project actually started as an attempt to port glibc to OpenSolaris, but when the developers realized the complexity of the task, they decided to use Sun libc.

Porting packages to Nexenta

The Nexenta developers would like to make Nexenta the official port of Debian to the OpenSolaris kernel, like Debian GNU/kFreeBSD is the official port of Debian to FreeBSD's kernel. This would surely reduce the number of patches needed for Debian packages to cleanly build on Nexenta, because currently many patches don't make it upstream. To help make collaboration with Debian happen, the Nexenta developers have provided public root access to Nexenta zones for Debian developers who want to test their packages on the gnusolaris.org build machine. Any Debian package maintainer can request login details.

The progress on becoming the official Debian or Ubuntu port to the OpenSolaris kernel has been slow, however, as Gulecha admits:

With the developer resources we have, which is five active developers (two working for Nexenta Systems) we haven't been able to work towards this. Moreover, OpenSolaris is not 100% open source, which causes concern for upstream.

The decision to base Nexenta on Ubuntu LTS instead of Debian was rather practical: "Upstream itself would be supporting packages for 3 years, which would make updates easier for us."

One of the big projects developed for NCP2 was the Autobuilder by Tim Spriggs, one of the lead developers of Nexenta Core Platform. This is a package building toolkit, hosted at builder.tajinc.org. The autobuilder keeps track of packages in various repositories, including upstream Ubuntu; build nodes can talk to the autobuilder and request jobs. This is how the developers ported over 13,000 packages automatically in a short time, making Nexenta the OpenSolaris distribution with the largest number of packages. The developers are planning to publish an RSS feed of the patches applied to each package on builder.tajinc.org, so that upstream maintainers can keep track of their package's patches.

The Nexenta developers guide new developers through the process of building packages. The process is simple: add the right deb-src lines for Nexenta and Ubuntu to /etc/apt/sources.list, create GPG keys, and install some development packages. Then fetch the sources of the not-yet-ported package with "apt-get source" and install its build dependencies with "apt-get build-dep". Then modify the package version string to reflect that it is a Nexenta version and build the source with "dpkg-buildpackage". If all goes well, the package can be installed with "dpkg -i" and tested. If it works, the porter can upload the generated .deb files with "dput". If the Nexenta developers accept the package, it is added to the repository.
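In terms of commands, the sequence might look roughly like the following, using "hello" as a stand-in package name; the exact dput upload target depends on the porter's configuration:

    apt-get source hello
    apt-get build-dep hello
    cd hello-*
    # bump the version in debian/changelog to mark it as a Nexenta build
    dpkg-buildpackage -rfakeroot
    dpkg -i ../hello_*.deb        # as root, to test the result
    dput ../hello_*.changes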

Of course, a lot of Ubuntu packages will not build on Nexenta out of the box. To help developers with porting Linux code to Nexenta, the project's website has a page that lists common porting issues and their solutions. These range from changing the dependency fields of a Debian package, through adding the right includes to C source, to using special compiler flags.

Nexenta distributions

Not only is Nexenta very porter-friendly, but the developers also encourage users to build custom distributions on top of Nexenta Core. The Nexenta Builder tool is a developer's kit that simplifies the process of creating a new distribution ISO; it is the same tool the Nexenta Core team uses to produce the Nexenta Core images.

Recently, Andrew Stormont announced his project StormOS, an Xfce-based derivative of Nexenta Core Platform 2. This is a desktop operating system for people who like the ZFS/apt combination but don't want to install the desktop packages themselves. Currently StormOS is a one-man project by Stormont, who is one of the Wine-Doors developers. The StormOS story started when Stormont was asked to create OpenSolaris packages for Wine-Doors, but he didn't like the IPS package system. So he decided to "fix" OpenSolaris and tried Nexenta, which he wanted to turn into a desktop operating system, thereby reviving the Nexenta project's original goal.

Conclusion

Nexenta Core Platform 2 is a solid operating system that will surely appeal to Linux users who like ZFS but don't want to learn OpenSolaris. The combination also comes with some clear advantages: Debian's APT is better and faster than IPS, and the apt-clone command gives users powerful transactional upgrades. This makes it a perfect system for a home server. Moreover, the community gives a lot of support to people trying to port packages to Nexenta. It will be interesting to see if and how the Nexenta community will work more closely with its upstream distribution and with the GNU/kOpenSolaris project.

Comments (1 posted)

New Releases

Nexenta Core Platform 2 Released

Nexenta, the combination of an OpenSolaris kernel with an Ubuntu user land, has announced the release of Core Platform 2. One of the more interesting Nexenta features is its use of the ZFS filesystem, which allows for snapshots and other advanced filesystem features. Look for a review of Nexenta Core Platform 2 by guest author Koen Vervloesem on this week's distribution page.

Comments (34 posted)

Linux Mint 7 released

The Linux Mint team has announced the release of Mint 7 "Gloria". "The 7th release of Linux Mint comes with numerous bug fixes and a lot of improvements. In particular the menu system, the application manager and the upload manager now provide new features such as "Suggestions", "Featured applications", "SCP and SFTP support". The underlying base of the operating system was also strengthened with a new adjustment mechanism which makes Linux Mint more robust and less vulnerable to Ubuntu package upgrades, and the introduction of virtual and meta packages which simplify upgrade paths and the installation of multiple desktop environments."

Comments (none posted)

The CentOS-5.3 i386 live CD

For those of you wanting that bleeding-edge CentOS experience on a live CD, the project has now made the 5.3 release available as an ISO image for i386 systems. The CD includes a number of tools intended to make it useful in the rescue mode. Your editor notes with disappointment, though, that emacs was removed due to space constraints.

Full Story (comments: 16)

Distribution News

Debian GNU/Linux

Bits from the Eee PC team, Spring 2009

Debian's EeePC team have reported good support for Lenny (5.0) and are now working on squeeze support. "Work is well underway on supporting all Eee models in Squeeze. For months, several team members have been experimenting with new kernels, producing support for them in eeepc-acpi-scripts. The current release of this key package (version 1.1.0) supports Linux 2.6.29 and contains enhancements for wifi, sound hotkeys, bluetooth, external displays and OSD."

Full Story (comments: none)

Fedora

Fedora flags policy reverted

As noted in the attached summary of a meeting of the Fedora Engineering Steering Committee, the recently announced policy banning flag images from most packages has been rescinded. The Fedora project will now examine the issue in more detail with an eye toward crafting a new policy - if, indeed, one is deemed to be needed.

Full Story (comments: 4)

FESCo election nominations now open

Nominations are open for the five open seats on the Fedora Engineering Steering Committee. "Any interested Fedora packager may run for FESCo, the only requirement is membership in the 'packager' group in FAS. Especially noteworthy is that 'provenpackager' or sponsor status is not required - this keeps the bar low for new members."

Full Story (comments: none)

Fedora Board Recap 2009-05-21

Click below for a brief recap of the May 21, 2009 meeting of the Fedora Advisory Board. Topics include export restrictions, toxicity proposal, sponsorship, and "What is Fedora?".

Full Story (comments: none)

SUSE Linux and openSUSE

Announcing the openSUSE Ambassadors Program

The openSUSE project has announced the openSUSE Ambassadors Program. "Want to help spread the word about the openSUSE Project and encourage more people to become part of the openSUSE Community? Are you ready to roll up your sleeves and spread the word about the openSUSE Project? Do you want to teach new users about Linux, speak about openSUSE at local events, help distribute openSUSE media, and mentor new contributors to the openSUSE Project? Then you're ready to become an openSUSE Ambassador!"

Full Story (comments: none)

Other distributions

Release of the CentOS Directory Server

The CentOS project has announced the first public release of the CentOS Directory Server (CDS). "The CentOS Directory Server is a rebuild of the Red Hat Directory server. It is LDAP server developed in the Fedora project and has a long history. It started as the Netscape Directory Server but it got purchased by Red Hat and they released it as free software."

Full Story (comments: none)

New list: CentOS-Mirror-Announce

The CentOS team has announced a new mailing list, CentOS-mirror-announce. "This list is for announcement from the CentOS team to public mirror admins, containing trivia like issues with our mirror network, new releases, changes in how we do mirroring and so on."

Full Story (comments: none)

New Distributions

Ekaaty Linux

Ekaaty Linux is a Brazilian project that aims to provide a free, robust, secure and friendly operating system based on Linux and developed by the community. It features the KDE desktop. The website and most documentation are available only in Brazilian Portuguese, although English and European Portuguese also have some support. (Found at KDE-apps.org)

Comments (none posted)

NixOS

NixOS is a Linux distribution based on Nix, a purely functional package management system. NixOS is an experiment to see if an operating system can be built in which software packages, configuration files, boot scripts and the like are all managed in a purely functional way. That is, they are all built by deterministic functions and they never change after they have been built. NixOS is continuously built from source in Hydra, the Nix-based continuous build system. (Thanks to hppnq)

Comments (none posted)

Distribution Newsletters

DistroWatch Weekly, Issue 304

The DistroWatch Weekly for May 25, 2009 is out. "Three weeks ago Mandriva Linux 2009.1 was released. This distribution has a well-earned reputation for being both user-friendly and flexible and this week we take what turns out to be a somewhat surprising first look at the latest Mandriva release. In the news section, Slackware Linux finally opens a 64-bit branch of its development tree, Moblin 2.0 impresses the reviewers with a refreshing user interface design, Ubuntu reveals a change in video architecture for the upcoming version 9.10, Debian changes its archive signing key, and Fedora considers mailing list moderation in response to some unruly behaviour of its users. Also in this issue, a round-up of news from vendors preparing Linux-based solutions for mobile devices and an interesting new way of installing Arch Linux - via an unofficial live CD. Finally, if you have a package that you think DistroWatch should track, don't miss your chance to suggest it - this week only! All this and more in this week's issue, enjoy the read!"

Comments (none posted)

Fedora Weekly News #177

The Fedora Weekly News for May 24, 2009 is out. "This week we offer a special collector's edition with the last ever Fedora Webcomic. PlanetFedora links Jeff Shelten's thoughts on "Why Students Should Get Involved in Open Source", Ambassadors reports on "Fedora en Mexico", Developments is getting "In a Flap Over Flags", QualityAssurance takes a look at a "Mozilla/Beagle Blocker Bug Proposal", Artwork says goodbye to itself but welcomes Design. SecurityWeek examines the problems of "Cloudy Trust". Translations notes that "Sections of the Fedora User Guide Cannot be Translated". SecurityAdvisories includes an ipsec-tools update. Virtualization shares details on the "Rawhide Virtualization Repository"."

Full Story (comments: none)

openSUSE Weekly News, Issue 72

This issue of the openSUSE Weekly News covers Community Week, Pascal Bleser : vnstat on openSUSE, SUSE Linux Enterprise in the Americas: KDE: Social Desktop Starts to Arrive, Forums: Why Are We Not Helping More in the Wiki?, compiz-fusion.org: Beryl back from the ashes, and more.

Comments (none posted)

Ubuntu Weekly Newsletter, Issue #143

The Ubuntu Weekly Newsletter for May 23, 2009 is out. "In this issue we cover: UDS Karmic Koala begins, Team Reporting, New Ubuntu Members, Ubuntu Forums Interview, Tutorial of the Week, Canonical AllHands, KDE Brainstorm hits 1000+ ideas, Edubuntu Meeting Minutes, Renewed enthusiasm for Edubuntu, Ubuntu Romanian Remix, Ubuntu Podcast #29, WorkWithU Vodcast #2, and much, much more!!"

Full Story (comments: none)

Distribution meetings

Debian @Linuxtag 2009 - Last call for help

Helpers are needed at the Debian booth at Linuxtag 2009 on June 24 - 27, 2009. Only two people have volunteered so far. "I think that two people are not enough to run a booth. If this call for help does not succeed I'll have to cancel the booth. So if you are in Berlin during June 24th - 27th please be so kind and participate to the Debian booth. You don't have to be a Linux/Debian expert, just a little bit motivated :)."

Full Story (comments: none)

DebConf9 reconfirmation period started ; ends 7 June

Participants in DebConf9 are asked to reconfirm their intent. "We are now starting the reconfirmation period for DebConf9. The reconfirmation period ends on Sunday 7 June. If you don't reconfirm your attendance before the deadline, you won't be counted for food, lodging, t-shirts, proceedings, etc. Also, you won't be eligible for food or lodging sponsorship."

Full Story (comments: none)

Distribution reviews

Linux Distros For Netbooks (InformationWeek)

InformationWeek test drives netbook-ready Puppy Linux, Ubuntu Netbook Remix, Xubuntu, gOS, and Moblin, and reports on how they stack up. "Most Linux distros are suitable for netbook use. But fine-tuning your particular flavor of netbook by finding an even friendlier Linux distro isn't too difficult. Matching up your rig's configuration: screen size, devices, and drivers -- and your own style of work and play -- with the most conducive Linux distro, you'll get the max out of your mini-laptop."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Debating icon names and ad-hoc specifications

May 27, 2009

This article was contributed by Nathan Willis

Freedesktop.org's Icon Naming specification became a center of contention for several days in May courtesy of a heated discussion on the XDG mailing list. At issue was whether the specification should enumerate a minimum contingent of icons for system-wide use, a comprehensive list, or something in between. The debate also raised questions about how the specification is crafted and, perhaps, what it means in light of Freedesktop.org's founding principle not to legislate standards.

The Icon Naming specification is a hierarchical set of named icons meant to be provided by a desktop environment like GNOME, KDE, or Xfce. The standard named icons include common functions such as opening and closing documents, and reusable components like the desktop's help browser. The specification defines a set of eleven contexts into which icons are grouped: actions, animations, applications, categories, devices, emblems, emotes, international, MIME types, places, and status. An application can take advantage of the system by referencing the named icons rather than having to embed its own copy of each icon it needs.
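For example, a GTK+ application can ask for one of the standard action icons by name and let whatever icon theme is installed supply the artwork; a minimal sketch:

    #include <gtk/gtk.h>

    /* "document-open" is one of the standard action icon names */
    static GtkWidget *make_open_button(void)
    {
        GtkWidget *button = gtk_button_new_with_label("Open");
        GtkWidget *image = gtk_image_new_from_icon_name("document-open",
                                                        GTK_ICON_SIZE_BUTTON);

        gtk_button_set_image(GTK_BUTTON(button), image);
        return button;
    }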

A related specification, the Icon Theme specification, shows how artists can create complete implementations of the set to install alongside the system's default. The Desktop base directory specification describes where in the filesystem applications should look for the icons provided.

Seeking icon specification specifics

Like many Freedesktop.org specifications and projects, the Icon Naming specification is discussed on the XDG mailing list (the name of which reflects Freedesktop.org's original name: X Desktop Group). May's debate was prompted by a request from PulseAudio's Lennart Poettering to add four new icon names to the devices context, representing common audio hardware devices (audio-headset, audio-headphones, audio-speakers, and audio-handsfree). Rodney Dawes, maintainer of the specification, countered with a different proposal he said would better fit the specification's existing hierarchy and not introduce unnecessary icons: a new icon for headset, replacing audio-card with speaker, making headphones a secondary icon named speaker-headphones, and dropping the audio- prefix from all of the names. Poettering defended his initial request, arguing that the audio- prefix should stay, and that speakers, headsets, and headphones were distinct enough form factors that they deserved separate icon names. When several weeks passed without a reply from Dawes, Poettering announced that he was going to use his proposed icon names anyway, bypassing the specification to talk directly to the distros, and suggested that Dawes hand over maintainership to someone else — reigniting the debate.

The back-and-forth then shifted from the original topic into a discussion of how the specification is maintained and updated. Critics of Dawes accused him of rejecting all requests for new icon names, and not listening to the needs of application developers. Dawes contended that the developer community seemed unwilling to provide input into the process; because the specification is intended to represent a community consensus, accepting a single developer's request for new icons without discussion would rapidly cause it to break down under a mountain of single-purpose icons. He had asked for input from other audio developers, he said, but received none.

Jakob Petsovits suggested that the specification did not need a formal process or maintainer at all, and that "if an icon is used by projects across different desktops and has a semantically clean purpose then that pretty much makes it 'approved' on its own." Marius Vollmer argued for creating another specification describing an "extra icon set" that would be separately maintained. Dawes then replied that such an extension mechanism already exists in the form of addenda to the Icon Naming specification, and expressed frustration that the community did not know of or take part in it.

Finally, talk turned to the purpose of the specification itself, with Dawes arguing that it "is a base specification, designed to be the minimal list of icons that everything else can fall back to. It is not going to have every little icon that ever random developer thinks needs to be in the spec for their app to have that icon. It doesn't make sense." Brian Tarricone responded that "the 'goals of the specification' (whatever the hell that means; inanimate documents don't have goals) are irrelevant. The needs of the community and of the people who will use the specification are paramount" and that "the goals of the spec don't fit what the developers need."

Tarricone's position seemed to be that the specification should try to be as inclusive as possible, trusting that the developer community would not suggest conflicting, confusing, or excessive numbers of icon names. Dawes repeatedly pointed out that he was not opposed to adding new icon names, but felt that the developer community needed to weigh in on the merits of suggested additions; otherwise the specification would not reflect the community's consensus.

Thus, in different ways, both viewpoints do want the specification to be shaped by the community itself; they disagree on how best to implement that goal. Several participants pointed out that part of the problem is that icon themes, by their nature, involve both programmers and artists, two groups with very little overlap. Even among developers who actively participate in the XDG list, individual change requests rarely elicit much response. As Tarricone asked in a subsequent message, "how do you get proper consensus for something where only one person seems to be working on a particular problem and needs icons for it?"

Participants in the discussion agreed that the informal mailing-list-only process was inefficient and led to proposals ending up in long-term limbo. Some suggested using wiki and bug-tracking software to manage proposed changes, but there is not yet a concrete plan for how to smooth out the process.

Standards, specifications, and open source developers

The diverging opinions about the purpose of the Icon Naming specification are limited to that specification alone, but they reveal another stumbling block — that despite producing specifications, Freedesktop.org is not a "standards body" in the traditional sense. Participation in any Freedesktop.org project is voluntary, and there is no attempt to enforce adherence to any specification it produces.

The group was founded in 2000 by Havoc Pennington (then at Red Hat) as a neutral place where projects and companies working on X-based desktop environments could meet to collaborate on common tasks. It currently provides hosting for several important Linux projects, including D-Bus, HAL, the X.Org server, and Compiz. Its other main activity, however, is producing cross-project specifications for system and application behavior. There are currently 50 specifications listed on the site's wiki, covering topics from desktop launchers to X extensions.

The Freedesktop.org mission statement emphasizes that it is not interested in "blessing" or legislating formal standards — a claim important enough to bear repeating on its specifications page. The fact that it produces specifications but makes no attempt to dictate their adoption places the group in an ambiguous position. As Pennington commented, "Freedesktop is basically a place people can document any consensus they may reach on interoperability. No rough consensus and it pretty much just gets stuck."

That may be frustrating for the participants, but for a completely voluntary-membership organization, it is unavoidable. In contrast, the Linux Foundation's Linux Standard Base has a formal workgroup, specification process, and certification.

Ted Gould, who weighed in on the confusion over the Icon Naming specification, summed up the Freedesktop.org distinction. "The reality is that most 'Freedesktop Specs' aren't really anything like IEEE Specifications or anything that typically is described using the word 'specification.' Really they all are more agreements on the way things do work in the code on the various desktops. But, in a truly open source way, code rules, so the only thing that really matters is who's implemented it as a vote yeah or nay on the spec." As the Icon Naming specification discussion recently showed, that process may not always be smooth.

Comments (none posted)

System Applications

Clusters and Grids

rsplib 2.6.4 released

Version 2.6.4 of rsplib has been announced. "RSPLIB is the Open Source implementation (GPLv3) of the IETF's new framework for Reliable Server Pooling (RSerPool), which is described in RFC 5351 to RFC 5356. If you a looking for a Grid comput[at]ion solution which is simple, easy to setup and mostly self-configuring, you are probably looking for RSerPool!"

Full Story (comments: none)

Database Software

cx_Oracle 5.0.2 released

Version 5.0.2 of cx_Oracle has been announced; it includes a number of bug fixes. "cx_Oracle is a Python extension module that allows access to Oracle and conforms to the Python database API 2.0 specifications with a few exceptions."

Full Story (comments: none)

MySQL 6.0.11 Alpha released

Version 6.0.11 Alpha of MySQL has been announced. "6.0.11 will be the last release of 6.0. After this we will be transitioning into a New Release Model for the MySQL Server".

Full Story (comments: none)

PostgreSQL 8.4 Beta 2 released

Version 8.4 Beta 2 of PostgreSQL has been announced. "This beta release fixes a number of issues with the first 8.4 beta, especially issues with pg_standby, PL/pgSQL and encoding and collation handling. We need all users to test 8.4 Beta 2 as soon as possible in order to speed the final release of the new version."

Full Story (comments: none)

PostgreSQL Weekly News

The May 24, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

SQLite release 3.6.14.2 announced

Version 3.6.14.2 of the SQLite DBMS has been announced. "Changes associated with this release include the following: * Fix a code generator bug introduced in version 3.6.14. This bug can cause incorrect query results under obscure circumstances. Ticket #3879."

Comments (none posted)

Device Drivers

LIRC 0.8.5 released

Version 0.8.5 of LIRC, the Linux Infrared Remote Control driver, has been announced. From the ChangeLog file: "ANNOUNCE, NEWS, configure.ac, doc/html-source/index.html, setup.sh: 0.8.5 release"

Comments (none posted)

Printing

CUPS 1.4b3 released

Version 1.4b3 of CUPS, the Common Unix Printing System, has been announced. "The third beta release of CUPS 1.4 includes many bug fixes, updated localizations for many languages, new logging features, and greatly improved Kerberos support."

Comments (none posted)

Desktop Applications

Desktop Environments

GNOME 2.26.2 released

Stable version 2.26.2 of the GNOME desktop environment has been announced. "This is the second update to GNOME 2.26. It contains many fixes for important bugs that directly affect our users, documentation updates and also a large number of updated translations. Many thanks to all the contributors who worked hard on delivering those changes in time. We hope it will help people feel better in their daily use of computers!"

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at gnomefiles.org.

Comments (2 posted)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at kde-apps.org.

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Electronics

GNU Radio Release 3.2 available

Version 3.2 of the GNU Software Radio toolkit, a control package for the GNU Radio software defined radio platform, has been announced. "Release 3.2 is the beginning of the new stable branch series 3.2.x. Users who develop their GNU Radio applications to the 3.2 C++ and Python APIs will not need to change their source code to work with any of the releases along this line, though recompilation of C++ code may be needed." The release notes don't mention any changes.

Full Story (comments: none)

Interoperability

Wine 1.1.22 announced

Version 1.1.22 of Wine has been announced. Changes include: "- More improvements to OLE copy/paste. - Beginnings of x86_64 exception handling. - Direct3D locking fixes. - ARB shaders improvements. - Better OpenGL pixel format support. - Various bug fixes."

Comments (none posted)

Mail Clients

SquirrelMail 1.4.19 released

Version 1.4.19 of SquirrelMail has been announced. "The security fix to map_yp_alias in 1.4.18 turned out to be incomplete. We also exp[eri]enced some regressions in the updated filter plugin. Both are addressed in this new release 1.4.19 which contains a few other small fixes as well. If you do not use map_yp_alias or the filters plugin there's no urgent need to upgrade now if you already installed 1.4.18."

Full Story (comments: none)

Music Applications

guitarix 0.04.4-1 released

Version 0.04.4-1 of guitarix has been announced. "guitarix is a simple Linux Rock Guitar amplifier for jack(Jack Audio Connektion Kit) with one input and two outputs. Designed to get nice thrash/metal/rock/blues guitar sounds. . . . Some new effect's in the amp section with new controllers, overworked tuner, overworked midi output (still it is experimental but can be useful for rhythm blues or jazz or. . . fun)"

Full Story (comments: none)

Video Applications

PiTiVi 0.13.1 announced

Version 0.13.1 of PiTiVi, a video editor based on the GStreamer multimedia framework, has been announced. "The PiTiVi team is proud to announce the first release in the unstable 0.13 PiTiVi series. This is the result of the past 6 months of intensive hacking. Over 700 commits of goodness awaits you."

Full Story (comments: none)

Languages and Tools

C

GCC 4.4.1 Status Report

The May 21, 2009 edition of the GCC 4.4.1 Status Report has been published. "The 4.4 branch is open under the usual release branch rules, a 4.4.1 release planned around June 21st. The branch seems to be in good shape and new bugs still get fixed quickly. There are a few bugs that I would block the 4.4.1 release for, including the reported ICE building SPEC on i?86. All seem to have patches though."

Full Story (comments: none)

Caml

Caml Weekly News

The May 26, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)

Haskell

Haskell Communities and Activities Report

The May, 2009 edition of the semi-annual Haskell Communities and Activities Report has been published. "This is the 16th edition of the Haskell Communities and Activities Report. There are a number of completely new entries and many small updates. As usual, fresh entries are formatted using a blue background, while updated entries have a header with a blue background. Entries on which no activity has been reported for a year or longer have been dropped. Please do revive them next time if you have news on them."

Comments (none posted)

Perl

Parrot 1.2.0 released

Version 1.2.0 of Parrot has been announced; it includes bug fixes, code cleanup, documentation work, and more. "Parrot is a virtual machine aimed at running all dynamic languages."

Full Story (comments: none)

Python

Jython 2.5.0 Release Candidate 3 is out

Release Candidate 3 of Jython 2.5.0, an implementation of Python in Java, has been released. "This is the third release candidate of the 2.5 version of Jython. It partially fixes JLine on Cygwin and fixes some threading issues. Almost every release in the past year has been followed shortly by another release to fix a windows bug. Today I finally got off of my butt and installed Windows on a VM and spent the day testing, so hopefully this one will not follow that pattern."

Full Story (comments: none)

Python-URL! - weekly Python news and links

The May 22, 2009 edition of the Python-URL! is online with a new collection of Python article links.

Full Story (comments: none)

Debuggers

CodeInvestigator 0.12.0 released

Version 0.12.0 of CodeInvestigator is out with one new feature and bug fixes. "CodeInvestigator is a tracing tool for Python programs. Running a program through CodeInvestigator creates a recording. Program flow, function calls, variable values and conditions are all stored for every line the program executes. The recording is then viewed with an interface consisting of the code. The code can be clicked: A clicked variable displays its value, a clicked loop displays its iterations."

Full Story (comments: none)

Editors

Emacs 23.0.94 pretest released

Version 23.0.94 of the Emacs editor has been announced. "Pretesters: please send an email to me reporting success or failure on your build platform. In addition, please report bugs via M-x report-emacs-bugs, or send an email to emacs-pretest-bug@gnu.org."

Full Story (comments: none)

IDEs

Pydev 1.4.6 released

Version 1.4.6 of Pydev, an Eclipse plugin for Python, has been announced. Changes include: "* Auto-import for from __future__ import with_statement will add a 'with' token instead of a with_statement token * Globals browser improved (only for Eclipse 3.3 onwards, older versions will have the old interface): o Can filter with working sets o Can match based on module names. E.g.: django.A would match all the django classes/methods/attributes starting with 'A'"

Full Story (comments: none)

Version Control

bzr 1.15 final released

Version 1.15 final of the bzr version control system has been announced. "The smart server will no longer raise 'NoSuchRevision' when streaming content with a size mismatch in a reconstructed graph search. New command ``bzr dpush``. Plugins can now define their own annotation tie- breaker when two revisions introduce the exact same line."

Full Story (comments: none)

Page editor: Forrest Cook

Linux in the news

Recommended Reading

Ex-Microsoftie: Free Software Will Kill Redmond (ComputerWorld)

ComputerWorld interviews Keith Curtis, a former Microsoft employee and Linux convert. "Q:In what ways will free software be Microsoft's undoing? A:Free software will lead to the demise of Microsoft as we know it in two ways. First, the free software community is producing technically superior products through an open, collaborative development model. People think of Wikipedia as an encyclopedia, and not primarily software, but it is an excellent case study of this coming revolution. There are also many pieces of free software that have demonstrated technical superiority to their proprietary counterparts. Firefox is widely regarded by Web developers as superior to Internet Explorer. The Linux kernel runs everything from cellphones to supercomputers. Even Apple threw away their proprietary kernel and replaced it with a free one."

Comments (64 posted)

Trade Shows and Conferences

Canonical developers aim to make Android apps run on Ubuntu (ars technica)

Ars technica reports on the Ubuntu/Android integration project presented at the Ubuntu Developer Summit. "The developers have built a working prototype of the execution environment. They successfully compiled it against Ubuntu's libc instead of Android's custom libc and they are running it on a regular Ubuntu kernel. They intend to cut out Android-specific components that are not needed to make the software run on Ubuntu."

Comments (none posted)

Companies

Intel Adopts an Identity in Software (NY Times)

The New York Times (registration required) highlights Intel's increased interest in software, and Linux-based software in particular. "With animated icons and other quirky bits and pieces, Moblin looks like a fresh take on the operating system. Some companies hope it will give Microsoft a strong challenge in the market for the small, cheap laptops commonly known as netbooks. A polished second version of the software, which is in trials, should start appearing on a variety of netbooks this summer."

Comments (5 posted)

Alleged Nokia Linux smartphone plans exposed by leak (ars technica)

Ryan Paul speculates that Nokia may release a smartphone in 2010. "Nokia has been hard at work building Maemo 5, the next major version of its Linux-based mobile platform. This new version, which is codenamed Fremantle, brings a user interface overhaul and some compelling new capabilities. Although Maemo 5 is still at the beta stage of development and Nokia has not yet announced when it will ship on actual hardware, details are already emerging about the version that will come next, which is codenamed Harmattan."

Comments (9 posted)

Linux Adoption

Vancouver Opens Up (Linux Journal)

Linux Journal looks into open source software use by the City of Vancouver. "There are many interesting attributes to the City of Vancouver: it is regularly rated as one of the three best cities in the world to live, it trails only LA and NYC for films produced in North America, and will host the Winter Olympic Games in 2010, among many others. There is one new attribute to add to the list, however: Vancouver is now one of the growing number of governments implementing Open Source."

Comments (10 posted)

Linux at Work

WiiFit board speaks to Linux (cnet)

Over at cnet, Eric Franklin reports on a newly available Linux input device. "Case in point, Matt Cutts has connected a WiiFit balance board to a Linux box via Bluetooth. So far, all he can do is weigh himself in kilograms and move a red dot around by leaning in different directions on the balance board. [...] Not exactly exciting by any means and seriously, it's difficult for me to see how this could be applied to do something actually cool or useful. One commenter on his site speculated that one could conceivably set up the board in such a way that you could scroll down a screen, simply by leaning back in your chair."

Comments (13 posted)

Legal

Justice Rules Police Can't Steal Other Kid's Toys (Linux Journal)

Linux Journal covers the story of Riccardo Calixte, the Boston College computer science student targeted by heavy-handed investigators for the capital offense of being a Linux user. "Though Calixte was forced to finish the rest of his semester without a computer -- a rather important tool for a computer science student -- or network access, which school officials saw fit to shut off without bothering to wait for the kangaroo court to conclude, he has now been vindicated, with the Massachusetts Supreme Judicial Court ordering the police to immediately return his property and cease all analysis of it."

Comments (none posted)

Red Hat Sues Switzerland Over Microsoft Monopoly (eWeek)

eWeek Europe reports that Red Hat (and 17 other vendors) have filed suit against the Swiss government in response to no-bid contracts which have been awarded to Microsoft. "'It's not just Switzerland who have been getting away with this kind of nonsense,' said Mark Taylor of the UK-based Open Source Consortium, adding that much of the credit for this action should go to the Free Software Foundation Europe, led by Georg Greve."

Comments (8 posted)

Interviews

Fedora 11: Virtual(ization) Reality (MadRhetoric)

The MadRhetoric blog has an interview with Daniel P. Berrange, Red Hat Virt Team Engineer and Fedora Virtualization guru. "The ideas for new features come from many sources, some from Fedora end-user experiences and consequent bug reports, some magically arrive on cue from upstream projects, while others are things that look to be important for future RHEL releases. With the PCI device passthrough feature in F11, the core support was all already done by the upstream KVM community. This is an important feature for future RHEL, so Red Hat put resources into a F11 feature to add support to libvirt for PCI passthrough with KVM and Xen and then expose this in virt-manager."

Comments (none posted)

The Sound of Fedora 11 (MadRhetoric)

The MadRhetoric blog has an interview with PulseAudio developer Lennart Poettering focused on what's new in Fedora 11. "A lot of the changes we introduced with PA are not directly visible to the user. For example the so called 'glitch-free' logic in PA is very important for a modern audio stack, however the normal user will never notice it -- except maybe because when we introduced it initially a lot of driver bugs got exposed that people were not aware of before because that driver functionality (usually timing related) was not really depended on by any application. In fact even now many of the older drivers expose broken timing that makes usage with PA not as much fun as it could be."

Comments (25 posted)

Resources

Install the GNU ARM toolchain under Linux (IBM developerWorks)

IBM developerWorks presents a tutorial by Bill Zimmerly on installing the GNU ARM toolchain. "Many tools are available for programming various versions of ARM cores, but one particularly popular set is the GNU ARM toolchain. Learn more about embedded development using the ARM core as well as how to install the GNU tools and begin using them."

Comments (none posted)

Reviews

Digital and Analog Circuit Simulation with Ksimus (Linux Journal)

Linux Journal reviews Ksimus. "Ksimus is a circuit simulator that allows you to build digital and analog circuits with discrete components and simulate them in real time. Ksimus does have its limitations though. Ksimus doesn't supply any of the larger circuits like addressable memory or 8-bit adders, but you can build one for yourself and package it up as a Ksimus module. Also, because Ksimus provides only discrete logic components, you're probably not going to be designing a quad-core microprocessor or anything moderately complex. That said, you certainly can use Ksimus to learn about computer logic design, and you even can use it to simulate basic logic circuits. But, best of all, it's just fun to play with!"

Comments (none posted)

Router platform runs OpenWRT Linux (LinuxDevices)

LinuxDevices takes a look at Ubicom's new OpenWRT router. "Ubicom is shipping an OpenWRT Linux-based router platform and reference design using the company's new Ubicom IP7100 Router Gateway Evaluation board. The Ubicom board incorporates its StreamEngine IP7100 series network RISC processor, and includes a gigabit WAN port and four gigabit LAN ports, says the company."

Comments (none posted)

Miscellaneous

Improving the translations workflow with Transifex

Og Maciel discusses the LXDE project's move to Transifex, a language translation utility. "So what exactly is Transifex you may ask? I guess the best way to describe it is as a bridge between source code that needs to be localized and people who know how to translate it. But that was a rather simple description of what this amazing tool does! I could go on and on about the cool features, but for this post I’ll try to keep it simple and go directly to the point. For the administrators: Nothing needs to be done! That’s right, nothing! No more local user accounts, ssh keys and all of that nonsense! Put your feet up and relax!" (Thanks to Rahul Sundaram).

Comments (none posted)

RO: Proprietary licence deal draws ire of open source proponents (OSOR.EU)

OSOR.EU takes a quick look at the Romanian government's decision to spend millions of euros on Microsoft licenses. "The announcement came just five days ahead of the third national open source conference, eLiberatica, taking place in Bucharest. One of the conference organisers, Lucian Savluc, condemned the government's spend thrift. "The Romanian government is out of touch with reality. I hope that the European Union will protest this deal, for it is not in the best interest of the Romanian citizens.""

Comments (5 posted)

Page editor: Forrest Cook

Announcements

Non-Commercial announcements

EFF Launches 'Teaching Copyright' to correct entertainment industry misinformation

The Electronic Frontier Foundation has announced the release of the 'Teaching Copyright' Curriculum. "As the entertainment industry promotes its new anti-copying educational program to the nation's teachers, the Electronic Frontier Foundation (EFF) today launched its own "Teaching Copyright" curriculum and website to help educators give students the real story about their digital rights and responsibilities on the Internet and beyond."

Full Story (comments: none)

Wikipedia to change licenses

Last November, version 1.3 of the GNU Free Documentation License gave Wikipedia special permission to switch to the Creative Commons attribution-sharealike license. The Wikimedia Foundation has now announced the result of the community's vote on the license change; over 75% (of over 17,000 voters) voted in favor. So it's now official: Wikimedia's sites will switch away from the FDL on June 15.

Comments (5 posted)

Commercial announcements

Gruppo Amadori to roll out Linux-based desktops

IBM has announced a new Linux deployment by the Italian wholesale food distributor Gruppo Amadori. "About 1,000 of the company's 6,000 employees access PCs to help manage the production, processing, and delivery logistics of its poultry products for customers within Italy and internationally. In 2009, some of these employees will move to a Red Hat Enterprise Linux Desktop client operating system and IBM Lotus Symphony, open standards-based word processing, spreadsheets, and presentations. For its collaboration services, the company is moving from Microsoft Exchange to an IBM Lotus Notes and Domino environment hosted on Red Hat Enterprise Linux."

Comments (none posted)

New Books

The Geek Atlas--New from O'Reilly

O'Reilly has published the book The Geek Atlas by John Graham-Cumming.

Full Story (comments: none)

Resources

Linux Foundation Newsletter

The May, 2009 edition of the Linux Foundation Newsletter has been published. Topics include: "* Linux Foundation Launches Linux.com * Linux is in the Clouds * Moblin v2 Beta Released to Public * LF Announces LinuxCon Keynotes * New Training Courses Revealed * End User Summit Dates, Venue Announced * Linux Foundation in the News * Video from Collab Summit Available".

Full Story (comments: none)

Contests and Awards

Wietse Venema and Creative Commons win FSF awards

The annual Free Software Foundation Awards have been announced. "Creative Commons was honored with the Award for Projects of Social Benefit, and Wietse Venema was honored with the Award for the Advancement of Free Software. Presenting the awards was FSF founder and president Richard Stallman. The FSF Award for Projects of Social Benefit is presented annually to a project that intentionally and significantly benefits society by applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life."

Full Story (comments: none)

Nominations open for White Camel Awards 2009 (use Perl)

Nominations are being accepted for the 2009 Perl White Camel awards. "Every year the White Camels are presented for service to the Perl community. If you look at the previous winners, you'll notice that these are mostly unsung heroes, like previous awardee Eric Cholet, the human moderator of so many Perl mailing lists, or Jay Hannah, one of the people running pm.org (if you ever created/maintained a pm group, chances are that Jay walked you through the process)."

Comments (none posted)

Surveys

Survey: Linux on the Desktop (Freeform Dynamics)

Freeform Dynamics has conducted a survey on desktop Linux usage in business. "Desktop Linux adoption is primarily driven by cost reduction. When asked during a recent online survey of over a thousand IT professionals with experience of desktop Linux deployment in a business context, over 70% of respondents indicated cost reduction as the primary driver for adoption. Ease of securing the desktop and a general lowering of overheads associated with maintenance and support were cited as factors contributing to the benefit. But deployment is currently limited, and challenges to further adoption frequently exist. The majority of desktop Linux adopters have only rolled out to less than 20% of their total PC user base at the moment, though the opportunity for more extensive deployment is clearly identified. In order for Linux to reach its full potential in an organisation, however, it is necessary to pay particular attention to challenges in the areas of targeting, user acceptance and application compatibility."

Comments (none posted)

Calls for Presentations

Linux-Kongress 2009 CFP

Linux-Kongress will return to Hamburg, Germany in 2009; the conference dates are September 22 through 25. The call for papers is out now, with submissions due by July 26. "Linux-Kongress is by far one of the most traditional Linux conferences with a focus on cutting edge development. In 2009 GUUG will organize the 16th edition of this event, first started 1994 in Heidelberg. Linux-Kongress made its trip through a variety of German cities, Netherlands and on the English isle (LinuxConf Europe). Since its start 15 years ago Linux-Kongress has been evolved into the most important meeting for Linux experts and developers in Europe." OSDevCon will be happening in Hamburg at the same time.

Comments (7 posted)

Montreal Linux Power Management Mini-Summit announced

The Montreal Linux Power Management Mini-Summit will take place on July 13, 2009 during the Linux Symposium. "This is an opportunity for members of the Linux Power Management development community to meet face-to-face to discuss the future." A call for agenda topics has been posted.

Full Story (comments: none)

Extended Deadline: openSUSE Conference 2009 Call for Papers

The call for papers for the 2009 openSUSE conference has been extended until June 5. The conference will be held September 17th through 20th in Nuremberg, Germany. The conference has multiple tracks covering desktops, servers, toolchains, community, and more. "The summit will be an opportunity to bring the openSUSE contributor community together to share ideas, experience, hack, and help guide the direction of the project." Click below for the full announcement.

Full Story (comments: 1)

Request for proposals: PostgreSQL Conference 2009 Japan

A request for proposals has gone out for the PostgreSQL Conference 2009 Japan. "This Conference will provide an excellent chance to get to know the Japanese PostgreSQL market. So, Please come to the international Conference on PostgreSQL held for the first time in Japan. PostgreSQL Conference 2009 Japan will be held on November 20 and 21, 2009 at AP Hamamatsucho (Tokyo)." Submissions are due by June 30.

Full Story (comments: none)

Upcoming Events

ICOODB 2009 announced

Registration is open for the 2009 International Conference on Object Databases (ICOODB). "The conference will take place on 1-3 July 2009 at ETH Zurich, in Zurich, Switzerland."

Full Story (comments: none)

LinuxCon speakers and presentations announced

LinuxCon has announced its speakers and presentations. The event will be held in Portland, OR on September 21-23, 2009.

Comments (none posted)

Events: June 4, 2009 to August 3, 2009

The following event listing is taken from the LWN.net Calendar.

June 1 - June 5: Python Bootcamp with Dave Beazley (Atlanta, GA, USA)
June 2 - June 4: SOA in Healthcare Conference (Chicago, IL, USA)
June 3 - June 5: LinuxDays 2009 (Geneva, Switzerland)
June 3 - June 4: Nordic Meet on Nagios 2009 (Stockholm, Sweden)
June 6: PgDay Junín 2009 (Buenos Aires, Argentina)
June 8 - June 12: Ruby on Rails Bootcamp with Charles B. Quinn (Atlanta, GA, USA)
June 10 - June 11: FreedomHEC Taipei (Taipei, Taiwan)
June 11 - June 12: ShakaCon Security Conference (Honolulu, HI, USA)
June 12 - June 13: III Conferenza Italiana sul Software Libero (Bologna, Italy)
June 12 - June 14: Writing Open Source: The Conference (Owen Sound, Canada)
June 13: SouthEast LinuxFest (Clemson, SC, USA)
June 14 - June 19: 2009 USENIX Annual Technical Conference (San Diego, USA)
June 17 - June 19: Open Source Bridge (Portland, OR, USA)
June 17 - June 19: Conference on Cyber Warfare (Tallinn, Estonia)
June 20 - June 26: Beginning iPhone for Commuters (New York, USA)
June 22 - June 24: Velocity 2009 (San Jose, CA, USA)
June 22 - June 24: YAPC|10 (Pittsburgh, PA, USA)
June 24 - June 27: LinuxTag 2009 (Berlin, Germany)
June 24 - June 27: 10th International Free Software Forum (Porto Alegre, Brazil)
June 26 - June 28: Fedora Users and Developers Conference - Berlin (Berlin, Germany)
June 26 - June 30: Hacker Space Festival 2009 (Seine, France)
June 28 - July 4: EuroPython 2009 (Birmingham, UK)
June 29 - June 30: Open Source China World 2009 (Beijing, China)
July 1 - July 3: OSPERT 2009 (Dublin, Ireland)
July 1 - July 3: ICOODB 2009 (Zurich, Switzerland)
July 2 - July 5: ToorCamp 2009 (Moses Lake, WA, USA)
July 3 - July 11: Gran Canaria Desktop Summit (GUADEC/Akademy) (Gran Canaria, Spain)
July 3: PHP'n Rio 09 (Rio de Janeiro, Brazil)
July 4: Open Tech 2009 (London, UK)
July 6 - July 10: Python African Tour: Sénégal (Dakar, Sénégal)
July 7 - July 11: Libre Software Meeting (Nantes, France)
July 13 - July 17: (Montreal) Linux Symposium (Montreal, Canada)
July 15 - July 17: Kernel Conference Australia 2009 (Brisbane, Queensland, Australia)
July 15 - July 16: NIT Agartala FOSS and GNU/Linux fest (Agartala, India)
July 18 - July 19: Community Leadership Summit (San Jose, CA, USA)
July 19 - July 20: Open Video Conference (New York City, USA)
July 19: pgDay San Jose (San Jose, CA, USA)
July 20 - July 24: 2009 O'Reilly Open Source Convention (San Jose, CA, USA)
July 24 - July 30: DebConf 2009 (Cáceres, Extremadura, Spain)
July 25 - July 30: Black Hat Briefings and Training (Las Vegas, NV, USA)
July 25 - July 26: EuroSciPy 2009 (Leipzig, Germany)
July 25 - July 26: PyOhio 2009 (Columbus, OH, USA)
July 26 - July 27: InsideMobile (San Jose, CA, USA)
July 31 - August 2: FOSS in Healthcare unconference (Houston, TX, USA)

If your event does not appear here, please tell us about it.

Event Reports

RailsConf 2009 gives practical tools for success

O'Reilly presents an event report from the recent RailsConf 2009. "RailsConf 2009, the annual event for the Ruby on Rails community held May 4-7 in Las Vegas, gave new and experienced Rails users practical tools for staying agile and competitive in an industry being transformed by fast-paced innovation. For four intense days, developers engaged directly with more than 100 expert speakers, learning how to exploit the popular framework's newest features to solve problems and build businesses."

Full Story (comments: none)

Web sites

Racetrack 1.0 repository announced

VMware has announced the first public release of the Racetrack Repository. "Racetrack is designed to store and display the results of automated tests. At VMware, over 2,000,000 test results have been stored in Racetrack Repository. Over 25 different teams use the repository to report results. It has a very simple data model, just three basic tables: ResultSet, which stores information about a set of tests (Product, Build, etc.); Result, which stores information about the testcase itself; and ResultDetail, which stores the details of each verification performed within the test. ResultDetails also include screenshots and log files, making it easy for the triage engineer to determine the cause of the failure."

Full Story (comments: none)
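
The announcement names the three tables but not their full layout; a toy sketch of such a schema, using SQLite from Python purely for illustration (columns beyond Product and Build are assumptions, not Racetrack's actual definitions), might look like this:

    # Illustrative only: a minimal three-table schema in the spirit of the
    # ResultSet / Result / ResultDetail model described above.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE ResultSet (
            id      INTEGER PRIMARY KEY,
            product TEXT,
            build   TEXT
        );
        CREATE TABLE Result (
            id           INTEGER PRIMARY KEY,
            resultset_id INTEGER REFERENCES ResultSet(id),
            testcase     TEXT,
            outcome      TEXT
        );
        CREATE TABLE ResultDetail (
            id         INTEGER PRIMARY KEY,
            result_id  INTEGER REFERENCES Result(id),
            message    TEXT,
            screenshot TEXT,   -- e.g. a path to an attached file
            logfile    TEXT
        );
    """)
    conn.execute("INSERT INTO ResultSet (product, build) VALUES ('demo', '1234')")
    conn.commit()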

Page editor: Forrest Cook


Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds