
LWN.net Weekly Edition for July 16, 2009

Is pre-linking worth it?

By Jake Edge
July 15, 2009

The recent problem with prelink in Fedora Rawhide has led some to wonder what advantages pre-linking actually brings—and whether those advantages outweigh the pain it can cause. Pre-linking can reduce application startup time—and save some memory as well—but there are downsides, not least the possibility of an unbootable system, as some Rawhide users encountered. The advantages are small enough, and hard enough to quantify completely, that questions arise about whether pre-linking is justified as the default for Fedora.

Linux programs typically consist of a binary executable file that refers to multiple shared libraries. These libraries are loaded into memory once and shared by multiple executables. In order to make that happen, the dynamic linker (i.e. ld.so) needs to change the binary in memory such that any addresses of library objects point to the right place in memory. For applications with many shared libraries—GUI programs for example—that process can take some time.
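The number of shared libraries involved is easy to see on a running system; a minimal sketch, assuming a Linux system with /proc mounted (the listing for a GUI program is typically far longer):

```python
# List the shared objects currently mapped into this process, as recorded
# in /proc/self/maps (assumes Linux; ldd shows a similar list for a binary
# before it runs).
libs = sorted({line.split()[-1]
               for line in open("/proc/self/maps")
               if ".so" in line.split()[-1]})
for lib in libs:
    print(lib)
```

Every one of those objects is a source of relocations the dynamic linker must resolve at startup.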

The idea behind pre-linking is fairly simple: reduce the amount of time the dynamic linker needs to spend doing these address relocations by doing it in advance and storing the results. The prelink program processes ELF binaries and shared libraries in much the same way that ld.so would, and then adds special ELF sections to the files describing the relocations. When ld.so loads a pre-linked binary or library, it checks these sections and, if the libraries are loaded at the expected location and the library hasn't changed, it can do its job much more quickly.
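The fast-path decision can be sketched roughly as follows (a hypothetical illustration of the idea, not glibc's actual implementation; the function and structure names are made up):

```python
# Hypothetical sketch of the check ld.so performs for a pre-linked binary;
# the names and data structures here are illustrative, not glibc's.

def can_use_prelink_info(recorded_deps, loaded_deps):
    """recorded_deps: {library path: (expected base address, checksum)}
    as stored by prelink; loaded_deps: the same mapping observed at load
    time.  Returns True when the stored relocations are still valid."""
    for path, (expected_base, checksum) in recorded_deps.items():
        actual = loaded_deps.get(path)
        if actual is None:                  # dependency list has changed
            return False
        actual_base, actual_checksum = actual
        if actual_base != expected_base:    # library not at pre-linked address
            return False
        if actual_checksum != checksum:     # library was upgraded/modified
            return False
    return True                             # skip most of the relocation work

recorded = {"/lib/libc.so.6": (0xb7000000, "abc123")}
print(can_use_prelink_info(recorded, {"/lib/libc.so.6": (0xb7000000, "abc123")}))
print(can_use_prelink_info(recorded, {"/lib/libc.so.6": (0xb7200000, "abc123")}))
```

If any check fails, ld.so simply falls back to the normal, slower relocation path.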

But there are a few problems with that approach. For one thing, it makes the location of shared libraries very predictable. One of the ideas behind address space layout randomization (ASLR) is to randomize these locations each time a program is run—or a library loaded—so that malicious programs cannot easily and reproducibly predict addresses. On Fedora and Red Hat Enterprise Linux (RHEL) systems, prelink is run every two weeks with a parameter requesting random addresses to alleviate this problem, but the addresses do stay fixed over that period.
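The randomization itself is easy to observe; a sketch assuming Linux with /proc available (whether the two addresses actually differ depends on the kernel's randomize_va_space setting):

```python
import subprocess, sys

# Start two fresh processes and print where the first memory mapping of
# each landed; with ASLR enabled the two addresses usually differ.
code = "print(open('/proc/self/maps').readline().split('-')[0])"
runs = [subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True).stdout.strip()
        for _ in range(2)]
print(runs)
```

Pre-linking fixes library addresses in advance, which is exactly what this kind of randomization is meant to prevent.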

In addition, whenever applications or libraries are upgraded, prelink must be run again. The linker is smart enough to recognize the situation and revert to its normal linking process when something has changed, but the advantage that prelink brings is lost until the pre-linking is redone. Also, the kernel randomly locates the VDSO (virtual dynamically-linked shared object) "library", which, on 32-bit systems, can overlap one of the libraries, requiring some address relocation anyway. Overall, pre-linking is a bit of a hack, and it is far from clear that its benefits are substantial enough to overcome that.

Fedora and RHEL enable pre-linking by default, while most other distributions make prelink available but seem unconvinced that the benefits are substantial enough to justify making it the default. Because pre-linking is a very system-dependent feature, hard performance numbers are difficult to find. It certainly helps in some cases, but is it really something that everyone needs?

Matthew Miller brought that question up on the fedora-devel mailing list:

I see [prelink] as adding unnecessary complexity and fragility, and it makes forensic verification difficult. Binaries can't be verified without being modified, which is far from ideal. And the error about dependencies having changed since prelinking is disturbingly frequent.

On the other hand, smart people have worked on it. It's very likely that those smart people know things I don't. I can't find any good numbers anywhere demonstrating the concrete benefits provided by prelink. Is there data out there? [...]

Even assuming a benefit, the price may not be worth it. SELinux gives a definite performance hit, but it's widely accepted as being part of the price to pay for added security. Enabling prelink seems to fall on the other side of the line. What's the justification?

Glibc maintainer Ulrich Drepper noted that pre-linking avoids most or all of the cost of relocations, while also pointing out that the relatively new symbol table hashing feature in the GNU toolchain reduces the gain from pre-linking. He also described an additional benefit: memory pages that do not require changes for relocations will not be copied (due to copy-on-write) and can thus be shared between multiple processes running the same executable. But his primary motivation may have more to do with his workflow: "Note, also small but frequently used apps benefit. I run gcc etc a lot and like every single saved cycle."

The effect of pre-linking can be measured using the LD_DEBUG environment variable, as Drepper described. Jakub Jelinek, the author of prelink, posted some results for OpenOffice.org Writer showing an order-of-magnitude difference in the time spent doing relocations between pre-linked and regular binaries. Those results are impressive but, at least for long-running programs, startup time doesn't really dominate—desktop applications and often-used utilities are the likely beneficiaries. As Miller puts it:
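The measurement amounts to running any program with LD_DEBUG=statistics set and reading the dynamic linker's report; a sketch, assuming a glibc-based system (the shell equivalent is simply `LD_DEBUG=statistics /bin/true`):

```python
import os, subprocess

# Run a trivial program with glibc's dynamic-linker statistics enabled;
# ld.so writes its report, including relocation timings, to stderr.
env = dict(os.environ, LD_DEBUG="statistics")
out = subprocess.run(["/bin/true"], env=env, capture_output=True, text=True)
for line in out.stderr.splitlines():
    if "reloc" in line or "startup" in line:
        print(line.strip())
```

Comparing those numbers for a pre-linked and a regular binary shows how much relocation work prelink is actually saving.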

If I can get a 50% speed up to a program's startup times, that sounds great, but if I then leave that program running for days on end, I haven't actually won very much at all -- but I still pay the price continuously. (That price being: fragility, verifiability, and of course the prelinking activity itself.)

For 32-bit processors, though, which are those most likely to benefit from the memory savings, there is still the VDSO overlap problem. John Reiser did an experiment using cat and found that glibc needed to be dynamically relocated fairly frequently:

This means that glibc must be dynamically relocated about 10% of the time anyway, even though glibc has been pre-linked, and even though /bin/cat is near minimal in its use of shared libraries. When a GNOME app uses 50 or more pre-linked shared libs, as claimed in another thread on this subject, then runtime conflict and expense are even more likely.

There doesn't seem to be much interest in removing the prelink default for Fedora, but one has to wonder, if the savings are as large and widespread as people seem to think, why other distributions have been reluctant to adopt it. Part of the reason may be the possibility of a prelink bug rendering systems unbootable, or a reluctance to rely on something that requires regularly modifying binaries and libraries to keep everything in sync. The security issues may also play into their thinking, though Jelinek argues that security-sensitive programs should be position-independent executables (PIE) that are not pre-linked, and thus have ASLR done for every execution.

While not impossible, a problem like the one Rawhide suffered seems unlikely to occur in more polished, non-development releases. Though prelink does provide a benefit, it may be a bit hard to justify as time goes on. For some, who are extremely sensitive to startup time costs, it may make a great deal of sense, but it may well be that for the majority of users, the annoyance and dangers are just not worth it.

Comments (54 posted)

Maemo moving to Qt

July 15, 2009

This article was contributed by Nathan Willis

At the Gran Canaria Desktop Summit (GCDS) on July 4th, Nokia's Maemo marketing manager Quim Gil announced that, beginning with the Harmattan release expected in 2010, the company would adopt the Qt toolkit for Maemo's application and user interface layer, replacing the Hildon user interface and GTK components that have served that function since the platform's debut. The announcement was met by a mixed reaction from the Maemo development community, which is already faced with significant API incompatibilities scheduled for the still-to-come Fremantle release, but most agreed that the move was inevitable in light of Nokia's acquisition of Qt creator Trolltech.

Gil presented his talk during the combined GNOME and KDE portion of GCDS, starting with an overview of the current Maemo platform and explaining the upcoming changes in Fremantle before addressing the move towards Qt. "Fremantle" is the code name for Maemo 5.0, and although the software development kit (SDK) has been released, the software and the new hardware devices on which it will run have not. Fremantle will retain many of the same components on which earlier Maemo devices were built, but also introduces some new ones, including the PulseAudio sound server, an X.org X Window implementation, and the Upstart init replacement.

Fremantle will also support a community-maintained Qt toolkit, but the Maemo environment and Nokia applications will remain on GTK and Hildon. Starting with Harmattan, Gil explained, the GTK/Hildon and Qt frameworks will swap places: the core applications and interface will be written for Qt, and GTK/Hildon will be supported through a community-maintained stack. He went on to explain that Nokia will continue to use and contribute to numerous middleware components from the GNOME stack that have always been pieces of Maemo — including GConf, GVFS, and GLib, among others — and emphasized that the move to Qt would bring Maemo a step closer to being a traditional Linux platform, rather than a partially-compatible, niche variant.

Regarding the decision to move from GTK/Hildon to Qt, Gil's talk cited two factors. First, the Qt framework is available on desktop systems and handheld devices like Symbian-powered phones, making it easier for application developers to use one tool chain and one API across a variety of platforms (although GTK is available on desktop systems, Hildon is designed for mobile devices). Second, Nokia plans on furthering development of the Qt toolkit through its QtMobility project, which will develop entirely new APIs for mobile devices running Symbian, Maemo, or Windows CE.

Developer reaction

Responses to Nokia's announcement hit Maemo-related blogs and discussion boards almost immediately. Some expressed optimism about the switch, but many more expressed their reservations — not about Qt itself, but about the sudden switch from one toolkit to another, and the accompanying switch from C to C++ as the core language. In the Maemo Talk forum thread discussing the announcement, several posters expressed concern that neither Fremantle nor Harmattan would officially support both the old and the new toolkit, thus leaving third-party developers without a smooth transition plan.

Furthermore, the difficulty of the toolkit switch between Fremantle and Harmattan is compounded by the fact that Fremantle will break compatibility with the Maemo 4.x-series, thus forcing two consecutive rewrites onto developers. Others in the thread questioned the timing of the announcement, since Fremantle has yet to be released. According to forum member "pycage", "This story gives me the impression that Fremantle, from a developer's point of view, is already obsolete even before it was released."

Murray Cumming of embedded Linux company Openismus observed on his blog "it’s clearly a rather arbitrary and disruptive decision. I suspect that some managers are dictating the Nokia-wide use of Qt even in projects that don’t otherwise share code, without understanding the difficulty of the change. UI code is never just a thin layer that can be replaced easily, even if it looks like just another block in the architecture diagram. Likewise, hundreds of C coders cannot become capable C++ coders overnight. I expect great difficulties and delays as a result of the rewrites [...]"

Gil, however, defended the move to forum members, noting that "providing commercial support on both frameworks [GTK/Hildon and Qt] for both releases [Fremantle and Harmattan] implies an amount of work that we simply can't nor want to commit [to]." He responded to developers' fears of two consecutive releases breaking compatibility by assuring them that the transition would be smooth because of the consistent middleware layer, and noted that the advance timing of the announcement itself was an effort to give early warning so that developers could have adequate time to prepare. He also said that it was too early to draw conclusions about compatibility across releases, since many of the details of Harmattan are still unknown.

Nokia and Qt ... and Symbian

Uncertainty about transitioning between two user interface toolkits aside, no one seemed surprised by the announcement that Maemo would move to Qt, given that Nokia acquired Trolltech — subsequently renamed Qt Software — in January of 2008. As Gil alluded, moving Maemo to Qt allows Nokia to more efficiently repurpose its engineering resources toward the development of a single software stack. More importantly, however, by using Qt on Maemo Nokia will be "eating its own dogfood," and can thus more actively promote Qt as a commercial solution to application developers.

Several in the community noted that, in the market for Qt, Maemo is considerably smaller than desktop Linux, which itself is considerably smaller than the smartphone operating system Symbian, which Nokia acquired in June of 2008. Shortly after the acquisition, Nokia announced plans to release Symbian as open source software, and set up the Symbian Foundation to manage the code. The company released its first preview of Qt running on top of Symbian in October of that year, and has continued to develop it as a "technology preview," highlighting its cross-platform capabilities.

Maemo Talk forum member "eiffel" speculated that Nokia's plan might be to somehow merge Maemo and Symbian into a single OS, but Gil's talk presented a more straightforward plan: he described Maemo as the platform best suited to high-end mobile hardware like mobile Internet devices (MID), occupying the middle slot in between Symbian-powered phones and full Linux desktops. Maemo bloggers Daniel Gentleman and Jamie Bennett both observed that by acquiring Qt and Symbian, Nokia was better positioning itself to compete against the full range of handheld devices that are soon expected to be running Android.

Gil commented in the Maemo Talk discussion that Nokia plans to develop new Qt APIs specific to handheld devices under the QtMobility project. The QtMobility site lists three new APIs: connections management, contacts, and a service framework. Source code for all three is available from a public Git repository, although none have yet been bundled for stable release. Gil indicated that the new APIs will be shared across the platforms, but that Maemo and Symbian will not share other code. "The interest is to align on the API level. Then each platform will push its own identity and strengths based on the target users and form factors of the products released. This means that UI and pre-installed application might differ, and in some cases even significantly."

What's next?

Moving Maemo from GTK/Hildon to Qt may be painful in the short term — at least for some developers — but the long term benefits of a single toolkit for both Linux-based and Symbian-based platforms no doubt made the decision easy for Nokia. The big question remains — regardless of whether it uses GTK/Hildon or Qt — where does Nokia intend to take Maemo itself? The platform has plenty of fans in the open source community, but it remains a niche OS.

Since its debut, Maemo has shipped on only three Nokia devices: the 770, N800, and N810 Internet Tablets, the last of which was launched in 2007. Although a community effort exists to extend the platform's hardware support, and it can run on a BeagleBoard development motherboard, to date no non-Nokia consumer product has ever adopted Maemo as its operating system. Fremantle will reportedly launch on a new generation of device, running on OMAP3-based hardware that Nokia notably does not refer to as an Internet Tablet like its predecessors, opting instead to use generic terminology like "device" and "hardware."

It would require considerable reading between the lines to speculate that Nokia intends to ship Maemo on its high-end smartphones, especially considering that the company has continued to push Symbian as its platform of choice for its high-end N-series and E-series phones four years after launching Maemo. But unless Nokia plans to offer more products in the MID product category, it does seem strange to expend resources maintaining an entire operating system for a single device, especially while touting the multi-platform reach of Qt as one of its strengths.

Comments (10 posted)

Ksplice provides updates without reboots

By Jake Edge
July 10, 2009

While Linux systems generally have a good reputation for uptime, there are sometimes unavoidable reasons that a reboot is required. Typically, that is because of a kernel update, especially one that fixes a security hole. Companies that have long-running processes, or that require uninterrupted availability, are not particularly fond of this requirement. A new company, Ksplice, Inc., has come up with a way to avoid these reboots by hot-patching a running kernel.

The technique used by the company, unsurprisingly called Ksplice, is free software, which we looked at last November on the Kernel page. (An earlier look, from April 2008, may also be instructive). The basic idea is that by doing a comparison of the original and patched kernels, one can build a kernel module that will patch the new code into the running kernel.

For simple code changes, the process is fairly straightforward. Each kernel is built with a special set of flags to simplify determining which functions have changed as a result of the patch. Those changes are packaged up into the module, and then applied when the module is loaded. Then there is the small matter of ensuring that the kernel is not currently executing any of the functions to be replaced. In order to do that, the kernel is halted while each thread is examined; if none are running the affected code—or have a return address into the code on their stack—the patch is made and the kernel can go on its way. Otherwise, Ksplice delays for a short time and tries again, eventually giving up if it cannot satisfy that condition.
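That safety check can be sketched as follows (a simplified illustration of the idea, not Ksplice's actual code; addresses and structures are made up): every thread's instruction pointer and stack return addresses are compared against the address ranges of the functions being replaced, and the patch proceeds only if there are no hits.

```python
# Simplified sketch of Ksplice's quiescence check; the data structures
# here are illustrative, not the real kernel implementation.

def safe_to_patch(threads, patched_ranges):
    """threads: list of (instruction pointer, [return addresses on stack]);
    patched_ranges: list of (start, end) covering the functions being
    replaced.  Returns True only if no thread is in, or can return into,
    the old code."""
    def inside(addr):
        return any(start <= addr < end for start, end in patched_ranges)

    for ip, stack in threads:
        if inside(ip) or any(inside(ra) for ra in stack):
            return False        # a thread is (or will be back) in old code
    return True                 # safe: redirect the old functions now

ranges = [(0xc0100000, 0xc0100200)]
print(safe_to_patch([(0xc0500000, [0xc0400000])], ranges))  # patch now
print(safe_to_patch([(0xc0100010, [])], ranges))            # retry later
```

When the check fails, the real implementation simply resumes the kernel and retries a moment later, as described above.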

There are several kinds of changes that are much more difficult to handle, particularly data structure changes. For those, someone needs to analyze the changes and write code to handle munging the data structures appropriately. Ksplice has an infrastructure that allows this data structure manipulation to be done while the kernel is halted, but the code itself is, or can be, non-trivial. To a great extent, it is the knowledge of how to do this with Ksplice that the company is offering as a service.

As a test of the technology, the Ksplice developers looked at all of the security problems listed for the kernel over a three-year period (May 2005 to May 2008). Of the 64 Common Vulnerabilities and Exposures (CVE) entries for the kernel that had an impact worse than denial of service, Ksplice was able to patch 56 without any additional code being written. The other eight could be handled with a small amount of code—an average of 17 lines per patch.

As a further demonstration of the Ksplice technique, Ksplice, Inc. is currently offering a free-beer service for Ubuntu 9.04 (Jaunty Jackalope) users. Ksplice Uptrack will allow those users to update their kernels without rebooting them. The Ksplice folks will be tracking the Ubuntu kernel git tree, turning those changes (security and bug fixes) into modules that can be retrieved and applied with the Uptrack client. As described in the FAQ, Uptrack will support the latest release of Ubuntu: 9.04 for now, switching over to 9.10 (Karmic Koala) when that is released.

As noted, Ksplice itself is free software, available under the GPLv2, and the Uptrack client is as well. That leads to a service-oriented free software business model for Ksplice, Inc. While their exact plans are not yet clear, providing similar updates for enterprise kernels (RHEL and SUSE), but charging for those, would seem an obvious next step. Other areas for expansion include other operating systems as well as user-space applications. In an interview, Waseem Daher, co-founder and COO of Ksplice, described the company's goal:

The long term vision is that, at the end of the day, all updates will be hot updates — updates that don't require a reboot or an application restart. This is actually a big problem because if you look at technology used in data centers, no-one has a good solution for software updates, from as low level as your router or SAN, up to your virtualization solution, the operating system, the database, and the critical applications. Right now, all these updates require you either to reboot the system or restart the service.

This is a big pain point for sysadmins because, on the one hand you have to apply the updates so that you can fix important security problems, but on the other if you don't then you're vulnerable. When you do apply them, though, there's downtime and that's lost productivity. There's a real cost associated with the downtime. We want to take the technology that we've developed and use it to make life easier in the data center. That's the broad vision for where we're going with the company, and we're starting with Linux.

That's a rather ambitious vision, but one that seems in keeping with where things are headed. No matter how fast booting gets, it is still a major annoyance, for servers or desktops. Even restarting applications, particularly things like database servers or desktop environments, leads to lost time and productivity. Whether Ksplice, Inc. can expand their offerings to reach that goal is an open question.

One of the problems that Ksplice will face is competition. In the Linux world, that could come from distributors deciding to start making Ksplice modules themselves, and either charging their customers for them, or adding that capability to their subscription-based support offerings. In the proprietary, closed source world, Ksplice will have to work with the vendors of operating systems and applications so that it can access the source code. Those vendors are most certainly going to want a piece of the pie for that access.

There may also be technical hurdles. One botched kernel update that introduced a serious flaw—security or otherwise—could ruin the company's reputation. That, in turn, might make it much harder to win over new customers. Hot-patching is a subtle, difficult problem to solve completely.

On the other hand, Ksplice has an excellent pedigree: it was started by four MIT students based on co-founder Jeff Arnold's master's thesis. Ksplice also won MIT's $100,000 entrepreneurship competition—against some stiff competition, one would guess. Arnold's reasons for looking at the problem will resonate with system administrators everywhere: he delayed patching an MIT server to avoid downtime on a busy system, so an attacker took advantage of that window.

It will be interesting to watch both Ksplice and the general idea of hot-patching over the coming years. When Ksplice was first introduced, a Microsoft patent on the technique was noted on linux-kernel, along with protestations of rather old (PDP-11) prior art. How that plays into the fortunes of Ksplice and others who come along will be interesting—potentially sickening—as well.

Comments (18 posted)

Help Wanted

Are you able to sell web advertisements, knowledgeable about free software and LWN, and interested in taking on some work? If so, LWN.net would like to talk to you; please read this for details.

Comments (none posted)

Page editor: Jonathan Corbet

Security

Crying wolf over OpenSSH

By Jake Edge
July 15, 2009

In the security world, there is always tension between under- and over-reporting of vulnerabilities: not only between the "full" and "responsible" disclosure camps, but also among those trying to make sure that users are aware of the most recent attacks. Sometimes that can lead to reports that eventually turn out to be incomplete, overstated, or flat-out wrong. There is value even in incorrect reports, though; at a minimum they can raise the profile of the most vulnerable of services—reminding administrators to update and/or reconfigure the affected program—which may reduce the impact of the next exploit.

For many reasons, ssh vulnerabilities—or purported vulnerabilities—are treated differently than others. If this had been a report of yet another content management system cross-site scripting flaw or wireshark dissector bug, it would not have gotten much, if any, notice. But, ssh is one service that is turned on for nearly every server on the internet. Without ssh, many administrators couldn't access the server to handle important tasks—security updates for example.

In addition, many internet servers have just a few trusted users, which may—unfortunately—make their administrators rather sanguine about patching local privilege escalation flaws. That makes a way to subvert ssh and gain that local access suddenly much more dangerous. Furthermore, many administrators allow root to log in remotely, so an ssh vulnerability might lead to root privileges without the need for an additional privilege escalation flaw.

It is safe to say that exploitable ssh vulnerabilities are very high on the list of things that keep system administrators up at night. So that makes it rather easy to stir up a firestorm of publicity by reporting one. The Internet Storm Center (ISC) was one of the first to report on the rumored OpenSSH vulnerability (which we also passed along). The whole thing got started with a post to the full-disclosure mailing list that purported to show an ssh "zero-day" exploit compromising a server in New Zealand.

It wasn't very long before folks realized that it was likely the result of a "brute force" attack against a user password, but there was enough "chatter" of various sorts (see the updates on the ISC post) that it was difficult to be sure. In the end, we still aren't completely sure, but OpenSSH developer Damien Miller posted his belief that there was no ssh zero-day; ISC also posted a notice calling the vulnerability reports bogus. In the absence of any more information, those would seem to close the book on this vulnerability.

While it was a bit of a fire drill, it is likely that the reports led to some system administrators taking a look at their ssh installation to make sure it was up-to-date. They may also have tightened up their configuration in ways that might lessen the chances of a vulnerability affecting their systems. Disallowing root logins, requiring key-based instead of password-based logins, or restricting ssh access to certain IP addresses are all steps that administrators may have taken. Perhaps it was needless in this case, but a general tightening up of ssh configuration is likely to be helpful in fending off brute-force or other attacks down the road.
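Those hardening steps correspond to a handful of sshd_config directives; a minimal example (the user name and address pattern are placeholders, not a recommendation for any particular site):

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # no direct root logins over ssh
PasswordAuthentication no     # keys only; defeats password brute-forcing
PubkeyAuthentication yes
AllowUsers alice@192.0.2.*    # restrict who may connect, and from where
```

Address-based restrictions can also be enforced at the firewall or in TCP wrappers, depending on the site's setup.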

Comments (9 posted)

Brief items

DHCP server can take over client (The H)

The H warns of a DHCP client vulnerability which allows a hostile server to take over the system. "According to Marcus Meissner from SUSE, the vulnerability doesn't affect Red Hat and SUSE because their source code includes the FORTIFY_SOURCE feature. With it, the GNU Compiler Collection (GCC) knows how large the buffer is, including the maximum size. The glibc gets the buffer size information and uses a version of strcpy() that checks and makes sure that no more than 20 bytes are copied. If the buffer is greater, then the program is aborted."

Comments (35 posted)

New vulnerabilities

acroread: multiple vulnerabilities

Package(s):acroread CVE #(s):CVE-2009-1492 CVE-2009-1493
Created:July 13, 2009 Updated:July 15, 2009
Description:

From the Gentoo advisory:

Arr1val reported that multiple methods in the JavaScript API might lead to memory corruption when called with crafted arguments (CVE-2009-1492, CVE-2009-1493).

Alerts:
Gentoo 200907-06 acroread 2009-07-12

Comments (none posted)

apache: multiple vulnerabilities

Package(s):apache CVE #(s):CVE-2009-1890 CVE-2009-1891
Created:July 9, 2009 Updated:December 7, 2009
Description: Apache has two denial of service vulnerabilities. From the Mandriva alert: The stream_reqbody_cl function in mod_proxy_http.c in the mod_proxy module in the Apache HTTP Server before 2.3.3, when a reverse proxy is configured, does not properly handle an amount of streamed data that exceeds the Content-Length value, which allows remote attackers to cause a denial of service (CPU consumption) via crafted requests (CVE-2009-1890). Fix a potential Denial-of-Service attack against mod_deflate or other modules, by forcing the server to consume CPU time in compressing a large file after a client disconnects (CVE-2009-1891).
Alerts:
Mandriva MDVSA-2009:323 apache 2009-12-07
rPath rPSA-2009-0154-1 httpd 2009-11-24
Red Hat RHSA-2009:1580-02 httpd 2009-11-11
Fedora FEDORA-2009-8812 httpd 2009-08-20
Ubuntu USN-802-2 apache2 2009-08-19
CentOS CESA-2009:1205 httpd 2009-08-10
Red Hat RHSA-2009:1205-01 httpd 2009-08-10
Slackware SSA:2009-214-01 httpd 2009-08-03
Debian DSA-1834-2 apache2 2009-07-31
Mandriva MDVSA-2009:168 apache 2009-07-28
Debian DSA-1834 apache2 2009-07-15
Red Hat RHSA-2009:1156-01 httpd 2009-07-14
Ubuntu USN-802-1 apache2 2009-07-13
CentOS CESA-2009:1148 httpd 2009-07-14
Gentoo 200907-04 apache 2009-07-12
Red Hat RHSA-2009:1148-01 httpd 2009-07-09
Mandriva MDVSA-2009:149 apache 2009-07-09
rPath rPSA-2009-0142-1 httpd 2009-11-12
rPath rPSA-2009-0142-2 httpd 2009-11-12
CentOS CESA-2009:1580 httpd 2009-11-12
SuSE SUSE-SA:2009:050 apache2,libapr1 2009-10-26

Comments (none posted)

camlimages: integer overflow

Package(s):camlimages CVE #(s):CVE-2009-2295
Created:July 14, 2009 Updated:June 1, 2010
Description: From the Debian advisory: Tielei Wang discovered that CamlImages, an open source image processing library, suffers from several integer overflows which may lead to a potentially exploitable heap overflow and result in arbitrary code execution.
Alerts:
Gentoo 201006-02 camlimages 2010-06-01
Fedora FEDORA-2009-7491 ocaml-camlimages 2009-07-11
Fedora FEDORA-2009-7494 ocaml-camlimages 2009-07-11
Debian DSA-1832-1 camlimages 2009-07-13
Mandriva MDVSA-2009:286 ocaml-camlimages 2009-10-21
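The classic pattern behind such bugs is an allocation size computed in fixed-width arithmetic: width times height times bytes-per-pixel can wrap around, producing a small buffer that the full image data then overflows. A sketch of the arithmetic (simulating 32-bit unsigned wraparound in Python; this is an illustration of the bug class, not CamlImages's actual code):

```python
# Simulate the 32-bit unsigned arithmetic a C image library might use
# when computing the allocation size for a width x height image.
MASK32 = (1 << 32) - 1

def alloc_size_32(width, height, bytes_per_pixel):
    return (width * height * bytes_per_pixel) & MASK32

# A plausible-looking header: 640x480, 4 bytes per pixel.
print(alloc_size_32(640, 480, 4))        # 1228800: sane
# A malicious header: 65536 * 65536 * 4 == 2**34, which wraps to 0.
print(alloc_size_32(65536, 65536, 4))    # 0: tiny allocation, huge copy
```

A decoder that then writes the full pixel data into the wrapped-size buffer produces exactly the heap overflow described above.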

Comments (none posted)

dbus: policy bypass

Package(s):dbus CVE #(s):CVE-2009-1189
Created:July 14, 2009 Updated:May 3, 2011
Description: From the Ubuntu advisory: It was discovered that the D-Bus library did not correctly validate signatures. If a local user sent a specially crafted D-Bus key, they could spoof a valid signature and bypass security policies.
Alerts:
SUSE SUSE-SR:2011:008 java-1_6_0-ibm, java-1_5_0-ibm, java-1_4_2-ibm, postfix, dhcp6, dhcpcd, mono-addon-bytefx-data-mysql/bytefx-data-mysql, dbus-1, libtiff/libtiff-devel, cifs-mount/libnetapi-devel, rubygem-sqlite3, gnutls, libpolkit0, udisks 2011-05-03
Red Hat RHSA-2010:0018-01 dbus 2010-01-07
CentOS CESA-2010:0018 dbus 2010-01-08
Mandriva MDVSA-2009:256-1 dbus 2009-12-05
Mandriva MDVSA-2009:256 dbus 2009-10-06
Debian DSA-1837-1 dbus 2009-07-18
Ubuntu USN-799-1 dbus 2009-07-13

Comments (none posted)

dhcp: arbitrary code execution

Package(s):dhcp3 CVE #(s):CVE-2009-0692
Created:July 15, 2009 Updated:January 27, 2010
Description:

From the Red Hat advisory:

The Mandriva Linux Engineering Team discovered a stack-based buffer overflow flaw in the ISC DHCP client. If the DHCP client were to receive a malicious DHCP response, it could crash or execute arbitrary code with the permissions of the client (root). (CVE-2009-0692)

Alerts:
Ubuntu USN-803-2 dhcp3 2010-01-27
Mandriva MDVSA-2009:312 dhcp 2009-12-03
Fedora FEDORA-2009-8344 dhcp 2009-08-07
Debian DSA-1833-2 dhcp3 2009-08-25
Mandriva MDVSA-2009:151 dhcp 2009-07-15
CentOS CESA-2009:1154 dhcp 2009-07-15
Ubuntu USN-803-1 dhcp3 2009-07-14
SuSE SUSE-SA:2009:037 dhcp-client 2009-07-15
Slackware SSA:2009-195-01 dhcp 2009-07-15
Red Hat RHSA-2009:1136-01 dhcp 2009-07-14
Red Hat RHSA-2009:1154-02 dhcp 2009-07-14
Gentoo 200907-12 dhcp 2009-07-14
Debian DSA-1833-1 dhcp3 2009-07-14
Fedora FEDORA-2009-9075 dhcp 2009-08-27

Comments (none posted)

dhcp: arbitrary file overwrite

Package(s):dhcp CVE #(s):CVE-2009-1893
Created:July 15, 2009 Updated:July 16, 2009
Description:

From the Red Hat advisory:

An insecure temporary file use flaw was discovered in the DHCP daemon's init script ("/etc/init.d/dhcpd"). A local attacker could use this flaw to overwrite an arbitrary file with the output of the "dhcpd -t" command via a symbolic link attack, if a system administrator executed the DHCP init script with the "configtest", "restart", or "reload" option. (CVE-2009-1893)

Alerts:
CentOS CESA-2009:1154 dhcp 2009-07-15
Red Hat RHSA-2009:1154-02 dhcp 2009-07-14

Comments (none posted)

dhcp: denial of service

Package(s):dhcp3 CVE #(s):CVE-2009-1892
Created:July 15, 2009 Updated:December 4, 2009
Description:

From the Debian advisory:

Christoph Biedl discovered that the DHCP server may terminate when receiving certain well-formed DHCP requests, provided that the server configuration mixes host definitions using "dhcp-client-identifier" and "hardware ethernet". This vulnerability only affects the lenny versions of dhcp3-server and dhcp3-server-ldap. (CVE-2009-1892)

Alerts:
Mandriva MDVSA-2009:312 dhcp 2009-12-03
Fedora FEDORA-2009-8344 dhcp 2009-08-07
Debian DSA-1833-2 dhcp3 2009-08-25
Gentoo 200908-08 dhcp 2009-08-18
Mandriva MDVSA-2009:172 dhcp 2009-07-28
Fedora FEDORA-2009-9075 dhcp 2009-08-27
Debian DSA-1833-1 dhcp3 2009-07-14

Comments (none posted)

djbdns: unconstrained offsets

Package(s):djbdns CVE #(s):CVE-2009-0858
Created:July 14, 2009 Updated:July 15, 2009
Description: From the Debian advisory: Matthew Dempsky discovered that Daniel J. Bernstein's djbdns, a Domain Name System server, does not constrain offsets in the required manner, which allows remote attackers with control over a third-party subdomain served by tinydns and axfrdns, to trigger DNS responses containing arbitrary records via crafted zone data for this subdomain.
Alerts:
Debian DSA-1831-1 djbdns 2009-07-13

Comments (none posted)

libtiff: arbitrary code execution

Package(s):tiff CVE #(s):CVE-2009-2347
Created:July 14, 2009 Updated:March 8, 2011
Description: From the Ubuntu advisory: Tielei Wang and Tom Lane discovered that the TIFF library did not correctly handle certain malformed TIFF images. If a user or automated system were tricked into processing a malicious image, an attacker could execute arbitrary code with the privileges of the user invoking the program.
Alerts:
Gentoo 201209-02 tiff 2012-09-23
Oracle ELSA-2012-0468 libtiff 2012-04-12
Mandriva MDVSA-2011:043 libtiff 2011-03-08
rPath rPSA-2010-0064-1 libtiff 2010-10-17
Mandriva MDVSA-2009:169-1 libtiff 2009-12-03
SuSE SUSE-SR:2009:014 dnsmasq, icu, libcurl3/libcurl2/curl/compat-curl2, Xerces-c/xerces-j2, tiff/libtiff, acroread_ja, xpdf, xemacs, mysql, squirrelmail, OpenEXR, wireshark 2009-09-01
Gentoo 200908-03 tiff 2009-08-07
Mandriva MDVSA-2009:169 libtiff 2009-07-28
CentOS CESA-2009:1159 libtiff 2009-07-23
Fedora FEDORA-2009-7775 libtiff 2009-07-19
Fedora FEDORA-2009-7724 libtiff 2009-07-19
Red Hat RHSA-2009:1159-01 libtiff 2009-07-16
Debian DSA-1835-1 tiff 2009-07-15
Mandriva MDVSA-2009:150 libtiff 2009-07-13
Ubuntu USN-801-1 tiff 2009-07-13

Comments (none posted)

mumbles: unsafe shell usage

Package(s):mumbles CVE #(s):
Created:July 13, 2009 Updated:July 15, 2009
Description:

From the Red Hat bugzilla entry:

The Firefox plugin uses os.system in an insecure fashion.

Version-Release number of selected component (if applicable): mumbles-0.4-1.fc10

        def open_uri(self, uri):
                mime_type = gnomevfs.get_mime_type(uri)
                application = gnomevfs.mime_get_default_application(mime_type)
                os.system(application[2] + ' "' + uri + '" &')

This would be much better written to use the subprocess module and use an argument list like [application[2], uri], or else by using the shell's own substitution mechanism like this:

os.environ['URI'] = uri
os.system(application[2] + ' "$URI" &')  
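The argument-list approach can be sketched as follows (a minimal example with a hypothetical open_uri_safely() helper; echo stands in here for the MIME handler that application[2] would supply in the original code):

```python
import subprocess

def open_uri_safely(handler, uri):
    # Passing the URI as a separate argv element means no shell ever
    # parses it: quotes, semicolons, and $() in a malicious URI reach
    # the handler as literal text instead of being executed.
    return subprocess.Popen([handler, uri])

# Under os.system(), this URI would break out of the quoting and run
# an attacker-controlled command; here it is just an argument string.
malicious = 'http://example.com/" ; rm -rf ~ ; echo "'
open_uri_safely("echo", malicious).wait()
```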
Alerts:
Fedora FEDORA-2009-7498 mumbles 2009-07-11

Comments (none posted)

sork-passwd-h3: cross-site scripting

Package(s):sork-passwd-h3 CVE #(s):CVE-2009-2360
Created:July 13, 2009 Updated:September 14, 2009
Description:

From the Debian advisory:

It was discovered that sork-passwd-h3, a Horde3 module for users to change their password, is prone to a cross-site scripting attack via the backend parameter.

Alerts:
Gentoo 200909-14 horde 2009-09-12
Debian DSA-1829-2 sork-passwd-h3 2009-07-14
Debian DSA-1829-1 sork-passwd-h3 2009-07-11

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current 2.6 development kernel is 2.6.31-rc3, released by Linus on July 13. There's lots of fixes, but this prepatch also includes a new driver for GPIO-based matrix keypads, the "osdblk" driver (which presents an object on an object storage device as a Linux block device), support for the loading of kernel module symbols in the performance counter tools, and the removal of the Intel Langwell USB OTG driver (because it depends on infrastructure which is not yet present in the mainline). The short-form changelog is in the announcement; full details can be found in the full changelog.

There have been no stable kernel updates over the last week.

Comments (none posted)

Kernel development news

Quotes of the week

To many kernel developers, the most scary ideas/patches are not the ones which are totally outlandish and crazy (such as making the kernel be able to parse and execute SQL-like statements as arguments to a modified readdir system call, in the grand but quixotic goal of unifying filesystems and databases), but the ones which are just sane enough to be initially appealing, but which ends up being a millstone making future development and backwards compatibility efforts extremely difficult. At least, this is the fear which I suspect drives the often harsh reception some have received (regardless of whether that reception was justified or not).
-- Ted Ts'o

This is why changing core MM is so worrisome - there's so much secret and subtle history to it, and performance dependencies are unobvious and quite indirect and the lag time to discover regressions is long.
-- Andrew Morton

Having this name is very convenient, people review my drivers like crazy.
-- Linus Walleij (Thanks to Alejandro Riveira Fernández)

Comments (none posted)

In brief

Listmanager. It appears that vger.kernel.org, the busy system which handles the bulk of the kernel-oriented mailing lists, is about to get a new mailing list manager. The new system is called "listmanager"; it is meant to be somewhat more efficient than the existing majordomo installation. The interface should stay about the same, though.

An early-stage version of the software has been posted for those who would like to play with it. Matti Aarnio, the author of the code, says: "Somebody will want to know if the sources will be available. Yes, but it is only my 3rd day of hacking at it yet..."

Btrfs and RAID. Speaking of early-stage software, David Woodhouse has posted an initial implementation of RAID5 and RAID6 support for Btrfs. It's not exactly functional yet, but a number of the pieces are in place. Some additional good news is that this work does not involve the addition of another RAID implementation to the kernel; instead, David has moved the MD implementation into common code so that it can be used in both places.

VFAT. Andrew Tridgell continues to work on the VFAT patent workaround patch. He has set up a directory on kernel.org which contains the latest version of the patch; there is also a README file which describes the interoperability problems which have been identified so far. Meanwhile, some developers are pushing for a return to the previous version of the patch, which simply took away the ability to create long file names. That patch removes some useful functionality, but it is also pretty well guaranteed not to cause interoperability problems.

Tridge does not appear to have given up on the long-name-only approach, though. Stay tuned; he may yet come up with a version which interoperates more universally.

Kmemleak. The kmemleak code was merged for 2.6.31; that means that testers are now beginning to post about memory leak reports. At this stage, it seems that kmemleak is still putting out a fair number of false reports - but it is also turning up real leaks. People who are interested in playing with kmemleak should (1) run 2.6.31-rc3 or later, and (2), look at these suggestions for evaluating leak reports posted by kmemleak author Catalin Marinas.

Comments (none posted)

Communicating requirements to kernel developers

By Jonathan Corbet
July 14, 2009
The 2009 kernel summit is planned for October in Tokyo. Over the years, your editor has observed that the discussion on what to discuss at the summit can sometimes be as interesting as the summit itself. Recently, the question of how user-space programmers can communicate requirements to the kernel community was raised. The ensuing discussion was short on definitive answers, but it did begin to clarify a problem in an interesting way.

For the curious, the entire thread can be found in the ksummit-2009-discuss archives. Matthew Garrett started things this way:

I've just run a session at the desktop summit in Gran Canaria on the functionality that userspace developers would like to see from the kernel. Some interesting things came out of it, but one of the major points was that people seemed generally unclear on how they could communicate those requirements to kernel developers. Worth discussing?

Dave Jones's response was instructive:

What exactly is the problem ? They know where linux-kernel@vger.kernel.org is, so why aren't they talking to us there?

To developers who are used to the ways of linux-kernel, and who are well established in that community, a question like this might make sense. If one were to poll developers who do not normally hang out in the kernel community, though, one might get an answer something like this:

  1. The volume on linux-kernel is far too high for ordinary people to cope with.

  2. Even if we could keep up with linux-kernel, the volume is still likely to bury anything we might post there.

  3. Kernel people speak their own language, making it hard to follow discussions, much less participate in them.

  4. If somebody does notice our request, they will probably flame it to a cinder without necessarily taking the time to understand it first.

  5. If they don't flame it, they will probably tell us to send a patch, but we're not kernel developers and thus not in a position to do that.

There can be no doubt that some communications problems can be easily blamed on the requesting side. If a feature request is phrased as a demand, it is unlikely to be received well. Kernel developers are beholden to demands from their employers, but from nobody else; like most other developers, they take a dim view of people who feel entitled to free work just because they want it. A classic example here would be the early Carrier Grade Linux specifications produced by OSDL; they read like a "to do" list handed to the kernel community, even if OSDL eventually claimed that it was not intended that way.

Another problem can be poorly-expressed requirements. Consider the early TALPA proposal, which posted a very clear set of low-level requirements. Unfortunately, they were too low-level, requiring features like the ability to intercept file close operations. Instead, TALPA (now fanotify) needed to express requirements like "we need a clean way to support proprietary malware-scanning software, and this is why." That disconnect set the project back significantly, and could well have killed a project whose developers showed less persistence or less willingness to learn. Clearly expressing requirements at the right level is never an easy task, but it's crucial.

Finally, some ideas just don't make sense in the kernel. Perhaps they cannot be implemented in a way which avoids security problems, does not break other features, and does not create long-term maintenance problems. Or perhaps there are better solutions in user space. A developer who goes to linux-kernel with this kind of request is likely to go away feeling like kernel developers are completely unwilling to listen to reason.

All of the above notwithstanding, there is some recognition that user-space developers have real difficulties in bringing requirements to the kernel community. Kernel developers tend to be busy, focused on their own projects, and not always entirely open to requests from outside the community. There is no mechanism for tracking feature requests, so it is very easy for them to be buried in the flood of email. The tone of discussions can be harsh, even though it truly has improved over the years. And so on.

This is not good. The kernel exists to provide for the needs of user space; if the kernel development community is not hearing what those needs are, it can only fail to satisfy them. So thinking about how to make it easier for user-space developers to communicate their requirements would seem to be worthwhile; chances are that space will be made at the summit for that topic. But there is no need to wait for the summit to start talking about how things could be improved.

Matthew Wilcox suggested the creation of a document on how user-space developers can interact with the kernel community. The idea makes sense (your editor may just try to help there), but this is not a problem which can be solved by documents alone.

James Bottomley described three broad categories of users needing changes to the kernel:

  1. Sophisticated developers who can write their own kernel extensions.
  2. Users who can get a kernel developer interested in their desired additions.
  3. Users who want features that no developers are interested in.

James points out that categories 1 and 2 can be helped with documentation and general outreach. He worries, though, that we have no way to help the third category of users, who are generally left with no way to get the kernel changed to meet their needs.

Ted Ts'o had a different taxonomy which he has put forward as a way to help understand the problem:

  1. Core kernel developers (or those who have access to such people). Core developers have the advantage of a high degree of trust in the community; that allows them to get features into the kernel with a relatively small amount of trouble. They are able to merge code which might well not pass muster if it came from a different source.

  2. Competent, but non-core kernel developers (and, again, people who have access to them). These developers have to work harder to justify their changes, but they are generally in a position to get changes merged as long as the work is good.

  3. Potentially competent developers with "patently bad design taste." Ted suggests that the frank nature of the kernel review process is intended mainly to weed out bad patches from this source.

  4. Users with no access to kernel development expertise, who must thus try to convince somebody else in the community to implement their desired feature for them. Ted divided this category into two subcategories, depending on whether there is an active kernel developer working in the user's area of interest or not.

Ted's thought is that this taxonomy can help users to understand why certain patches and ideas are treated the way they are. It can also be used to help develop ways to reach out to each specific group of users. Certainly the different groups need to hear different messages. One could argue that the existing documentation should be sufficient for people with kernel development skills, but there is relatively little help out there for those who must find a developer to do their work for them.

It is that last group which is most likely to be intimidated by the prospect of walking into linux-kernel and asking for features. The kernel community could really use a person who would take on the task of working with these users, helping them to clarify their requirements, connecting them with the appropriate developers, and tracking requests. The good news is that we do have such a person; the bad news is that it's Andrew Morton, who has one or two other things to do as well. The community would benefit from a person who worked something close to full time on the task of helping users and user-space developers get what they need from the kernel. That sort of position tends to be very hard to fund, though; as a result, it tends to stay vacant.

As was noted at the outset, this conversation did not produce much in the way of concrete conclusions. It is far from complete, though. If not before, it will be resumed in October at the full summit. Needless to say, your editor plans to be there; stay tuned.

Comments (15 posted)

Rootless X

By Jonathan Corbet
July 15, 2009
Complexity has long been known to be the enemy of security. A code base which is small and straightforward can be verified, with a reasonable degree of trust, to do what it is supposed to do and only that. As the code becomes more complex and harder to understand, that sort of verification becomes increasingly difficult. For this reason, developers try to separate code requiring privilege from that which doesn't. Any code which can be run in a nonprivileged mode is code which is relatively unlikely to create security problems, and the code which is left can, hopefully, be reviewed to a level sufficient to give the required degree of confidence in its security.

Now consider the X window system. It is a massive body of code with a relatively small development community. Some of the code is truly ancient and hasn't seen much real developer attention in years. As with most projects, some of the code is of higher quality than the rest. The X server performs a complex task - based on a complicated protocol - for almost every Linux user. Sometimes it is even exposed to the full Internet. And the X server runs as root, with full privileges. The actual number of security problems found in X over the years has been relatively small, but it is not hard to believe that it is more a matter of luck and a lack of attackers than the inherent quality of the X code base.

Worries about the X server are not new; that is why there has been discussion of running it in a nonprivileged mode for several years. Much of the work which has been done on display graphics has had this goal in mind. All that notwithstanding, Linux distributions still install X as a setuid program. It just has not been possible to enable a nonprivileged X server to get its job done without opening up the system as a whole.

Those days are just about done. The Moblin project is now claiming that it will be the first distribution to ship with a nonprivileged X server. Where Moblin goes, others will certainly follow.

Some details of how this work was done were posted to the xorg-devel list by Jesse Barnes at the beginning of July. According to Jesse, finishing out this multi-year job was "pretty easy." It seems that the pieces are in place now - at least for some graphics hardware - to the point that a few hours of work got the job done. It seems like an almost anti-climactic end to such a long-term challenge. But, of course, the work which has made this result possible has been ongoing for a long time.

The biggest piece is the kernel mode setting (KMS) code which was merged for 2.6.29. Prior to KMS, the X.org server was charged with finding the hardware and driving it directly from user space. Needless to say, this sort of access requires root privilege, since it can easily be used to compromise the system. KMS (and the associated graphical memory management code) turns graphical hardware into something closer to a normal device with a normal kernel driver - albeit a rather complex and specialized sort of "normal" device. The hardware manipulations requiring privilege have been isolated into a relatively small piece of kernel code; they are now separated from the rather larger body of code implementing the X protocol.

That means that X code accessing the hardware can now be run unprivileged as long as it has the ability to open the appropriate device file. The server must also have the ability to open related device files: input devices, the virtual console, and so on. But that is a problem that has been solved for years; the login process can easily change the ownership of those files so that an unprivileged server can access them but the world as a whole cannot.

What's left is some detail work. Some of the ioctl() calls for direct rendering are currently root-only; Jesse thinks that they can be made generally available in a safe way. There may be some small additions to the driver ABI to allow the final root-only operations to be pushed onto the kernel side. But that's about it.

Of course, user-mode X will currently only work with Intel chipsets, since those are the only ones with full KMS support at this time. Radeon drivers are acquiring that support quickly, though, and may be able to support no-root operation in the relatively near future. That leaves NVIDIA as the usual odd chipset out of the big three; the current Nouveau feature matrix suggests that it will be some time yet before the requisite features are available there.

It may also be a little while until we see no-root support in more general-purpose distributions. Moblin has a relatively narrow focus on Intel hardware, by virtue of the fact that it's still mostly Intel people who are doing the work. Distributors who need to make things work on whatever hardware happens to be present may approach a change of this magnitude with a bit more caution. Still, X without root is clearly in the future, and the near future at that.

Comments (16 posted)

A new way to truncate() files

July 15, 2009

This article was contributed by Goldwyn Rodrigues

Changes are happening in the way the virtual filesystem and virtual memory subsystems interact. One of the goals of this work is to close a race that exists in page_mkwrite() (called when a previously read-only page table entry (PTE) is about to become writable), namely to make sure that file blocks are properly zero-filled if a truncate operation increases the file size. As a part of improving page_mkwrite(), Jan Kara posted a set of patches; these patches, however, introduced a new lock to resolve the problem. Nick Piggin thinks the same result can be achieved with the existing page lock instead of a new lock. As a first step toward a resolution of the problem, he posted a set of patches to improve the truncate sequence. The new truncate sequence is simpler to understand, flexible in usage, and, most important of all, handles errors gracefully.

The truncate() and ftruncate() system calls are used to set a file to a specified size. If the file is larger than that size, it is truncated and the data beyond the new size is lost. If the file is smaller, it is extended and the region beyond the old end of the file is filled with zeros. The size argument cannot be greater than the maximum file size supported by the filesystem.
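These semantics can be demonstrated from user space (a minimal sketch using Python's os.ftruncate() on a temporary file; it illustrates only the system call behavior, not the kernel internals discussed below):

```python
import os
import tempfile

# Shrinking a file discards the data past the new size; growing it
# zero-fills the newly added region.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello world")   # file is now 11 bytes
    os.ftruncate(fd, 5)            # shrink: bytes 5..10 are lost
    os.ftruncate(fd, 8)            # grow: bytes 5..7 read as zeros
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 16))         # b'hello\x00\x00\x00'
finally:
    os.close(fd)
    os.remove(path)
```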

A user-space truncate() call is handled, inside the kernel, by do_sys_truncate(), which is responsible for weeding out all error cases (such as "the inode is a directory" or permission errors). It breaks any leases held on the file and calls do_truncate(). do_truncate(), in turn, creates a new attribute structure with the new length of the file and calls notify_change() with the dentry and the new attributes under inode->i_mutex. notify_change() calls the generic inode_setattr(), either explicitly or through the filesystem's implementation of setattr(). Then, inode_setattr() calls vmtruncate() to set the inode size and unmap the pages mapped beyond the new file size. After unmapping the pages, the associated filesystem's truncate() operation is called to free the disk blocks associated with the file.

According to Nick, this approach has problems:

Big problem with the previous calling sequence: the filesystem is not called until i_size has already changed. This means it is not allowed to fail the call, and also it does not know what the previous i_size was. Also, generic code calling vmtruncate to truncate allocated blocks in case of error had no good way to return a meaningful error (or, for example, atomically handle block deallocation).

Nick's new truncate sequence introduces a way to better communicate error conditions and consolidates the checks which most filesystems currently perform individually. The original intention was to add a new truncate() operation in struct inode_operations which would be called directly for a truncate operation in inode_setattr(). Christoph Hellwig disagreed with the call sequence, stating that the new truncate function should be called from notify_change(), and not from inode_setattr(), which is the default implementation of inode_operations.setattr. Nick felt that clearing ATTR_SIZE before calling the generic setattr is not unusual (discussed later), so he decided to introduce his changes with a flag called new_truncate in struct inode_operations, rather than adding a new truncate function altogether. The new_truncate flag indicates that the truncate() function in the inode operations handles the new format. Nick admits that introducing this variable into inode_operations is a nasty hack; however, it will be required until all filesystems transition to the new truncate sequence. Filesystem code which does not implement the new convention will automatically initialize new_truncate to zero, indicating that it has not transitioned yet.

The first patch in the series introduces new functions to facilitate the change. inode_newsize_ok() performs simple checks to ensure that the intended new file size is within the limits defined by the filesystem and that the file is not a swap file:

    int inode_newsize_ok(struct inode *inode, loff_t offset)

These checks are currently done by individual filesystems. Using this function results in cleanups in individual filesystem code.

The truncate_pagecache() function truncates the inode's pages and unmaps the pages in the range beyond the new file size:

    void truncate_pagecache(struct inode *inode, loff_t old, loff_t new);

truncate_pagecache() should ideally be called before the filesystem releases the data blocks associated with the inode. This way the page cache will always be in sync with the on-disk format and the filesystem will not have to deal with situations such as writepage() being called for a page that has had its underlying blocks deallocated.

The vmtruncate() function is consolidated for NUMA and non-NUMA architectures in mm/truncate.c. However, vmtruncate() is deprecated; the truncate_pagecache() and inode_newsize_ok() functions introduced in the first patch should be used instead.

The third patch is the main patch of the series, which uses the new truncate operation. It introduces simple_setsize(), which performs the equivalent of vmtruncate(). simple_setsize() is called by inode_setattr() when ATTR_SIZE is passed, so filesystems implementing their own truncate code in setattr() must clear ATTR_SIZE before calling the generic inode_setattr().

To follow the new truncate convention, individual filesystems must implement their own setsize() function, which performs the file size validation checks, truncates the page cache, and truncates the data blocks associated with the inode. Filesystems must not trim off blocks past i_size using vmtruncate(); instead, they must handle the truncate in the filesystem code using truncate_pagecache(). This creates a better opportunity to catch errors. The inode_operations.new_truncate and inode_operations.truncate fields will go away once all filesystems are converted.

To demonstrate the change, the final patch in the series modifies the ext2 filesystem to use the new truncate interface. The patch introduces ext2_setsize() to set the inode size of the file, truncate the page cache, and, finally, trim the data blocks on the filesystem. If ATTR_SIZE is set, ext2_setattr() calls ext2_setsize() to perform the truncate, then clears ATTR_SIZE so that inode_setattr() does not perform the operations again.

The new truncate patchset has gone through its fair share of review and is quite likely to be merged. It will, however, require the "nasty hack" until all filesystems have transitioned to the new way of truncating files, after which the hack will be removed. The patches are part of the improvements Nick wants to see in the VM layer. Based on the new truncate patches, Nick posted an RFC on how he would close a race condition in page_mkwrite() when a file is truncated beyond the current file size. Closing races is a good thing, so expect this work to proceed apace.

Comments (1 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 2.6.31-rc3
Thomas Gleixner 2.6.29.6-rt23

Architecture-specific

Development tools

Device drivers

Filesystems and block I/O

Memory management

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

News and Editorials

Fedora: (another) proposal for extended support

By Rebecca Sobol
July 15, 2009

Fedora is a fast moving project. New releases are on a six month schedule and each release is supported for only 13 months. Every so often the topic of extending that support window is raised. LWN covered a lengthy thread on the fedora-devel mailing list last October. Now a new proposal from Jeroen van Meeuwen has cropped up on the mailing list.

Fedora's cutting-edge desktop is attractive to many, even in corporate environments. For those corporations that run Red Hat Enterprise Linux (or a derivative thereof) on their servers, Fedora provides a more up-to-date, yet compatible, desktop. Many of these corporations are willing to update their desktops once a year, so, on the face of it, the thirteen-month support window seems like enough. One can run Fedora N for one year and have one month in which to choose to upgrade to Fedora N+1 or N+2 and remain supported. However, things happen. Every now and then you just can't upgrade during that one-month window, and that leaves you unsupported for as long as it takes to schedule that upgrade.

Having a slightly longer support window is attractive to many, so proposals keep cropping up, but it is hard to achieve in practice. Fedora Legacy was successful for a while, but eventually that project was abandoned. So one has to ask why another proposal would be successful now.

The Extended Life Cycle (ELC) wiki page lists some good reasons for the proposal, but is more vague on how it will be accomplished. The proposal targets Fedora 12 as the first ELC release and calls for an additional six months of security updates after EOS (End of Service), so F12 would receive security updates for a total of nineteen months. This is about half of the time one would run a RHEL (or derivative) distribution, keeping the desktop much fresher. However, the proposal also notes:

  • We do not guarantee binary compatibility with the versions of applications or libraries that were in the Fedora release before it became EOS [End of Support].
  • We do not guarantee a stable API or ABI to the applications and libraries that we provide security updates for.

Clearly those two points could create some problems. No one is suggesting that security fixes be backported, so some packages will break during those six months. If one of those packages happens to be Firefox or some other critical desktop component, the whole ELC effort falls apart. Of course, different businesses will consider different applications to be "critical".

There are other practical issues such as mirror space, CVS commit access, bugzilla maintenance, and more, which are listed on the wiki.

Kevin Kofler notes on the mailing list:

We'd just need some minimal infrastructure effort, one person willing to do the pushes (like you're doing for the supported releases) and everything else would be "as is", if somebody wants something fixed, they'll have to push the fix, if nobody cares, it won't be fixed. It isn't supported after all. And no QA, if it breaks, you get to keep the pieces. Again, it's unsupported, that means what it means. I still think it's better than not getting any security fixes at all.

Kevin Fenzi adds:

I think it is worse. It causes people to have an expectation that something will get security updates, and when it doesn't happen and they get compromised, they will not be very happy.

According to the Fedora Objectives: "Fedora is not interested in having a slow rate of change, but rather to be innovative. We do not offer a long-term release cycle because it diverts attention away from innovation." Clearly any sort of ELC proposal goes against these stated objectives.

Jesse Keating takes a look at how this proposal differs from Fedora Legacy:

First off, I think this is different from Fedora Legacy, or has potential to. Legacy had a few very key fail points. 1) it was opt in. Users had to know about it and actively enable it. 2) it was completely done outside of the Fedora infrastructure. 3) Fedora's popularity was very hit and miss, the type of people that would best use a Legacy like service were too burned to give any Red Hat related offering a shot. 4) RHEL4 (and its clones) were new enough for most of the people that would use this service, and thus they went that way.

However he also notes (among other points) that there needs to be some clarification of what vulnerabilities will get security updates. Clearly a local denial of service is a far different beast than a remote privilege escalation. Updates need to be all or nothing. It can't be up to the developers to decide what applications are critical to all users.

Fedora infrastructure continues to evolve and it could possibly be made to work for this proposal without too many major changes. This proposal is less ambitious than its predecessors, which is a point in its favor. It is also clear that this topic will continue to come up periodically until some solution is achieved. Whether it is this proposal or not remains to be seen.

Comments (10 posted)

New Releases

Yellow Dog Linux v6.2 DVDs Now Available

Fixstars has announced the availability of Yellow Dog Linux version 6.2 on DVD. "This release offers an updated kernel v2.6.29 for 64-bit systems, OpenOffice 3.0, Firefox 3.0.6 and IBM Cell SDK v3.1.0.1, as well as the next generation of ps3vram, which now boasts speeds up to 50% faster than in YDL 6.1 and is automatically enabled as swap."

Full Story (comments: none)

Distribution News

Fedora

New meeting logging facility available

Fedora developers Kevin Fenzi and Jon Stanley have integrated the MeetBot into zodbot to help log IRC meetings. "This plugin was developed by our friends over at Debian, who are using it to record their meetings as well. We would like all Fedora meetings to be recorded using this mechanism, such that there's one format for all of the logs."

Full Story (comments: none)

Fedora Board Recap 2009-07-09

Click below for a recap of the July 9, 2009 meeting of the Fedora Advisory Board. Topics include Sponsorship follow-up, Review of security notification plan, gnaughty issue, Fedora target, Domain licenses, and List monitors.

Full Story (comments: none)

Fedora 9 End of Life

Fedora 9 has reached its end of life for all updates.

Full Story (comments: none)

Slackware Linux

Slackware for ARM

Slackware has a new official port for the ARM architecture. ARMedslack has recently released the port of Slackware version 12.2.

Comments (none posted)

Ubuntu family

Announcing OpenJDK 6 Certification for Ubuntu 9.04 (jaunty)

The Ubuntu Java development team has announced completed certification of OpenJDK 6 for Ubuntu 9.04. "This certification means that the OpenJDK 6 package included with Ubuntu 9.04 now passes the rigorous testing of the Java SE Test Compatibility Kit (TCK) and is compatible with the Java(TM) SE 6 platform on the amd64 (x86_64) and i386 (ix86) architectures."

Full Story (comments: none)

Minutes from the Ubuntu Technical Board meeting

Click below for the minutes of the July 14, 2009 meeting of the Ubuntu Technical Board. Among several other topics, the board agreed to increase its membership. Nominations are open until July 28, 2009.

Full Story (comments: none)

Other distributions

Hardening CentOS (Bofh Hunter)

Jim Perrin looks at hardening CentOS. "I've started a page on the CentOS wiki (Hardening CentOS) that will very likely turn into its own section in the future. This page uses Steve Grubb's RHEL hardening guide, as well as the NSA's RHEL 5 security guide as the basis for locking down the operating system in a reasonably secure fashion. I do not claim it to be all-powerful, nor will everything there apply to everyone in all instances. Take from it what you need to. What is covered on the page is the basis for locking down a system with very minimal impact to distribution standards or package changes. I'll continue to add to this page, and break out other pages to encompass securing the distribution provided httpd, mysqld, etc as I get time."

Comments (none posted)

New Distributions

Jolicloud preview (Cimi's Official Blog)

Cimi has a weblog post about a new distribution called jolicloud. "Basically Jolicloud is a [derivative] of Ubuntu Netbook Remix with wide Prism usage across the desktop environment: the majority of "applications" you have seen in the screenshots are small packages which provide an independent Prism session on a specific website: for example, if you install the twitter application you will get a new icon inside your application list, that icon will start a new fullscreen Prism session for twitter.com. Common desktop applications are also included, like Firefox or VLC, but it is highly focused on web services."

Comments (none posted)

Torinux

Torinux is an Italian distribution from the city of Turin. Based on sidux (a Debian derivative), it aims to be an easy-to-use desktop featuring KDE 3.5.10. Torinux comes on an installable live CD.

Comments (none posted)

Distribution Newsletters

DistroWatch Weekly, Issue 311

The DistroWatch Weekly for July 13, 2009 is out. "In the news this week, Slackware finally adopts ARMedslack as the official port for the project, while Ubuntu founder Shuttleworth talks about Karmic Koala, the release scheduled for October this year. We also link to an interview with Jono Bacon, the project's Community Manager. Our feature this week takes a nostalgic look back at some great Linux distributions that failed to survive. Elsewhere in the free software world, Google has announced their own Linux based operating system for netbooks and the BSD Magazine survives some tough times to continue printing. Have a great Monday and the rest of the week!"

Comments (none posted)

Fedora Weekly News 184

The Fedora Weekly News for July 12, 2009 is out. "Here are a few highlights from this week's issue. This past week marked the end of life for Fedora 9, and the launch of a new logging tool to help facilitate reporting for Fedora IRC meetings. In news from the Fedora Planet, an overview of the development changes for Fedora 12, and several posts around Mono in light of Microsoft's recent Community Promise. In Ambassador news, coverage of recent Fedora release events in Vancouver, Washington, Malaysia and India. In Translation news, a new Fedora 11 Users' Guide is now available in Bosnian, changes in Transfix, and new members of the Fedora Localization Project. In Design news, details on a new Gallery test instance for development of in-process works by the Art Team. Also some new wallpapers, and more theming discussion around Fedora 12 'Constantine.' The issue rounds out with news from virtualization-related efforts, including news of more device support in virt-manager, announcement of a new list for discussion of "libguestfs/guestfish/virt-inspector discussion/development." These are but a sampling of this week's Fedora Weekly News -- we hope you enjoy it!"

Full Story (comments: none)

The Mint Newsletter - issue 88

This issue of the Mint Newsletter covers the release of Mint 7 KDE RC1, the June monthly stats, and more.

Comments (none posted)

OpenSUSE Weekly News/79

This issue of the OpenSUSE Weekly News covers the availability of openSUSE 11.2 Milestone 3, Sascha Manns : How to create an Userpage, Thomas Biege: SELinux on openSUSE 11.2, what will be?, openSUSE Forums: Top 3 Applications You Wish Existed in Linux, Linux.com/Rob Day: Howto for Kernelmodules Newbies, and more.

Comments (none posted)

Ubuntu Weekly Newsletter #150

The Ubuntu Weekly Newsletter for July 11, 2009 is out. "In this issue we cover: Ubuntu 6.06 LTS Desktop Edition reaches end-of-life, Community Council: Nominations, MOTU Council, Call for testing: KVM in Jaunty proposed, Ubuntu LoCo Systems admin lessons: Massachusetts LoCo, Ubuntu Forums Tutorial of the Week & Community Interview, The Ubuntu Museum, Karmic Wallpapers, Ubuntu Podcast #30, Ubuntu User Magazine, Ubuntu Netbook is released by Archos, Ubuntu Server Team Meeting Summary, and much, much more!"

Full Story (comments: none)

Interviews

Koala will be 'a definitive shift' for Ubuntu Linux (TechRadar)

TechRadar covers a Linux Format interview with Mark Shuttleworth. "MS: There's some extraordinary work that's been done [on Koala], mostly pioneered by the Intel/Moblin team, the X team, and the kernel team (kernel mode setting), so I think that's going to be a definitive shift for us. I'm really hopeful we get that in."

Comments (none posted)

Ubuntu's Jono Bacon: Open Source Code Priority Should Begin at Security, then Quality, then Performance

The Coverity weblog features an interview with Jono Bacon, Ubuntu community manager. "JB: Ubuntu is used in a variety of industries, from software to engineering to government to retail and more, and has become a general purpose Operating System that can offer competitive productivity and information management tools as well a wealth of specialized tools that are available for download. It's a definitive channel for software developers. Ubuntu has a solid desktop, a good server and mobile experience and an exceptionally large community that refines, extends and supports us. Ubuntu is one of the most popular Linux distributions available, and as such, is ideal for delivering applications."

Comments (none posted)

Distribution reviews

Distro Review: Fedora 11 (Adventures In Open Source)

Dan Lynch has posted a review of Fedora 11 on his weblog. "To sum up then, I think F11 is a good release and well worth a few days of anyone's time. I've never really felt comfortable with Fedora on my home desktop until now. I have everything set up as I need it and I'd be happy to stay here longer. The community has grown in strength, there's lots of help available out there. I think things are really looking up in the Fedora world. They have lots of innovative features and things that will no doubt end up in future releases of other distros. This is where they see themselves, the ground breakers or pioneers who explore new things on behalf of the rest of us. The developers have done a great job on Fedora 11 and I encourage you to take a look at it and let me know how it works for you."

Comments (none posted)

Review: Linux Mint 7 Is Glorious (OSnews)

OSnews has a review of Linux Mint version 7, "Gloria". "Linux Mint 7 is not only shiny and pretty, but well organized and easy to use. It has its slow moments while running on typical netbook hardware, but it still runs snappily for most netbook uses. It has some very viable advantages over Ubuntu in its ease of use and customized system applications, and I believe people who are new to Linux would find Mint much more desirable than Ubuntu in most circumstances. It could definitely use some power consumption optimization as well as some optimization to speed up what it still runs slow. As with most user-installed Linuxes, Mint doesn't exactly like the wireless card on the EeePC 1000 HE, and better wireless support for all distributions is a very major necessity to attract more users. All in all, I have a little crush on Linux Mint 7 and like it better than Ubuntu; I personally give it a 9/10. However, the way it runs on my netbook calls for a score nearer to 7/10, disappointingly. I do look forward to future releases and hope they may be more optimized for netbooks."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Uzbl: a browser following the UNIX philosophy

July 15, 2009

This article was contributed by Koen Vervloesem

Over time, a great deal of functionality has been added to web browsers—extensions, download managers, PDF handling, and the like. This has caused some to become dissatisfied, fondly remembering when a browser was just a browser. Over the last few months, a new project, Uzbl (pronounced "usable"), has come along to create a browser that follows the UNIX philosophy: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." The result is a refreshingly different browser, which has a minimal graphical interface, has minimal functionality and is controllable through external shell scripts with customizable key bindings.

Uzbl was born during a discussion about browsers in the Arch Linux user forum. At one point Dieter Plaetinck, a Belgian software developer and Arch Linux release engineer, voiced his opinion in the discussion:

I really like the unix philosophy, and I like simple storage backends which you can easily version/merge/synchronize. (eg plaintext). I was thinking about a browser we could write with these principles.

Other participants were excited by the idea and asked if he was really planning to write such a program. After an initial hesitation ("Maybe, if I find the time") and a promising experiment ("well I have been playing a bit with gtk and webkit. you may see some code online soon.. or maybe not"), Plaetinck released the first code on April 21.

In the next couple of days, other Arch Linux users jumped on the bandwagon. They were all dissatisfied with current web browsers, because these are frequent violators of the Unix philosophy. Browsers integrate too many things such as management tools for bookmarks, history and downloads, decreasing the options for the user to handle them the way they want. Current browsers also store data in too-fancy file formats, such as XML, RDF and SQLite, making the data hard to reuse in other scripts. Plaetinck and his Arch fellows decided to change this, and he explains in the FAQ why he invented another browser:

We did try a lot of browsers, and we do not suffer NIH. We believe that the approach taken by way too many browsers is wrong. We do not want browsers that try to do everything, instead we prefer a system where different applications work together, which gives plenty of advantages. We also like open source. We take a lot of things from other projects and we also try to contribute to other projects.

Uzbl follows the UNIX philosophy in the following aspects:

  • It has a minimal graphical interface. Users only see what they need: the webpage and little else.
  • It has minimal functionality. Tasks that are not central to browsing are not in Uzbl. The user has to handle things like bookmarks, URI changing, history, downloads, etc. through external shell scripts.
  • It is controllable through various means: the keyboard, stdin, pipes and socket files, etc.
  • It has a highly customizable keyboard-centric input with support for modes, modkeys, variables and the like. As a consequence, the interface can be tweaked to be vim-like, emacs-like or any other scheme the user wishes.
  • All data and configuration files are written in plain text.
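The FIFO control channel from the list above can be demonstrated without a running browser. This sketch stands in a named pipe of its own for the one Uzbl would create (the real FIFO's path, and any command beyond this `uri` example, are assumptions here, not Uzbl's documented interface):

```shell
# Simulate Uzbl's FIFO control channel with a pipe of our own; with a
# real instance you would write to the FIFO Uzbl itself creates.
fifo=$(mktemp -u)                        # pick an unused temp path
mkfifo "$fifo"
echo "uri http://lwn.net/" > "$fifo" &   # what a controlling script would send
cmd=$(cat "$fifo")                       # we read it back; Uzbl would act on it
wait
rm -f "$fifo"
echo "$cmd"                              # prints: uri http://lwn.net/
```

Because the control channel is just text on a pipe, any language that can open a file can drive the browser.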

Uzbl uses WebKit as the browser engine. This means that the user doesn't have to worry about Flash, JavaScript and all modern web standards: WebKit takes care of all this support (including Flash if the user has installed a Flash plugin such as Gnash or Adobe's) and even has a 100/100 score on the Acid3 test page.

Applications that follow the UNIX philosophy may require a fair amount of configuration in order to get them to a usable state; if too much is required, it will hurt the application's adoption. Your author installed Uzbl to put this to the test. There is no official release yet, but an alpha version can be downloaded for many distributions, and Arch Linux (which is the main development platform for Uzbl) has a PKGBUILD file. The adventurous user can pull the code from git and build Uzbl from source.

A freshly installed Uzbl includes fairly limited settings and is actually rather unusable. There is no status bar, the program has no key bindings, no handlers for the history or downloads, no buttons for navigation, and so on. The user can't even type anything into forms. However, there's a much more usable configuration file in the examples directory. Copy it to $XDG_CONFIG_HOME/uzbl/config (which expands to ~/.config/uzbl on most systems) and start Uzbl again, after which it really becomes usable, with a status bar and vim-like key bindings. Just as in vim, the user starts in command mode. Typing 'b' goes back, 'ZZ' quits the browser, etc. Typing something in a form can be done by first switching to insert mode ('i').
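The copy step described above amounts to something like the following (the `examples/config` path assumes you are sitting in Uzbl's source tree; packaged installs put the examples elsewhere):

```shell
# Compute the XDG config directory mentioned in the article and drop the
# example config there. The examples/config location is an assumption --
# check where your distribution installs Uzbl's example files.
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/uzbl"
mkdir -p "$conf_dir"
if [ -f examples/config ]; then
    cp examples/config "$conf_dir/config"
fi
```

The `${XDG_CONFIG_HOME:-$HOME/.config}` expansion is why the destination "expands to ~/.config/uzbl on most systems": the variable is usually unset, so the default applies.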

This UNIX-like rethinking of the interface has another strange consequence: Uzbl doesn't have the concept of a location bar. All changes to the currently loaded URI happen outside of the browser. So the user can't edit the current URI, can't load a URI from bookmarks, and so on. Uzbl comes with some example scripts that use dmenu to pick a URI from a history or bookmarks file. Just copy the scripts subdirectory in the examples directory to ~/.config/uzbl and read the example config file for the key bindings to activate the scripts. Users are even able to bind JavaScript code to a key.
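Such a dmenu picker can be as small as the sketch below. It is written in the spirit of Uzbl's example scripts, not copied from them: the bookmarks path and the `$UZBL_FIFO` variable are assumptions standing in for however your key binding hands the FIFO path to the script.

```shell
# Write a hypothetical load_from_bookmarks.sh: dmenu shows one URI per
# line from a plain-text bookmarks file, and the chosen line is sent to
# Uzbl as a "uri" command over its FIFO. $UZBL_FIFO is an assumed name.
cat > /tmp/load_from_bookmarks.sh <<'EOF'
#!/bin/sh
bookmarks="${XDG_DATA_HOME:-$HOME/.local/share}/uzbl/bookmarks"
uri=$(dmenu < "$bookmarks")                      # user picks one line
[ -n "$uri" ] && echo "uri $uri" > "$UZBL_FIFO"
EOF
chmod +x /tmp/load_from_bookmarks.sh
sh -n /tmp/load_from_bookmarks.sh    # syntax check only; dmenu is not run here
```

Because the bookmarks file is plain text, the same file can be grepped, versioned, or synchronized with ordinary tools—exactly the point of the design.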

While all other mainstream browsers support tabs, Uzbl doesn't like the concept and sticks to the "one page per instance" principle. Each instance of Uzbl is in essence just a small wrapper around WebKit. Although this will be a showstopper for a lot of people, there is some truth in the philosophy that multiple instances should be handled outside of the browser. In their FAQ, the developers point out that tabbing is something that can be handled by many window managers, such as xmonad and wmii. Another solution is to embed Uzbl instances in other Gtk applications, which is what uzbl_tabbed.py, a Python script that made it into Uzbl's example scripts, does. And because Uzbl is focused on one page per instance, everyone can write their own custom script for managing multiple instances.

An active community

Although Uzbl is only a couple of months old, it already has a fairly active community, with a mailing list and an IRC channel (#uzbl on the Freenode network). The website also has a lot of information for users or developers that want to contribute. The wiki has a lot of community-provided information, such as configuration files, scripts for various tasks and screenshots. Browsing through all this, it becomes clear how powerful the UNIX philosophy is for a browser. For example, writing a script to send a link of the current web page to Twitter or to bookmark it to Delicious is dead simple.
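"Dead simple" is not much of an exaggeration: a minimal bookmarking handler reduces to appending the current URI to a text file. In this sketch the URI is passed in explicitly and a temp file stands in for the real bookmarks file, since how Uzbl hands the URI to a handler (argument position or environment) varies with configuration and is an assumption here:

```shell
# Minimal "bookmark this page" handler: append the URI Uzbl hands us to
# a plain-text file. A temp file keeps the sketch self-contained; a real
# setup would append to something like ~/.local/share/uzbl/bookmarks.
bookmarks=$(mktemp)
add_bookmark() { printf '%s\n' "$1" >> "$bookmarks"; }
add_bookmark "http://lwn.net/"
cat "$bookmarks"                         # prints: http://lwn.net/
```

Posting the same URI to Twitter or Delicious is just a matter of swapping the `printf` for a `curl` call against the service's API.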

According to Plaetinck, the project has 6 main developers and 28 contributors. Most of the contributors work on the core, programming in C. Surprisingly, Plaetinck himself is not that much of a coder:

Right now I hardly code anything myself for Uzbl. I just merge in other people's code, ponder a lot, and lead the discussions. This project leader role fits me better then a "code contributor" role. This is also why the initial announcement on the Arch Linux forums contained a lot of thought, but the code I showed was only a slightly modified version of the example application from the WebKit Gtk+ website.

There were lots of contributions right from the start. In April there were 28 commits per day, and every evening when Plaetinck came home from work he did nothing but merge code. The project also seems to attract a diverse group of people:

I've seen quite some Ubuntu and Debian people, many Arch and Gentoo users, several people who use BSD and a few who swear by Plan 9. Remarkably, I haven't seen anyone using RPM-based distributions like Fedora, Red Hat or openSUSE.

As interesting as the project may be, there are still some issues, and these are grave enough that Plaetinck doesn't eat his own dog food yet:

I'm not using Uzbl yet as my main browser. The primary reason is that cookies don't work yet as it should. For example, at the moment we spawn a Python process for each HTTP request to check if we have to send a cookie. This makes the browser slow if the cookie handler is enabled. But we're working on a daemonized cookie script we can communicate with through a socket.

In the longer term, the Uzbl developers have a lot of plans:

We have a proof of concept browser now, but the next things we want will take a lot of time and work. We are thinking about better configuration handling, but this can be done in many different ways. Some people want a simple FIFO/socket interface, others prefer a file system interface like wmii has. And other users even want a full blown scripting language like JavaScript for maximum flexibility. We are thinking about what we want and how to implement it.

Plaetinck says he also wants, in the long term, to split the browser into a rendering component and an interface component to help reusability. The key bindings should be improved too: at the moment users cannot bind the arrow keys, function keys, caps lock or mouse buttons.

For whom is Uzbl usable?

For users who find themselves regularly trying to optimize their "digital workspace", Uzbl is definitely a browser to consider, as Plaetinck explains: "I noticed that the best tools to solve my problems are the ones who are merely building blocks that allow me to implement my own methods." After replacing bloated programs with elegant, more UNIX-like ones, Plaetinck found that pretty much all of the software for people like him was already out there: window managers like dwm and wmii from the suckless project, editors such as Vim, program launchers such as Bashrun, media players such as MPlayer and MPD, mail clients such as Mutt or Claws Mail, and so on. But until now, there was no web browser in the same spirit. Even "lightweight" browsers such as Midori or Lynx still do too much themselves and don't integrate well with their environment.

Your author is sympathetic to the idea of a browser following the UNIX philosophy, although that may be swimming against the stream. The core is simple and much less crash-prone than a complex browser with all kinds of management tools. And the developers are right to aim for a clean browser with all management tools outside. This makes it possible to adapt the browser to your own workflow and get a significant efficiency boost.

The end result is a lot more "usable" browsing experience. However, this comes with a significant price: a lot of custom scripting. This suggests that Uzbl will only appeal to people who have a very specific workflow, who are not satisfied with the behavior of current browsers, who like to memorize key bindings and who are not afraid of spending hours configuring their browser. Uzbl users should be confident to work with tools such as grep, awk, dmenu, zenity, wget and gnupg to build their own browser behavior. Users who want a browser that does everything and does it out-of-the-box, or who are addicted to a lot of Firefox extensions, will not like Uzbl. But then, looking at the speed of development of Uzbl (it's not even three months old), your author wouldn't be surprised if in one year the browser has an extensive set of helper scripts that can do many of the tasks that Firefox extensions do.

Comments (11 posted)

System Applications

Audio Projects

Rivendell 1.5.0 released

Version 1.5.0 of Rivendell, a radio station automation system, has been announced. "Changes: New RLM Plug-in. A new plug-in for the Innovonics model 713 RDS encoder has been added. Podcast System Enhancements. It is now possible to post new episodes via the remote web interface."

Full Story (comments: none)

Database Software

PostgreSQL Weekly News

The July 12, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

Web Site Development

Hatta 1.3.2 wiki engine released

Version 1.3.2 of the Hatta wiki engine has been announced. "Hatta is a small wiki engine designed to run locally or via WSGI inside a directory in a Mercurial repository. All the pages are normal text or binary (for images and such) files, also editable from outside of the wiki -- the page history is taken from the repository."

Full Story (comments: none)

Plone 3.3 release candidate 4 released

Version 3.3 release candidate 4 of the Plone web content management system has been announced. "Version 3.3 is a feature release; it adds new features which will make it easier for integrators and developer to modify and extend Plone."

Full Story (comments: none)

Desktop Applications

Audio Applications

Ardour 2.8.1 released

Version 2.8.1 of Ardour, a multi-track audio workstation, has been announced. "Although this is primarily a bug-fix release, the bugs that it fixes are important enough that we hope all users, especially those on OS X, will choose to upgrade. Crashes when deleting AudioUnit plugins, and a very important potential data loss issue with session cleanup are now fixed. 2.8.1 also has a builtin "Chat" option to connect (via your web browser) to our IRC channel." The Ardour site also has a new article on working with Ambisonics (3D Audio) using Ardour.

Comments (none posted)

Desktop Environments

GNOME 2.27.4 released

Version 2.27.4 of the GNOME desktop environment has been announced. "This is already the 4th development release towards our 2.28 release that will happen in October 2009; this release comes just after the first joint GNOME / KDE conference, which was certainly great fun for all present people."

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at gnomefiles.org.

Comments (none posted)

Growth Metrics for KDE Contributors (KDEDot)

KDE.News documents the growth metrics of KDE contributors. "In 1996 when KDE was first announced, it had only a handful of developers and the project could manage the source code without using a revision control system. More and more developers have begun to contribute to KDE over the years, and while there has been some attrition, the total number of active developers working on KDE has been steadily growing. In order to get a pulse from the current developer community, Simon St. James and Arthur Schiwon produced and plotted two basic metrics that show the continued growth within the KDE community."

Comments (none posted)

KDE 4.3.0 RC2 released (KDEDot)

Version 4.3.0 RC2 of KDE has been announced. "The KDE Team has released another release candidate for KDE called "Canteras". It contains only few changes compared to RC1 which suggests that the 4.3 is stabilizing and shaping up well for the 4.3.0 release on 28th of July. KDE 4.3.0 will be followed up by a series of monthly bugfix and translation updates. Testers are asked to report bugs in this release so 4.3.0 becomes a release as smooth as possible."

Comments (none posted)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at kde-apps.org.

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Electronics

gerbv 2.3 released

Version 2.3 of gerbv, a viewer for Gerber RS-274X files, Excellon drill files, and CSV pick-and-place files, has been announced on OpenCollector.org; it includes bug fixes and other improvements.

Comments (none posted)

Games

Freecell Solver 2.34.0 released

Version 2.34.0 of Freecell Solver has been announced; it includes numerous improvements and some bug fixes. "Freecell Solver is an open-source (MIT/X11 Licensed) ANSI C-based library and some standalone command-line tools for solving Freecell and similar Solitaire variants such as Baker's Game, Seahaven Towers and Eight Off as well as Simple Simon."

Full Story (comments: none)

Graphics

Inkscape 0.47 pre 1 is out

Version 0.47 pre 1 of Inkscape, a scalable vector graphics (SVG) editor, is available. "Slowly, but surely we are getting there. First prerelease of v0.47 is out. Please test and report any issues you run into. The plan is to gain 500 points for closing bugs. At the moment of publishing this news we are at 304 points."

Comments (none posted)

Multimedia

Moovida Media Center 1.0.5 released

Version 1.0.5 of Moovida Media Center has been announced; it includes a number of bug fixes. "Moovida, formerly known as Elisa, is a cross-platform and open-source Media Center written in Python. It uses GStreamer for media playback and pigment to create an appealing and intuitive user interface."

Full Story (comments: none)

Music Applications

Denemo 0.8.6 released - Music Notation Editor

Version 0.8.6 of Denemo, a music notation editor, has been announced. "Notable new features are - Downloading new commands and edit scripts between releases - MIDI out, Tempo and Volume changes and insertion of arbitrary MIDI messages at any point in the music. - Edit lyrics in text editor and see the syllable placement as you type. Multiple verses per voice allowed. - Pasting LilyPond text directly into the Denemo window. By pasting the actual music text a Denemo editable score can be created from almost any LilyPond file. - With JACK, the playback starts from the cursor or plays back the selection if there is one."

Full Story (comments: none)

Jackbeat 0.7.2

Version 0.7.2 of Jackbeat, a minimalistic sequencer, has been announced. "- Track solo controls have been added, OSC bindings included. - Minor file access and user interface bugs have been fixed. - Jackbeat now runs on Windows in addition to Mac OS X and Linux."

Full Story (comments: none)

xwax 0.5 released

Version 0.5 of xwax has been announced. "xwax is open-source vinyl emulation software for Linux. It allows DJs and turntablists to playback digital audio files (MP3, Ogg Vorbis, FLAC, AAC and more), controlled using a normal pair of turntables via timecoded vinyls."

Full Story (comments: none)

Office Applications

HylaFAX 6.0.3/4.4.5/4.3.8 releases

Three new releases of HylaFAX, a FAX modem control package, have been announced. "The HylaFAX development team is pleased to announce the maintenance releases of HylaFAX 6.0.3, 4.4.5 and 4.3.8."

Comments (none posted)

Science

Sage 4.1 released

Version 4.1 of Sage has been announced; it includes new capabilities, optimizations and bug fixes. "Sage is a free open-source mathematics software system licensed under the GPL. It combines the power of many existing open-source packages into a common Python-based interface. Mission: Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab."

Comments (none posted)

Languages and Tools

Caml

Caml Weekly News

The July 14, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)

Python

Announcing freebase-python 1.0

Version 1.0 of freebase-python has been announced. "freebase-python is a library for accessing the open data repository stored at http://freebase.com. Freebase is a huge, user-edited database of over 100 million facts about over 5 million topics, all under the Creative Commons CC-BY license. The freebase-python library 1.0 is now available! It introduces a new syntax for accessing the freebase api, it updates the available commands to reflect the entire web api, and it introduces some cool schema manipulation utilities. It's backwards compatible with previous versions of the library."

Full Story (comments: none)

Unladen Swallow 2009Q2 released

The Unladen Swallow 2009Q2 release is out. Unladen Swallow is an accelerated Python implementation; LWN covered it in May. It now uses LLVM to compile Python functions to machine code, and appears to be coming along nicely, even if the developers do not yet recommend its use in production situations. "Unladen Swallow 2009Q2 passes the tests for all the third-party tools and libraries listed on the Testing page. Significantly for many projects, this includes compatibility with Twisted, Django, NumPy and Swig."

Comments (34 posted)

Python-URL! - weekly Python news and links

The July 15, 2009 edition of the Python-URL! is online with a new collection of Python article links.

Full Story (comments: none)

IDEs

Padre 0.39 released

Version 0.39 of Padre, an IDE for Perl, has been announced. This release includes new capabilities, bug fixes and code cleanup.

Full Story (comments: none)

Pydev 1.4.7 Released

Version 1.4.7 of Pydev, an Eclipse plugin for Python, Jython and IronPython, has been announced. This release adds some new features and includes numerous bug fixes.

Full Story (comments: none)

Test Suites

oejskit 0.8.5 released

Version 0.8.5 of oejskit, a JavaScript in-browser testing and utility kit, has been released. "jskit contains infrastructure and in particular a py.test plugin to enable running unit tests for JavaScript code inside browsers. The plugin requires py.test 1.0. The approach also makes it possible to write integration tests such that the JavaScript code is tested against server-side Python code mocked as necessary. Any server-side framework that can already be exposed through WSGI can play."

Full Story (comments: none)

Miscellaneous

grc 1.3 released

Version 1.3 of grc has been announced. "grc is a colouriser configured by regular expressions, including a simple command line wrapper for some commonly used unix commands. Changes in this version: - CTRL+C propagation fixed - single hyphen in unified diff files is colourised - search for configuration files in current working directory properly".
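For readers unfamiliar with the regex-colouriser idea, the following is a minimal conceptual sketch of how such a tool can work; it is a hypothetical illustration written for this article, not grc's actual code (grc itself is driven by per-command configuration files and wraps the command's output stream):

```python
import re

# A minimal palette of ANSI SGR escape codes (illustrative subset only).
COLOURS = {"red": "\033[31m", "green": "\033[32m"}
RESET = "\033[0m"

def colourise(line, rules):
    """Wrap each match of each regex in the corresponding ANSI colour code.

    `rules` is a list of (compiled_regex, colour_name) pairs, applied in order,
    mimicking how a colouriser maps configured patterns to colours.
    """
    for regex, colour in rules:
        line = regex.sub(lambda m: COLOURS[colour] + m.group(0) + RESET, line)
    return line

# Example rule set: highlight "error" in red and "ok" in green.
rules = [(re.compile(r"\berror\b"), "red"),
         (re.compile(r"\bok\b"), "green")]
print(colourise("status: ok, 1 error", rules))
```

A real colouriser would read rules from configuration files and filter a command's output line by line; the core transformation, however, is just regex substitution with terminal escape codes as shown here.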

Full Story (comments: none)

Page editor: Forrest Cook

Linux in the news

Recommended Reading

Cooperation During the Gran Canaria Desktop Summit (KDE.News)

KDE.news takes a look at the cooperation between KDE and GNOME developers at the Gran Canaria Desktop Summit. "At the Gran Canaria Desktop Summit much cross-desktop work has been done. The days we have are being used for the Cross Desktop Tracks and during the talks there are KDE and Gnome developers mingling everywhere. Cross desktop sessions included bug triage, metadata sharing, instant messaging and sharing personal data cross-desktop with CouchDB."

Comments (3 posted)

The Open-PC Project Announced at GCDS 09 (KDE.News)

The Open-PC project was announced at the recently concluded Gran Canaria Desktop Summit, as reported by KDE.News. Open-PC is an effort to produce a free-software-based PC, with hardware supported by free drivers, as well as applications that are well tested to work on the device. "The project was initiated in response to the lack of quality in the Free Software-based hardware solutions currently on the market. As many reviewers and end-users have stated, the pre-installed software used by hardware vendors generated a bad image for Free Software with potentially interested end-users. Much of the software was buggy and not widely tested and device drivers were often unstable, non-free or not available at all."

Comments (40 posted)

Trade Shows and Conferences

Qt Labs America and other Akademy talks and sessions (KDEDot)

KDE.News continues its coverage of the 2009 Akademy. "One of the big announcements was for Qt Labs America. The team at OpenBossa are working with Qt Software to promote Qt development in Latin America, starting with Brazil. They want to find students to work on KDE as a means to learning development, similar to the methods tried by the university in Toulouse. They will sponsor KDE developer sprints, and are looking for KDE teams to invite out to Brazil."

Comments (none posted)

Vibrant Community Propels KDE Forward at Akademy 2009 (KDEDot)

KDE.News has a report from the Akademy side of the Gran Canaria Desktop Summit. "The KDE development platform gains support by Maemo's announcement to include Qt as supported platform in its next release. The semantic desktop gained traction with wider adoption of its underlying infrastructure. The Plasma team showed their upcoming netbook interface; a first version is expected to be available in the first quarter of next year. Cornelius Schumacher reports that KDE e.V., the organization behind KDE, is healthily growing and new projects to grow KDE's developer base and industry adoption are being spawned. A shift to next generation code hosting using the version control system git and its web interface Gitorious was announced, creating a way for more people to contribute to KDE in the future."

Comments (none posted)

The SCO Problem

SCO Files Notice of Cure Amounts Re Leases and Executory Contracts (Groklaw)

Groklaw picks apart a lengthy SCO bankruptcy filing. "It is to me one of the most fascinating documents SCO has ever filed. It presents a picture of SCO's business that is very different from what they have presented to the courts. One fascinating thing is that it seems it was possible to get UNIX System V after 1995, despite SCO testimony at trial in SCO v. Novell that after that time period you could only get UNIX by licensing UnixWare. Forget that you don't think this sales plan will ever happen. It's an opportunity to look at the innards of SCO's business. I went through every page, looking for what new customers, or any updating customers, licensed after 1995. I was curious to see whether UnixWare took off and UNIX drifted down or suddenly stopped after 1995. I had a theory that perhaps SCO didn't want to sell UNIX after that, so as to avoid paying Novell royalties. But what I found instead surprised me greatly."

Comments (7 posted)

Companies

Behold, The Googlification Continues - Or Does It? (Linux Journal)

Linux Journal looks into Google's recent announcement of Google Chrome OS. "Everyone thought the biggest news to come out of the Googleplex yesterday was the announcement that four of the company's most popular offerings — Gmail, Google Docs, Google Calendar, and Google Talk — have left their long-held beta status behind, though the change appears to be mostly cosmetic, given the services' well-established, if sometimes momentarily questioned, reliability. That news was, however, merely a foretaste of what was to come, as late last night the search giant revealed that it has even greater ambitions: to revolutionize the operating system."

Comments (8 posted)

Linux Adoption

USPS goes open-source with tracking system (GCN)

GCN reports on a move to SUSE Linux by a division of the US Postal Service. "Postal Service information technology officials have upgraded the 15-year-old, mainframe-based system to handle more transactions and lower the cost of operating the system. The work to upgrade PTS is part of a larger plan to standardize on the open-source and less expensive Linux operating system, said John Byrne, manager of application development and head of USPS’ Integrated Business Solutions Centers. The service is moving 1,300 Sun Solaris midrange servers to a Hewlett-Packard Linux environment."

Comments (35 posted)

Interviews

File System Evangelist and Thought Leader: An Interview with Valerie Aurora (Linux Magazine)

Linux Magazine has an interview with file systems hacker and LWN guest author Valerie Aurora. In it she discusses some of the topics she has written about here (chunkfs, union file systems) as well as providing her thoughts on btrfs vs. ZFS and distributed file systems. "All of this seemed pointless, though - how can Linux stay competitive in file systems without Linux developers being paid to work on file systems full-time? So, with Zach Brown and Arjan, I cobbled together the first Linux File Systems Workshop out of shoestring and duct tape, hoping that if the existing Linux file systems developers could just talk to each other, we could come up with a way to improve funding for Linux file systems development. I don’t know that it helped, but if it did, I consider this to be my most important contribution to Linux file systems." She has a bit more about the interview on her blog: "And if you think the title is a little overblown - it describes me as both 'evangelist' and 'thought leader' - you should have seen it before I begged Jeff to tone it down a bit. :) Other than the overly complimentary introduction, I think he's done a great job as an interviewer."

Comments (9 posted)

Shuttleworth about GNOME 3.0 (derStandard)

derStandard.at interviews Mark Shuttleworth. "And the really big news here is that we've been having very good discussions with the Debian release team. So the Debian release team has indicated that they are very open - not about a release date but a freeze date. That freeze date would be the time where we sit around and look at all the major components and decide what the major versions would be that we collaborate around. There is no pressure that we have to agree on everything, but just actually having the conversation is useful for any upstreams who care about this information."

Comments (none posted)

Resources

Keeping In Touch: A Guide To Linux Audio Comm Channels (Linux Journal)

Dave Phillips looks at ways to keep in touch with audio development. "Recently I asked readers for suggestions regarding Linux audio topics they'd like to read about in my articles. One response suggested a survey of the various Internet communications channels for Linux-based musicians. I liked the idea, so I considered my traditionally preferred channels, searched for and found interesting new connections, and wrote this guide to lead you on a tour of notable communications channels focused on Linux sound and music topics."

Comments (none posted)

Reviews

VirtualBox 3.0: An easy way to mix and match operating systems (Computerworld)

Computerworld reviews Sun's VirtualBox 3.0. "What is it? VirtualBox is an open-source virtualization program which lets you run guest operating systems with your native desktop operating system. For instance, if you need Windows to run Quicken, but prefer Linux for all your other work, VirtualBox enables you to bring up Windows and Quicken without leaving your Linux desktop. What you get is an adjustable window containing the guest operating system floating on the host system. So, for example, you could have a Windows XP guest instance entirely hiding its Linux host system."

Comments (14 posted)

Page editor: Forrest Cook

Announcements

Non-Commercial announcements

The 2008 GNOME annual report

The GNOME Foundation has put out a glossy, 35-page annual report, available as a 19MB PDF file. There is a lot of information on what has been going on in the GNOME community, as well as budgetary details.

Comments (none posted)

The University of São Paulo to support Openmoko?

It would seem that Jon 'maddog' Hall has gotten the University of São Paulo interested in supporting the ongoing development of the Openmoko phone platform. Among other things, the university has a fancy surface-mount prototyping facility and a fair amount of telecommunications experience to offer. "Dr. Zuffo has discussed the Openmoko project with the Minister of Telecommunications of Brazil, and the Minister is very enthusiastic about the concept. Having the support of the government of the twelfth largest economy behind the project might really help us with various negotiations with vendors."

Full Story (comments: 1)

Commercial announcements

Ingres makes MySQL transition a breeze

Ingres has announced EasyIngres. "Ingres Corporation, the leading open source database company, announced the creation of EasyIngres, an Ingres developer community product that helps PHP developers easily create applications using Ingres and simplifies the transition from MySQL to Ingres. This development kit comes at a critical time when MySQL developers are on stand-by waiting to see the outcome of Oracle's purchase of Sun. EasyIngres allows those developers the opportunity to build on their existing knowledge of PHP and add a new set of mission-critical database development skills to their resumes."

Full Story (comments: none)

New release of Mozilla Lightning and SOGo

Inverse has announced new releases of Mozilla Lightning and SOGo. "SOGo provides a rich AJAX-based Web interface and supports multiple native clients through the use of standard protocols such as CalDAV, CardDAV and GroupDAV. It features a very tight integration with Mozilla Thunderbird and Lightning and enables mobile device synchronization through the use of the Funambol middleware."

Full Story (comments: none)

MontaVista Achieves Ultra-Fast One Second Linux Boot Time in Embedded Industrial Applications

MontaVista Software has announced the achievement of ultra-fast boot times on MontaVista Linux for embedded industrial applications. "At the Virtual Freescale Technology Forum this week, MontaVista will be demonstrating a sample dashboard application fully operational in one second from cold boot, achieving a level of previously unmatched performance on any embedded Linux."

Comments (16 posted)

MontaVista celebrates 10 years of embedded Linux

MontaVista has announced its 10 year anniversary. "Founded in 1999 with a focus on delivering Linux for the embedded market, MontaVista has remained committed to furthering the development and innovation of commercialized embedded Linux through enabling a broad range of architectures, delivering commercial quality, and expanding the tools available for embedded developers. As a result of this focus and commitment, there are over 60 million devices in the market powered by MontaVista Linux today."

Full Story (comments: none)

New Books

Resources

FSFE Newsletter

The June, 2009 edition of the FSFE Newsletter is online with the latest Free Software Foundation Europe news. Topics include: "FSFE participates at "LinuxTag 2009", Berlin, 24-27 June, Social event and presentation of Spanish team, Miraflores de la Sierra, Spain, 20 June, The account from the first Fellowship representative at the GA, Two year Executive Summary (2007-2009), Fellowship meeting in Berlin, 11 June and It's time for the community to take charge of its brand."

Full Story (comments: none)

"International Free and Open Source Software Law Review" launched

A new, freely available journal covering international free software legal issues has produced its first issue. The "International Free and Open Source Software Law Review" (IFOSSLR) is a peer-reviewed, twice-yearly publication looking at the intersection of free software and the law. The journal is an outgrowth of the European Legal Network, established by the Free Software Foundation Europe (FSFE), and is supported by the Mozilla and NLNet foundations. From the announcement: "Free and Open Source Software has increasingly come to challenge traditional concepts of intellectual property and collaboration by allowing every user to use, study, share and improve code, facilitating the creation of elegant and effective software that now lies at the heart of the mainstream technology industry. IFOSSLR aims to foster increased understanding and promote best practice for all parties engaging with this approach to licensing." (Thanks to Armijn Hemel)

Comments (4 posted)

Linux Foundation Newsletter

The July, 2009 edition of the Linux Foundation Newsletter has been published. "In this month's Linux Foundation newsletter: * Linux Foundation Appoints Executive to Support Initiatives in Europe * Japan Linux Symposium: Early Registration, Linus Torvalds Keynote * LinuxCon Program and Event Details Take Shape * LinuxCon Speakers Interviewed * Training Courses Set for LinuxCon * Linux Foundation in the News * From the Director".

Full Story (comments: none)

State of Text Rendering

Behdad Esfahbod recently presented a talk on the State of Text Rendering at the Gran Canaria Desktop Summit. The presentation slides [PDF] are also available. "Text is the primary means of communication in computers, and is bound to be so for the decades to come. With the widespread adoption of Unicode as the canonical character set for representing text a whole new domain has been opened up in a desktop system software design. Gone are the days that one would need to input, render, print, search, spell-check, ... one language at a time. The whole concept of internationalization (i18n) on which Unicode is based is "all languages, all the time"." (Thanks to Nicolas Mailhot).

Comments (19 posted)

Education and Certification

LPI develops partner program for schools in Portugal

The Linux Professional Institute has announced a partner program for Portuguese schools. "The ICT Academies program is part of the "Technological Plan for Education" promoted by the Portuguese Ministry of Education. As part of this initiative, LPI-Portugal will be certifying 10 instructors in 5 secondary schools, to provide Linux education and training towards the Linux Professional Institute Certification (LPIC) through LPI's Approved Academic Program (LPI-AAP). The ICT Academy program is expected to grow to include 30 secondary schools in Portugal in 2010."

Full Story (comments: none)

LPI announces new affiliate in South Africa

The Linux Professional Institute has announced its latest affiliate, ICDL South Africa. "ICDL South Africa is a non-profit organization established by the Computer Society of South Africa to promote and administer the International Computer Driving License (ICDL). The ICDL is administered by the ECDL Foundation, a non-profit organization established by the Council of European Professional Informatics Societies (CEPIS)."

Full Story (comments: none)

Upcoming Events

BayPIGgies at OSCON

A BayPIGgies meeting will be held at OSCON. "The July BayPIGgies meeting will be held at OSCON in the San Jose Convention Center as one of the BoF (Birds of a Feather) sessions from 8pm to 9:30pm Thursday July 23. Everyone is welcome: you do NOT need to be an OSCON member to attend a BoF. Wesley Chun will have a newbie-oriented "What is Python?" BoF from 7-8pm in the same room as BayPIGgies".

Full Story (comments: none)

Announcing EuroBSDcon 2009

EuroBSDcon 2009 has been announced. "Friday 18th - Sunday 20th September, University of Cambridge, UK. A day of tutorials followed by 2 days of conference talks covering a wide variety of BSD related topics. This is the European BSD Community's annual event to meet, share and interact across the projects and between friends."

Full Story (comments: none)

Announcing the third annual LLVM developers' meeting

The third annual LLVM developers' meeting has been announced. "The third annual LLVM Developers' Meeting will be held on October 2, 2009 at Apple Inc.'s main campus in Cupertino, California, USA: http://llvm.org/devmtg/2009-10 As with previous meetings, this gathering serves as a forum for both developers and users of LLVM to get acquainted, to learn how LLVM is used, and to exchange ideas about LLVM and its potential applications. Beyond discussing the core LLVM compiler infrastructure, the meeting will also dedicate a significant amount of attention to Clang, LLVM's reusable, library-based frontend for C-based languages."

Full Story (comments: none)

LPC: Kernel/Userspace/User Interfaces Microconference

The Linux Plumbers Conference (LPC) has announced the lineup for the "Kernel/Userspace/User Interfaces" microconference. Three separate topics will be discussed as part of the microconference: Sarah Sharp will present USB 3.0, Matthew Garrett will look at dynamic power management, and Sukadev Bhattiprolu will cover checkpoint/restart in the kernel. LPC will be held September 23-25 (just after the co-located LinuxCon) in Portland, Oregon. "I am happy to announce that Jim Gettys, one of the original developers of the X Window System, past VP of Software at OLPC, now at Alcatel-Lucent Bell Labs, has selected a great lineup from a set of excellent submissions to the 'Kernel/Userspace/User Interfaces' microconference." Click below for the full announcement.

Full Story (comments: none)

LPC: security microconference announced

The Linux Plumbers Conference security microconference has been announced; LPC takes place September 21-23 in Portland, OR. "The Linux Plumbers Conference is fortunate to have James Morris and Paul Moore as runners for the Security microconference. James and Paul are quite prominent in the Linux security community, James in his role as Linux kernel security subsystem maintainer, and Paul in a number of roles, including leader of the NetLabel network-security subsystem. The Security microconference is a double-length microconference this year, as is fitting given the importance of security in today's world of spammers, botnet controllers, and many other black-hat threats."

Full Story (comments: none)

openSUSE Hack Week IV approaches

The next openSUSE Hack Week has been announced. "Novell is once again sponsoring a Hack Week, from July 20 through July 24th. This is an opportunity for Novell's Open Platform Solutions developers to use their Innovation Time Off and hunker down and work on the projects that catch their fancy. Hack Week projects can be new features, new applications, or improvements to existing services and applications."

Full Story (comments: none)

Events: July 23, 2009 to September 21, 2009

The following event listing is taken from the LWN.net Calendar.

July 20 - July 24: 2009 O'Reilly Open Source Convention (San Jose, CA, USA)
July 24 - July 30: DebConf 2009 (Cáceres, Extremadura, Spain)
July 25 - July 30: Black Hat Briefings and Training (Las Vegas, NV, USA)
July 25 - July 26: EuroSciPy 2009 (Leipzig, Germany)
July 25 - July 26: PyOhio 2009 (Columbus, OH, USA)
July 26 - July 27: InsideMobile (San Jose, CA, USA)
July 31 - August 2: FOSS in Healthcare unconference (Houston, TX, USA)
August 3 - August 5: YAPC::EU::2009 (Lisbon, Portugal)
August 7 - August 9: UKUUG Summer 2009 Conference (Birmingham, UK)
August 7: August Penguin 2009 (Weizmann Institute, Israel)
August 10 - August 14: USENIX Security Symposium (Montreal, Quebec, Canada)
August 11 - August 13: Flash Memory Summit (Santa Clara, CA, USA)
August 11: FOSS Dev Camp - Open Source World (San Francisco, CA, USA)
August 12 - August 13: OpenSource World Conference and Expo (San Francisco, CA, USA)
August 12 - August 13: Military Open Source Software (Atlanta, Georgia, USA)
August 13 - August 16: Hacking At Random 2009 (Vierhouten, The Netherlands)
August 18 - August 23: 2009 Python in Science Conference (Pasadena, CA, USA)
August 22 - August 23: Free and Open Source Conference (FrOSCon) (St. Augustin, Germany)
August 22 - August 23: OpenSQL Camp (St. Augustin, Germany)
August 31 - September 4: Ubuntu Developer Week (Internet)
September 1 - September 4: JBoss World Chicago (Chicago, IL, USA)
September 1 - September 4: Red Hat Summit Chicago (Chicago, IL, USA)
September 1 - September 5: DrupalCon (Paris, France)
September 4 - September 5: PyCon 2009 Argentina (Buenos Aires, Argentina)
September 7 - September 11: XtreemOS summer school (Oxford, UK)
September 7 - September 8: FRHACK.ORG IT Security Conference (Besançon, France)
September 8 - September 12: DjangoCon '09 (Portland, OR, USA)
September 10 - September 11: Fedora Developer Conference 2009 (Brno, Czech Republic)
September 12: Evil Robot Conference (Free Conference, Free Software) (Raleigh, NC, USA)
September 14 - September 18: Django Bootcamp at the Big Nerd Ranch (Atlanta, Georgia, USA)
September 15 - September 17: International Conference on IT Security Incident Management and IT Forensics (Stuttgart, Germany)
September 17 - September 18: Internet Security Operations and Intelligence 7 (San Diego, CA, USA)
September 17 - September 20: openSUSE Conference (Nuremberg, Germany)
September 18 - September 19: BruCON (Brussels, Belgium)
September 18 - September 20: EuroBSDCon 2009 (Cambridge, UK)
September 19: Atlanta Linux Fest 2009 (Atlanta, Georgia, USA)
September 19: Beijing Perl Workshop (Beijing, China)
September 19: Software Freedom Day (Worldwide)
September 20: SELinux Developer Summit 2009 @ LinuxCon (Portland, Oregon, USA)

If your event does not appear here, please tell us about it.

Event Reports

2009 PHP TestFest coverage

PHP.net covers the 2009 PHP TestFest. "So finally we are at the end of the 2009 PHP TestFest. It has been an outstanding success with the coverage increasing by about 2.5% overall and 887 new tests contributed in the TestFest SVN repository of which 637 have already been added to PHP CVS. User groups from all over the world have worked hard to make this happen and we thank each and every one of you for your contribution to PHP! You really made a difference to the PHP5.3 release quality."

Comments (none posted)

Page editor: Forrest Cook


Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds