
LWN.net Weekly Edition for March 18, 2010

Applications and bundled libraries

By Jake Edge
March 17, 2010

Package installation for Linux distributions has traditionally separated libraries and application binaries into different packages, so that only one version of a library would be installed and it would be shared by the applications that use it. Other operating systems (e.g. Windows, Mac OS X) often bundle a particular version of a library with each application, which can lead to many copies and versions of the same library co-existing on the system. While each model has its advocates, the Linux method is seen by many as superior because a security fix in a commonly-used library doesn't require updating multiple different applications—not to mention the space savings. But, it would seem that both Mozilla and Google may be causing distributions to switch to library-bundling mode in order to support the Firefox and Chromium web browsers.

One of the problems that distributions have run into when packaging Chromium—the free software version of Google's Chrome browser—is that it includes code for multiple, forked libraries. As Fedora engineering manager Tom "spot" Callaway put it: "Google is forking existing FOSS code bits for Chromium like a rabbit makes babies: frequently, and usually, without much thought." For distributions like Fedora, with a "No Bundled Libraries" policy, that makes it very difficult to include Chromium. But it's not just Chromium.

Mozilla is moving to a different release model, which may necessitate distribution changes. The idea is to include feature upgrades as part of minor releases—many of which are done to fix security flaws—which would come out every 4-6 weeks or so. Major releases would be done at roughly six-month intervals, and older major releases would stop being supported soon after a subsequent release. Though the plan is controversial—particularly merging security fixes and features into the same minor releases—it may work well for Mozilla and for the bulk of Mozilla's users, who are on Windows.

Linux distributions often extend support well beyond six months or a year, though. While Mozilla is still supporting a particular release, that's easy to do, but once Mozilla stops that support, it becomes more difficult. Distributions have typically backported security fixes from newer Firefox versions into the versions that they shipped, but as Mozilla moves to a shorter support window, that gets harder to do. Backporting may also run afoul of the Mozilla trademark guidelines—something that led Debian to create "Iceweasel". The alternative, updating Firefox to the most recent version, has its own set of problems.

A new version of Mozilla is likely to use updated libraries, different from those that the other packages in the distribution use. Depending on the library change, it may be fairly straightforward to use it for those other applications, but there is a testing burden. Multiple changed libraries have a ripple effect as well. Then there is the problem of xulrunner.

Xulrunner is meant to isolate applications that want to embed Mozilla components (e.g. the Gecko renderer) from changes in the Mozilla platform. But xulrunner hasn't really committed to a stable API, so updates to xulrunner can result in a cascade of other updates. There are many different packages (e.g. Miro, epiphany, liferea, yelp, etc.) that use xulrunner, so changes to that package may require updates to those dependencies, which may require other updated libraries, and so on.

The Windows/Mac solution has the advantage that updates to Firefox do not require any coordination with other applications, but it has its set of downsides as well. Each application needs some way to alert users that there are important security fixes available and have some mechanism for users to update the application. Rather than a central repository that can be checked for any pending security issues, users have to run each of their installed applications to update their system. Furthermore, a flaw in a widely used library may require updating tens or hundreds of applications, whereas, in the Linux model, just upgrading the one library may be sufficient.

It would appear that Ubuntu is preparing to move to the bundled library approach for Firefox in its upcoming 10.04 (Lucid Lynx) release. That is a "long-term support" (LTS) release that Ubuntu commits to supporting for three years on the desktop. One can imagine that it will be rather difficult to support Firefox 3.6 in 2013, so the move makes sense from that perspective. But there are some other implications of that change.

For one thing, the spec mentions the need to "eliminate embedders" because they could make it difficult to update Firefox: "non-trivial gecko embedders must be eliminated in stable ubuntu releases; this needs to happen by moving them to an existing webkit variant; if no webkit port exists, porting them to next xulrunner branch needs to be done." Further action items make it clear that finding WebKit alternatives for Gecko-embedders is the priority, with removal from Ubuntu (presumably to "universe") being the likely outcome for most of the xulrunner-using packages.

In addition, Ubuntu plans to use the libraries that are bundled with Firefox, rather than those that the rest of the system uses, at least partially because of user experience issues: "enabling system libs is not officially supported upstream and supporting this caused notable work in the past while sometimes leading to a suboptimal user experience due to version variants in the ubuntu released compared to the optimize version shipped in the firefox upstream tarballs." While it may be more in keeping with Mozilla's wishes, it certainly violates a basic principle of Linux distributions. It doesn't necessarily seem too dangerous for one package, but it is something of a slippery slope.

The release model for Chromium is even more constricting, as each new version is meant to supplant the previous version. As Callaway described, it contains various modified versions of libraries, which makes it difficult for distributions to officially package in any way other than with bundled libraries. If that happens in Ubuntu, for example, it would double the number of applications shipped with bundled libraries. Going from one to two may seem like a fairly small thing, but will other upstreams start heading down that path?

The Fedora policy linked above is worth reading for some good reasons not to bundle libraries, but there are some interesting possibilities in a system where that was the norm. Sandboxing applications for security purposes would be much more easily done if all the code lives in one place and could be put into some kind of restrictive container or jail. Supporting multiple different versions of an application also becomes easier.

It is fundamentally different from the way Linux distributions have generally operated, but some of that is historical. While bandwidth may not be free, it is, in general, dropping in price fairly quickly. Disk space is cheap, and getting cheaper; maybe there is room to try a different approach. The distribution could still serve as a central repository for packages and, perhaps more importantly, as a clearinghouse for security advisories on those packages.

Taking it one step further and sandboxing those applications, so that any damage caused by an exploit is limited, might be a very interesting experiment. The free software world is an excellent candidate for that kind of trial; in fact, it is hard to imagine it being done any other way, since the proprietary operating systems don't have as free a hand to repackage the applications that they run. It seems likely that the negatives will outweigh the advantages, but we won't really know until someone gives it a try.

Comments (115 posted)

Archiveopteryx

By Jonathan Corbet
March 16, 2010
Your editor, like many LWN readers, deals in large quantities of electronic mail. As a result, tools which can help with the mail flood are always of interest. One tool which has been on the radar for some time is Archiveopteryx, a database-backed mail store which is meant to deal with high mail volumes. Archiveopteryx does not seem to have a particularly high profile, but it does have a dedicated user base and a steady development pace; Archiveopteryx 3.1.3 was released on March 10.

The idea behind Archiveopteryx is simple enough: build a mail store around the PostgreSQL database, then provide access to it through the usual protocols. Installation is relatively easy for a site which already has PostgreSQL in place; a simple "make install" does the bulk of the work. A straightforward configuration file allows for control over protocols, ports, etc., and there is an administrative program which can be used to set up users within the mail store.
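
For the curious, a minimal setup session might look something like the sketch below. The aox administration commands are recalled from the 3.x-era documentation rather than taken from this review, so treat the exact names and arguments as approximate.

    $ make install
    $ aox add user alice secret alice@example.com   # create a mail store user
    $ aox start                                     # start the servers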

On the protocol side, Archiveopteryx supports POP and IMAP for access to email. It can handle mail receipt directly through SMTP, but that is not normally how one would do things; there is still value in having a real mail transfer agent in the process. The preferred mode is to use the LMTP protocol to accept mail from the MTA; there is also a command-line utility which can be used for that purpose if need be. The installation instructions include straightforward recipes for configuring Archiveopteryx to work with a number of MTAs. Archiveopteryx also supports the Sieve filtering standard and the associated protocol for managing scripts.

Those who set up a large-scale mail store can be expected to have some archived mail sitting around. Archiveopteryx provides an aoximport tool for importing this email into the system. Your editor found it to be overly simple and inflexible, though. It is unable to create subfolders when importing an entire folder tree (they must already be in place or the import fails), and it failed to import the bulk of the messages when working with a Dovecot-managed maildir mailbox. The importer, perhaps, is like the Debian installer: users tend to only need it once, so it gets relatively little work once the basic functionality is in place.

Archiveopteryx works well as an IMAP server, and it is indeed fast when dealing with folders containing many messages. Operations like deleting or refiling groups of messages go notably faster than with Dovecot on the same server. On the other hand, your editor was unable to get the Sieve script functionality to work at all; this is probably more a matter of incomplete configuration than fundamental problems with Archiveopteryx itself, but it was still a discouraging development.

That ties into the biggest disappointment with Archiveopteryx, though, which is probably totally unjustified: your editor would like this tool to be something that it is not. If one is going to go to the trouble of storing all of one's email into a complex database, it would be nice to be able to do fast, complex searches on that email. That way, the next time it becomes necessary to, say, collect linux-kernel zombie posts, a quick search will do. Archiveopteryx seems to have a search feature built into it, but actually using that feature appears to be limited to exporting messages with the aoxexport tool. The IMAP protocol is not particularly friendly toward the implementation of fast, server-side searching, but it still seems like something better should be possible.

All that should not detract from what Archiveopteryx does well: store and serve email in large volumes using standard protocols. As a tool for ISPs and for others needing to make email available to lots of users, it seems highly useful; it is clearly meant to scale in ways that servers like Dovecot are not.

There is one remaining problem, though: the future of Archiveopteryx is not entirely assured. For years, this program has been developed by a company called Oryx, which offered commercial support for it. In June, 2009, though, the developers behind Oryx announced that the company was shutting down, with the final closure expected in October of this year. They say:

So we're gradually closing down Oryx, BUT NOT ARCHIVEOPTERYX. We'll relicense it using either the BSD or Apache 2 licenses and continue making new releases for years to come. We both feel obliged to keep the existing archives viable.

(The code is currently licensed under OSLv3.)

A sense of obligation may keep Archiveopteryx going for a while, but if it's going to be something that people can count on for years into the future, it will have to develop a more active development community. Archiveopteryx has the look of a solidly company-controlled project - the project's git repository is overwhelmingly dominated by commits from the two principal developers. Such projects are always at a bit of risk if the backing company runs into trouble. But Archiveopteryx is free software, and highly useful free software at that; it seems like its user community should be able to carry it forward.

Comments (28 posted)

OpenTaxSolver solves taxes, openly

March 17, 2010

This article was contributed by Nathan Willis

OpenTaxSolver (OTS) takes on one of open source software's long-standing criticisms: the lack of a simple-to-use tax return preparation application on the level of TaxCut or TurboTax. Although OTS does not feature the step-by-step, question-driven interface popular in the proprietary products, it includes an optional graphical front-end, and enables the user to systematically fill out the most popular US federal income tax forms: 1040, Schedules A, B, C, and D, and eight US state income tax returns.

Over the years there have been several other open source tax preparation projects, but most tend to produce working solutions for only a few years, then fall out of maintainership or disappear altogether. Because the income tax code changes every year, the math and the interface must change every year — in unpredictable and sometimes complicated ways. Consequently, the fact that OTS has been making stable releases since 2004 makes it a stand-out. The team is composed largely of individuals who each choose to maintain specific tax solvers, which explains the particular set of state returns supported — in past years a few other forms were supported, but were not continued in subsequent annual releases. Understandably, work on the code is cyclical, with discussion picking up each year as tax time in the US draws near.

[OpenTaxSolver GUI]

OTS is written in C, and at its heart is a text-driven utility that reads input data from an external file, "solves" the tax calculation, and writes its output to a separate file. Experienced users may still prefer this approach, but the project's site says most choose to use the bundled GUI instead. The GUI version reads in data from an example or template file, allows the user to input the correct numbers, then performs the back-end calculations. Using OTS will not fill out your return for you; it just performs the calculations you need to fill it out correctly yourself.

The latest release is version 7.05, updated March 9, 2010. It includes support for 2009 US Form 1040 (individual federal tax return), plus Schedule A (itemized deductions), Schedule B (Interest and Ordinary Dividends), Schedule C (Profit or Loss From Business), and Schedule D (Capital Gains and Losses), and state income tax returns for 8 of the 41 US states with a state income tax: California, Massachusetts, North Carolina, New Jersey, New York, Ohio, Pennsylvania, and Virginia. Packages are provided for Linux, Windows, and (new for this year) Mac OS X, each of which contains the appropriate binaries as well as the GPLv2-licensed source code.

The Linux package is a 421KB tarball, containing the command line and GUI versions of the program, example data files, and a build script that can be used to rebuild the binaries. The GUI is implemented in Open Tool Kit (Otk), a tiny cross-platform widget library that is entirely self-contained. There is no installation process required; one needs only to unpack the tarball to an appropriate directory and run the binaries.

Command-line usage

There is a separate binary for each state return, a binary for Schedule C, and a single binary that handles US 1040 and Schedules A, B, and D. To generate a return from the command line, first open up a template or example file for the appropriate form from the examples_and_templates directory. The only difference between the two is that "examples" are completely filled-in with test data, while the "templates" contain all zeroes in the numeric entries and blanks in the text entries.

The site's instructions say to create a copy of the template for each individual return being prepared, a helpful tip for those who do taxes for friends and family members. The templates use the .dat extension, but are plain text, and line-oriented. Each field from the official IRS form which you are expected to fill in is represented by a labeled line in the file, and comments both expand on the purpose of the line and give valid input, such as:

    Dependents     ??       {Number of Dependents, self=1, spouse, etc.}
    [...]
    D4		;	{ Short-term gain from 6252, gain or loss from Forms 4684, 6781, 8824. }

The input file does not include every line in the final form, of course; the idea is that the user fills in the basic data, and the solver calculates all of the intermediate lines with the relevant formula. This is where the examples come in handy. While the template provides a line for every required field, the filled-out example input is more helpful because it gives clues as to how to enter data for specific situations. For example, more than one source of interest income is entered for L8a:

    L8a                     {Interest 1099-INT}
              37.71           {Bank Savings}
              12.65           {Credit Union}
              16.85           {Savings Bank}
                    ;

When the input file is complete, simply execute the appropriate binary from the shell prompt, passing the input file as an argument, such as: ./bin/taxsolve_US1040_2009 my_2009_1040.dat. The solver will generate an output file named my_2009_1040.out, containing the correct number for every line of the form, including the final amount owed or to be refunded. For the 1040 solver, numbers for the various Schedules are included in the output in tab-offset blocks at the point just before the line where they are referenced in the main form.

From there, filling out the final forms (whether on paper or PDF) is as simple as copying the data from the output file. There has been talk in past years of adding additional output techniques, including automatically filling-in the editable PDF forms provided by the IRS, or of transforming OTS's output into TurboTax's Tax Exchange Format (TXF) files, but thus far neither technique has made it into a release. A discussion thread on the project's SourceForge site mentions several methods that an entrepreneurial hacker can use to transform OTS text output into a format that can be imported to a PDF form directly.

OTS_GUI

The final binary in the package is the OTS GUI application. Unlike the command-line solvers, though, it must be launched with the provided shell script, Run_taxsolve_GUI_Unix.sh. At launch, it presents a menu of the available tax solvers, and a button with which to select an input file. The input file is loaded in, pre-filling the form fields in the GUI. If the input file is already correct, hitting the "Compute Tax" button generates the output file automatically.

But the advantage of the GUI, of course, is that browsing through the fields and editing the input numbers is easier than editing the text file beforehand. The GUI breaks up the long list of fields into convenient, page-sized chunks, including the line numbers and editable comments.

Otk is far from being a "flashy" user interface toolkit; it is very limited in layout and text options, and incorporates a look-and-feel that might even elicit sneers from Motif and Tcl/Tk scripters. Aesthetics aside, though, in practical usage the bare-bones text rendering can be difficult to read — horizontal and vertical scaling seem to be calculated as a percentage of the window dimensions, causing some fields and comments to be overly compressed, and others stretched out. Still, with a little trial and error, it is easy enough to step through all of the pages and produce an accurate output file, and that is ultimately the only goal.

Speaking of Tcl/Tk, there is an alternate, Tcl/Tk-based GUI available for download in a separate package. The timestamp on the latest release is from February of 2010. However, it is source code only, and depends on several external Tcl libraries; building it is not for the faint-of-Tcl-heart.

Technology and taxes

OTS keeps it simple, which is probably the key to its survival over this many years: TaxGeek (the second-most active project) has not been updated since early 2008, and the once-promising Tax Code Software Foundation site now redirects to a holding page. The results are not much better for countries other than the US; a few dormant projects exist for UK, German, and Australian returns, but nothing is active.

A combination of factors is proposed to explain the lack of open source tax preparation software whenever the discussion comes up, including the level of legal expertise required to keep up with the ever-changing tax code, and the fact that most geeks do not find the arithmetic of filling out the paperwork difficult enough to warrant writing an application to do it for them.

Furthermore, the "correctness guarantee" question comes up in any discussion, despite the fact that other tax preparation services and programs only offer guarantees subject to their own list of restrictions and limitations. The OTS site argues that the only way to know for sure that a tax preparation program's numbers are correct is to examine the formulas it uses — something impossible to do with proprietary code. Searching on the web for "mistakes" and "TurboTax" indeed turns up a massive number of hits, many coming from the professional web sites of human tax preparers. OTS at least makes its math accessible, and it does log its steps to stdout and mark them carefully in the output file to assist in double-checking.

To some, the answer is that the government should provide free software enabling its citizens to prepare and file their returns electronically. In the US, there is at least a partial solution, which may also detract from the willpower of the open source community to produce its own alternative. The IRS provides all of its forms, instructions, and ancillary publications for free in PDF format. It has also started allowing individuals and small businesses to file returns electronically, at no charge — through the use of approved, regulated third-party companies.

But this option creates another set of problems for some people with free software leanings. As the OTS site observes, the third-party electronic filing services require entrusting a stranger with highly sensitive personal information, but they also impose other arbitrary restrictions: the return must be prepared and filed all in one session, there are income-limit and business-size restrictions, and each individual must file his or her own return only. To the OTS team, those are reasons enough to take the (tax) law into their own hands.

Any tax preparer will tell you that nothing trumps personal experience when it comes to getting the most deductions and advantages when filing your return. OTS does not attempt to do the professional tax preparer's job; it merely attempts to speed up the process of composing a normal return for a user who already knows more or less what that return should include. Then again, especially if you are the resident tax preparer for your family or friends, a tool to crank out those returns rapidly and systematically is still a win. Time is money.

Comments (17 posted)

Page editor: Jonathan Corbet

Security

Linux adds router denial-of-service prevention

By Jake Edge
March 17, 2010

The recently completed Linux 2.6.34 merge window included a patch to eliminate a type of denial-of-service attack against routers. The "Generalized TTL Security Mechanism" (GTSM) is described in RFC 5082 as a means to protect routers from CPU-utilization attacks—essentially overloading the router with bogus Border Gateway Protocol (BGP) packets. With the addition of a simple socket option, those attacks can be easily thwarted.

Time-to-live (or TTL) is an eight-bit field in an IP packet that is initially set to some value (by default 64) on the sending host. Each host that forwards the packet decrements it, and if it ever reaches zero, the packet is discarded. The idea is to eliminate the possibility of immortal packets that continue to be forwarded in some kind of Internet loop eventually consuming all of the bandwidth. Tools like traceroute and ping can change the TTL values of the packets they send to provide different kinds of information about the network.

Since TTL is already a part of IP, it can be extended in compatible ways. The idea behind GTSM is that two applications negotiate a minimum TTL value that they will accept; any packets that have a lower value will be discarded. Because routers that are communicating via BGP—the core Internet routing protocol—are typically adjacent (i.e. one hop from each other), and TTL spoofing is considered to be more-or-less impossible (an attacker cannot prevent the intermediate routers along its path from decrementing the TTL), the TTL value can be used to eliminate spoofed packets. By setting the minimum TTL value to 255, and sending their packets with a TTL of 255, two routers can ensure that they only process BGP packets from each other.

BGP sessions typically use an MD5-based signature to authenticate the sender. Prior to GTSM, an attacker could spoof IP packets to a router, which looked like they came from one of its peers. It would then do the MD5 calculation and find out that, in fact, the packet was bogus. But that takes CPU time. Enough spoofed packets may tie up the CPU such that real messages get lost. GTSM allows routers to drop the spoofed packets without ever calculating the MD5 hash.

The Linux patch is rather simple, and the implementation is the same as that for BSD kernels. A new option (IP_MINTTL) is added that can be used with setsockopt() to change the minimum TTL for a socket. If it is set, the TCP code checks the value and discards packets that have smaller TTLs. The patch does not add support for other protocols (e.g. UDP), nor for an IPv6 equivalent.
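
As a concrete illustration, here is a minimal sketch of how a BGP daemon might enable GTSM on an already-connected socket. The helper function is hypothetical; the fallback definition of IP_MINTTL uses the value from the Linux patch, for systems whose headers predate it.

    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef IP_MINTTL
    #define IP_MINTTL 21    /* value used by the Linux patch */
    #endif

    /* Send with TTL 255 and drop anything arriving with a smaller TTL,
     * so that only packets from a directly-adjacent peer are processed. */
    static int enable_gtsm(int sock)
    {
        int ttl = 255;

        if (setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl)) < 0)
            return -1;
        return setsockopt(sock, IPPROTO_IP, IP_MINTTL, &ttl, sizeof(ttl));
    }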

Applications would need to negotiate the use of GTSM via some higher-level protocol and, as the RFC points out, need to authenticate the peer before enabling GTSM. Otherwise, another kind of denial of service could be performed: a bogus packet requesting GTSM, if processed, could cause a host to enable IP_MINTTL and start dropping legitimate traffic.

It is interesting to see a basic IP building-block like TTL being repurposed to stop these kinds of attacks. The idea has been around for a bit, with the first RFC being accepted in 2004. As with many Internet security techniques, it only came about after these CPU-utilization attacks became widespread. Each time attackers find a new hole, various folks find some kind of fix. It is a non-stop game of whack-a-mole, and one that isn't likely to end soon.

Comments (5 posted)

Brief items

SpamAssassin-milter has a remote root vulnerability

SpamAssassin-milter plugs SpamAssassin into mail agents which speak the "milter" protocol. It is, evidently, trivially easy to get this plugin to execute commands as root when it is used with Postfix in some configurations, and possibly with other mailers as well. There is a bug tracker entry where progress on a patch can be followed; the developers seem to not be in a great hurry, despite the fact that exploits are circulating. Sites using SpamAssassin-milter should probably just disable it for now. (Thanks to Christof Damian).

Comments (6 posted)

New vulnerabilities

dpkg: path traversal

Package(s): dpkg     CVE #(s): CVE-2010-0396
Created: March 11, 2010     Updated: March 22, 2010
Description: From the Debian advisory:

William Grant discovered that the dpkg-source component of dpkg, the low-level infrastructure for handling the installation and removal of Debian software packages, is vulnerable to path traversal attacks. A specially crafted Debian source package can lead to file modification outside of the destination directory when extracting the package content.

Alerts:
Fedora FEDORA-2010-4344 dpkg 2010-03-13
Fedora FEDORA-2010-4371 dpkg 2010-03-13
Ubuntu USN-909-1 dpkg 2010-03-11
Debian DSA-2011-1 dpkg 2010-03-10

Comments (none posted)

drbd8: privilege escalation

Package(s): drbd8     CVE #(s):
Created: March 16, 2010     Updated: March 17, 2010
Description: From the Debian advisory:

Philipp Reisner fixed an issue in the drbd kernel module that allows local users to send netlink packets to perform actions that should be restricted to users with CAP_SYS_ADMIN privileges. This is a similar issue to those described by CVE-2009-3725.

Alerts:
Debian DSA-2015 drbd8 2010-03-15

Comments (none posted)

drupal: multiple vulnerabilities

Package(s): drupal6     CVE #(s):
Created: March 15, 2010     Updated: March 17, 2010
Description: From the Debian advisory:

Several vulnerabilities (SA-CORE-2010-001) have been discovered in drupal6, a fully-featured content management framework.

Installation cross site scripting

A user-supplied value is directly output during installation allowing a malicious user to craft a URL and perform a cross-site scripting attack. The exploit can only be conducted on sites not yet installed.

Open redirection

The API function drupal_goto() is susceptible to a phishing attack. An attacker could formulate a redirect in a way that gets the Drupal site to send the user to an arbitrarily provided URL. No user submitted data will be sent to that URL.

Locale module cross site scripting

Locale module and dependent contributed modules do not sanitize the display of language codes, native and English language names properly. While these usually come from a preselected list, arbitrary administrator input is allowed. This vulnerability is mitigated by the fact that the attacker must have a role with the 'administer languages' permission.

Blocked user session regeneration

Under certain circumstances, a user with an open session that is blocked can maintain his/her session on the Drupal site, despite being blocked.

Alerts:
Debian DSA-2016-1 drupal6 2010-03-13

Comments (none posted)

egroupware: multiple vulnerabilities

Package(s): egroupware     CVE #(s):
Created: March 12, 2010     Updated: March 17, 2010
Description:

From the Debian advisory:

Nahuel Grisolia discovered two vulnerabilities in Egroupware, a web-based groupware suite: Missing input sanitising in the spellchecker integration may lead to the execution of arbitrary commands and a cross-site scripting vulnerability was discovered in the login page.

Alerts:
Debian DSA-2013-1 egroupware 2010-03-11

Comments (none posted)

kernel: denial of service

Package(s): kernel     CVE #(s): CVE-2010-0623
Created: March 17, 2010     Updated: May 3, 2010
Description: The kernel prior to version 2.6.33-rc7 does not properly manage futex reference counts, enabling local users to force a kernel oops.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Mandriva MDVSA-2010:088 kernel 2010-04-30
Pardus 2010-48 kernel 2010-04-09
SuSE SUSE-SA:2010:018 kernel 2010-03-22
Ubuntu USN-914-1 linux, linux-source-2.6.15 2010-03-17

Comments (none posted)

kernel: remote denial of service

Package(s): kernel     CVE #(s): CVE-2010-0008
Created: March 17, 2010     Updated: July 5, 2011
Description: A maliciously-crafted SCTP packet can cause a kernel crash on the targeted system.
Alerts:
SUSE SUSE-SU-2011:0737-1 kernel 2011-07-05
SUSE SUSE-SU-2011:0711-1 kernel 2011-06-29
SUSE SUSE-SA:2011:026 kernel 2011-05-20
Red Hat RHSA-2010:0342-01 kernel 2010-04-06
CentOS CESA-2010:0147 kernel 2010-03-18
CentOS CESA-2010:0146 kernel 2010-03-17
Red Hat RHSA-2010:0149-01 kernel 2010-03-16
Red Hat RHSA-2010:0148-01 kernel 2010-03-16
Red Hat RHSA-2010:0147-01 kernel 2010-03-16
Red Hat RHSA-2010:0146-01 kernel 2010-03-16
Ubuntu USN-947-2 kernel 2010-06-04
Ubuntu USN-947-1 linux, linux-source-2.6.15 2010-06-03

Comments (none posted)

kernel: null pointer dereference

Package(s): kernel     CVE #(s): CVE-2009-4271
Created: March 17, 2010     Updated: June 4, 2010
Description: The kernel can be forced to dereference a null pointer while executing a core dump, enabling a denial of service attack or possibly privilege escalation, depending on how the kernel is configured.
Alerts:
Ubuntu USN-947-1 linux, linux-source-2.6.15 2010-06-03
Ubuntu USN-947-2 kernel 2010-06-04
CentOS CESA-2010:0146 kernel 2010-03-17
Red Hat RHSA-2010:0146-01 kernel 2010-03-16

Comments (none posted)

kernel: null pointer dereference

Package(s): kernel     CVE #(s): CVE-2010-0437
Created: March 17, 2010     Updated: June 4, 2010
Description: Due to a flaw in the IPv6 protocol implementation, a remote attacker might be able to force a null pointer dereference with hostile network traffic.
Alerts:
Red Hat RHSA-2010:0161-01 kernel-rt 2010-03-23
CentOS CESA-2010:0147 kernel 2010-03-18
Red Hat RHSA-2010:0149-01 kernel 2010-03-16
Red Hat RHSA-2010:0148-01 kernel 2010-03-16
Red Hat RHSA-2010:0147-01 kernel 2010-03-16
Ubuntu USN-947-2 kernel 2010-06-04
Ubuntu USN-947-1 linux, linux-source-2.6.15 2010-06-03

Comments (none posted)

libpng: resource consumption

Package(s): libpng10     CVE #(s): CVE-2010-0205
Created: March 16, 2010     Updated: October 6, 2010
Description: From the Red Hat bugzilla:

It was reported that libpng suffers from an issue where certain highly compressed ancillary chunks (zTxt, iTxt, iCCP) could cause libpng to stall or crash by consuming huge amounts of memory. This vulnerability is reported to affect all versions of libpng prior to 1.4.1, as well as versions of Firefox from 3.0. It is also possible that other gecko-based browsers are vulnerable as well, as well as all versions of pngcrush, ImageMagick, and GraphicsMagick.

Alerts:
Oracle ELSA-2012-0317 libpng 2012-02-21
Gentoo 201010-01 libpng 2010-10-05
CentOS CESA-2010:0534 libpng 2010-08-16
CentOS CESA-2010:0534 libpng 2010-07-21
Fedora FEDORA-2010-10833 libpng10 2010-07-06
CentOS CESA-2010:0534 libpng 2010-07-14
CentOS CESA-2010:0534 libpng 2010-07-21
Red Hat RHSA-2010:0534-01 libpng 2010-07-14
SuSE SUSE-SR:2010:012 evolution-data-server, python/libpython2_6-1_0, mozilla-nss, memcached, texlive/te_ams, mono/bytefx-data-mysql, libpng-devel, apache2-mod_php5, ncpfs, pango, libcmpiutil 2010-05-25
SuSE SUSE-SR:2010:011 dovecot12, cacti, java-1_6_0-openjdk, irssi, tar, fuse, apache2, libmysqlclient-devel, cpio, moodle, libmikmod, libicecore, evolution-data-server, libpng/libpng-devel, libesmtp 2010-05-10
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14
Debian DSA-2032-1 libpng 2010-04-11
Pardus 2010-41 libpng-1.2.43-21-6 libpng-1.2.43-20-10 2010-03-29
Fedora FEDORA-2010-4616 libpng 2010-03-16
Fedora FEDORA-2010-4673 libpng 2010-03-16
Mandriva MDVSA-2010:064 libpng 2010-03-23
Mandriva MDVSA-2010:063 libpng 2010-03-22
Ubuntu USN-913-1 libpng 2010-03-16
Fedora FEDORA-2010-3414 libpng10 2010-03-03
Fedora FEDORA-2010-3375 libpng10 2010-03-03

Comments (none posted)

moin: multiple vulnerabilities

Package(s): moin     CVE #(s): CVE-2010-0668 CVE-2010-0669 CVE-2010-0717
Created: March 12, 2010     Updated: October 19, 2012
Description:

From the Debian advisory:

CVE-2010-0668: Multiple security issues in MoinMoin related to configurations that have a non-empty superuser list, the xmlrpc action enabled, the SyncPages action enabled, or OpenID configured.

CVE-2010-0669: MoinMoin does not properly sanitize user profiles.

CVE-2010-0717: The default configuration of cfg.packagepages_actions_excluded in MoinMoin does not prevent unsafe package actions.

Alerts:
Gentoo 201210-02 moinmoin 2012-10-18
Ubuntu USN-911-1 moin 2010-03-11
Debian DSA-2014-1 moin 2010-03-12

Comments (none posted)

ncpfs: multiple vulnerabilities

Package(s): ncpfs     CVE #(s): CVE-2010-0790 CVE-2010-0791
Created: March 12, 2010     Updated: June 14, 2010
Description:

From the Mandriva advisory:

sutil/ncpumount.c in ncpumount in ncpfs 2.2.6 produces certain detailed error messages about the results of privileged file-access attempts, which allows local users to determine the existence of arbitrary files via the mountpoint name (CVE-2010-0790).

The (1) ncpmount, (2) ncpumount, and (3) ncplogin programs in ncpfs 2.2.6 do not properly create lock files, which allows local users to cause a denial of service (application failure) via unspecified vectors that trigger the creation of a /etc/mtab~ file that persists after the program exits (CVE-2010-0791).

Alerts:
SuSE SUSE-SR:2010:012 evolution-data-server, python/libpython2_6-1_0, mozilla-nss, memcached, texlive/te_ams, mono/bytefx-data-mysql, libpng-devel, apache2-mod_php5, ncpfs, pango, libcmpiutil 2010-05-25
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14
Mandriva MDVSA-2010:061 ncpfs 2010-03-11

Comments (none posted)

pango: denial of service

Package(s): pango     CVE #(s): CVE-2010-0421
Created: March 16, 2010     Updated: March 2, 2011
Description: From the Red Hat advisory:

An input sanitization flaw, leading to an array index error, was found in the way the Pango font rendering library synthesized the Glyph Definition (GDEF) table from a font's character map and the Unicode property database. If an attacker created a specially-crafted font file and tricked a local, unsuspecting user into loading the font file in an application that uses the Pango font rendering library, it could cause that application to crash.

Alerts:
Ubuntu USN-1082-1 pango1.0 2011-03-02
Mandriva MDVSA-2010:121 pango 2010-06-22
SuSE SUSE-SR:2010:012 evolution-data-server, python/libpython2_6-1_0, mozilla-nss, memcached, texlive/te_ams, mono/bytefx-data-mysql, libpng-devel, apache2-mod_php5, ncpfs, pango, libcmpiutil 2010-05-25
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14
SuSE SUSE-SR:2010:009 viewvc, krb5, pango, gimp, kdebase3, kde4-kdm 2010-04-14
Pardus 2010-40 pango-1.26.2-34-10 pango-1.21.3-28-8 2010-03-29
Debian DSA-2019-1 pango1.0 2010-03-20
CentOS CESA-2010:0140 pango 2010-03-16
Red Hat RHSA-2010:0140-01 pango 2010-03-15

Comments (none posted)

pulseaudio: denial of service

Package(s): pulseaudio     CVE #(s): CVE-2009-1299
Created: March 16, 2010     Updated: February 10, 2014
Description: From the Debian advisory:

Dan Rosenberg discovered that the PulseAudio sound server creates a temporary directory with a predictable name. This allows a local attacker to create a Denial of Service condition or possibly disclose sensitive information to unprivileged users.

Alerts:
Gentoo 201402-10 pulseaudio 2014-02-08
Mandriva MDVSA-2010:124 pulseaudio 2010-06-23
SuSE SUSE-SR:2010:007 cifs-mount/samba, compiz-fusion-plugins-main, cron, cups, ethereal/wireshark, krb5, mysql, pulseaudio, squid/squid3, viewvc 2010-03-30
Debian DSA-2017-1 pulseaudio 2010-03-15

Comments (none posted)

tar, cpio: arbitrary code execution

Package(s): tar cpio     CVE #(s): CVE-2010-0624
Created: March 16, 2010     Updated: December 1, 2013
Description: From the Red Hat advisory:

A heap-based buffer overflow flaw was found in the way tar and cpio expand archive files. If a user were tricked into expanding a specially-crafted archive, it could cause the executable to crash or execute arbitrary code with the privileges of the user running it.

Alerts:
Ubuntu USN-2456-1 cpio 2015-01-08
Gentoo 201311-21 cpio 2013-11-28
Gentoo 201111-11 tar 2011-11-20
rPath rPSA-2010-0070-1 cpio tar 2010-10-27
SuSE SUSE-SR:2010:011 dovecot12, cacti, java-1_6_0-openjdk, irssi, tar, fuse, apache2, libmysqlclient-devel, cpio, moodle, libmikmod, libicecore, evolution-data-server, libpng/libpng-devel, libesmtp 2010-05-10
Pardus 2010-42 tar-1.21-18-4 cpio-2.9-9-5 cpio-2.9-9-4 tar-1.20-17-4 2010-03-29
Fedora FEDORA-2010-4306 tar 2010-03-12
Fedora FEDORA-2010-4302 cpio 2010-03-12
Mandriva MDVSA-2010:065 cpio 2010-03-23
CentOS CESA-2010:0143 cpio 2010-03-17
CentOS CESA-2010:0142 tar 2010-03-17
CentOS CESA-2010:0145 cpio 2010-03-17
Fedora FEDORA-2010-4321 cpio 2010-03-12
CentOS CESA-2010:0141 tar 2010-03-16
CentOS CESA-2010:0144 cpio 2010-03-16
Fedora FEDORA-2010-4309 tar 2010-03-12
Red Hat RHSA-2010:0145-01 cpio 2010-03-15
Red Hat RHSA-2010:0144-01 cpio 2010-03-15
Red Hat RHSA-2010:0143-01 cpio 2010-03-15
Red Hat RHSA-2010:0142-01 tar 2010-03-15
Red Hat RHSA-2010:0141-01 tar 2010-03-15

Comments (none posted)

viewvc: cross-site scripting

Package(s): viewvc     CVE #(s):
Created: March 16, 2010     Updated: April 5, 2010
Description: From the viewvc changelog:

Version 1.1.4 security fix: escape user-provided query form input to avoid XSS attack.

Alerts:
Fedora FEDORA-2010-5507 viewvc 2010-04-01
Fedora FEDORA-2010-5524 viewvc 2010-04-01
Fedora FEDORA-2010-4326 viewvc 2010-03-12
Fedora FEDORA-2010-4295 viewvc 2010-03-12

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 2.6.34-rc1; no new prepatches have been released over the last week.

Stable updates: 2.6.32.10 and 2.6.33.1 were released on March 15. They are both massive, with 145 and 123 patches, respectively.

Comments (none posted)

Quotes of the week

May be I should start to stick posters with photos of modules entitled "I want to believe" everywhere in my flat. Or perhaps I'm going to buy electronic glasses that display modules advertizing in the street. I'm not sure yet but I'll find a way.
-- Frederic Weisbecker

I thought everyone learned the lesson behind SystemTap's failure (and to a certain degree this was behind Oprofile's failure as well): when it comes to tooling/instrumentation we dont want to concentrate on the fancy complex setups and abstract requirements drawn up by CIOs, as development isnt being done there. Concentrate on our developers today, and provide no-compromises usability to those who contribute stuff.

If we dont help make the simplest (and most common) use-case convenient then we are failing on a fundamental level.

-- Ingo Molnar

Jan suggests that we not surprise users by having delalloc enabled when ext3 is mounted with the ext4 driver. However there are other behavior differences as well, mballoc behavior comes to mind at least. What about the 32000 subdir limit? If we go back to ext3 is it ok with the subsecond timestamps and creation time etc? Maybe so... have we tested any of this?

At what point do we include the phase of the moon as worth considering when describing ext4.ko behavior?

-- Eric Sandeen

Comments (5 posted)

After the merge window closed...

By Jonathan Corbet
March 16, 2010
Toward the end of the 2.6.33 development cycle, Linus suggested that he might make the next merge window a little shorter than usual. And, indeed, 2.6.34-rc1 came out on March 8, twelve days after the 2.6.33 release. A number of trees got caught out in the cold as a result of that change, and that appears to be a result that suits Linus just fine.

That said, some trees have been pulled after the -rc1 release. These include the trivial tree, with the usual load of spelling fixes and other small changes. There was a large set of ARM changes, including support for a number of new boards and devices. The memory usage controller got a new threshold feature allowing for finer-grained control of (and information about) memory usage. And so on; all told, nearly 1,000 changes have been merged (as of this writing) since the 2.6.34-rc1 release.

When the final SCSI pull request came along, though, Linus found his moment to draw a line in the sand. Linus, it seems, is getting a little tired of what he sees as last-minute behavior from some subsystem maintainers:

I've told people before. The merge window is for _merging_, not for doing development. If you send me something the last day, then there is no "window" any more. And it is _really_ annoying to have fifty pull requests on the last day. I'm not going to take it any more.

So, Linus says, he plans to be even more unpredictable in the future. Evidently determinism in this part of the process leads to behavior he doesn't like, so, in the future, developers won't really be able to know how long the merge window will be. In such an environment, most subsystem maintainers will end up working as if the merge window had been reduced to a single week - an idea which had been discussed and rejected at the 2009 Kernel Summit.

Comments (6 posted)

Big reader locks

By Jonathan Corbet
March 16, 2010
Nick Piggin's VFS scalability patches have been a work in progress for some time - as is often the case for this sort of low-level, performance-oriented work. Recently, Nick has begun to break the patch set into smaller pieces, each of which solves one part of the problem and each of which can be considered independently. One of those pieces introduces an interesting new mutual exclusion mechanism called the big reader lock, or "brlock."

Readers of the patch can be forgiven for wondering what is going on; anything which combines tricky locking and 30-line preprocessor macros is going to raise eyebrows. But the core concept here is simple: a brlock tries to make read-only locking as fast as possible through the creation of a per-CPU array of spinlocks. Whenever a CPU needs to acquire the lock for read-only access, it takes its own dedicated lock. So read-locking is entirely CPU-local, involving no cache line bouncing. Since contention for a per-CPU spinlock should really be zero, this lock will be fast.

Life gets a little uglier when the lock must be acquired for write access. In short: the unlucky CPU must go through the entire array, acquiring every CPU's spinlock. So, on a 64-processor system, 64 locks must be acquired. That will not be fast, even if none of the locks are contended. So this kind of lock should be used rarely, and only in cases where read-only use predominates by a large margin.
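
To make the mechanism concrete, here is a much-simplified sketch of the idea in kernel style. The real patch hides everything behind preprocessor macros and adds lockdep annotations (acquiring many locks of one class, as br_write_lock() does here, needs them), so this is illustrative rather than the actual interface.

    #include <linux/spinlock.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>

    /* One spinlock per CPU; each must be spin_lock_init()ed at setup time. */
    static DEFINE_PER_CPU(spinlock_t, brlocks);

    static void br_read_lock(void)
    {
        /* get_cpu() disables preemption so we pick *our* CPU's lock;
         * the held spinlock then keeps us pinned, making put_cpu() safe. */
        spin_lock(&per_cpu(brlocks, get_cpu()));
        put_cpu();
    }

    static void br_read_unlock(void)
    {
        /* smp_processor_id() is valid here: preemption stays disabled
         * while the lock is held. */
        spin_unlock(&per_cpu(brlocks, smp_processor_id()));
    }

    static void br_write_lock(void)
    {
        int cpu;

        /* The expensive path: take every CPU's lock, excluding all readers. */
        for_each_possible_cpu(cpu)
            spin_lock(&per_cpu(brlocks, cpu));
    }

    static void br_write_unlock(void)
    {
        int cpu;

        for_each_possible_cpu(cpu)
            spin_unlock(&per_cpu(brlocks, cpu));
    }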

One such case - the target for this new lock - is vfsmount_lock, which is required (for read access) in pathname lookup operations. Lookups are frequent events, and are clearly performance-critical. On the other hand, write access is only needed when filesystems are being mounted or unmounted - a much rarer occurrence. So a brlock is a good fit here, and one small piece (out of many) of the VFS scalability puzzle has been put into place.

Comments (4 posted)

Kernel development news

Who let the hogs out?

By Jonathan Corbet
March 16, 2010
As a normal rule of business, the kernel tries to avoid using more system resources than are absolutely necessary; system time is better spent running user-space programs. So Tejun Heo's cpuhog patch may come across as a little surprising; it creates a mechanism by which the kernel can monopolize one or more CPUs with high-priority processes doing nothing. But there is a good reason behind this patch set; it should even improve performance in some situations.

Suppose you wanted to take over one or more CPUs on the system. The first step is to establish a hog function:

    #include <linux/cpuhog.h>

    typedef int (*cpuhog_fn_t)(void *arg);

When hog time comes, this function will be called at the highest possible priority. If the intent is truly to hog the CPU, the function should probably spin in a tight loop. But one should take care to ensure that this loop will end at some point; one does not normally want to take the CPU out of commission permanently.

The monopolization of processors is done with any of:

    int hog_one_cpu(unsigned int cpu, cpuhog_fn_t fn, void *arg);
    void hog_one_cpu_nowait(unsigned int cpu, cpuhog_fn_t fn, void *arg,
			    struct cpuhog_work *work_buf);
    int hog_cpus(const struct cpumask *cpumask, cpuhog_fn_t fn, void *arg);
    int try_hog_cpus(const struct cpumask *cpumask, cpuhog_fn_t fn, void *arg);

A call to hog_one_cpu() will cause the given fn() to be run on cpu in full hog mode; the calling process will wait until fn() returns, at which point the return value from fn() will be passed back. Should there be other useful work to do (on a different CPU, one assumes), hog_one_cpu_nowait() can be called instead; it will return immediately, while fn() may still be running. The work_buf structure must be allocated by the caller and be unused, but the caller need not worry about it beyond that.

Sometimes, total control over one CPU is not enough; in that case, hog_cpus() can be called to run fn() simultaneously on all CPUs indicated by cpumask. The try_hog_cpus() variant is similar, but, unlike hog_cpus(), it will not wait if somebody else got in and started hogging CPUs first.
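
As a hypothetical usage example (the hog function and the ten-millisecond timeout are invented for illustration, and assume the API as described above), a caller wanting CPU 2 to itself for a short period might write:

    #include <linux/cpuhog.h>
    #include <linux/jiffies.h>
    #include <linux/kernel.h>

    /* Spin at top priority for roughly 10ms, then give the CPU back. */
    static int ten_ms_hog(void *arg)
    {
        unsigned long end = jiffies + msecs_to_jiffies(10);

        while (time_before(jiffies, end))
            cpu_relax();
        return 0;
    }

    static void hog_cpu2_briefly(void)
    {
        /* Runs ten_ms_hog() on CPU 2 and waits for it to finish. */
        int ret = hog_one_cpu(2, ten_ms_hog, NULL);

        pr_info("hog returned %d\n", ret);
    }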

So what might one use this mechanism for? One possibility is stop_machine(), which is called to ensure that absolutely nothing of interest is happening anywhere in the system for a while. Calls to stop_machine() usually happen when fundamental changes are being made to the system - examples include the insertion of dynamic probes, loading of kernel modules, or the removal of CPUs. It has always worked in the same way as the CPU hog functions do - by running a high-priority thread on each processor.

The new stop_machine() implementation, naturally, uses hog_cpus(). Unlike the previous implementation, though (which used workqueues), the new code takes advantage of the CPU hog threads which already exist. That eliminates a performance bug reported by Dimitri Sivanich, whereby the amount of time required to boot a system would be doubled by the extra overhead of various stop_machine() calls.

Another use for this facility is to force all CPUs to quickly go through the scheduler; that can be useful if the system wants to force a transition to a new read-copy-update grace period. Formerly, this task was bundled into the migration thread, which already runs on each CPU, in a bit of an awkward way; now it's a straightforward CPU hog call.

The migration thread itself is also a user of the single-CPU hogging function. This thread comes into play when the system wants to migrate a process which is running on a given CPU. The first thing that needs to happen is to force that process out of the CPU - a job for which the CPU hog is well suited. Once the hog has taken over the CPU, the just-displaced process can be moved to its new home.

The end result is the removal of a fair amount of code, a cleaned-up migration thread implementation, and improved performance in stop_machine(). Some concerns were raised that passing a blocking function as a CPU hog could create problems in some situations. But blocking in a CPU hog seems like an inherently contradictory thing to do; one assumes that the usual response will be "don't do that". And, in fact, version 2 of the patch disallows sleeping in hog functions. Of course, the "don't do that" response will also apply to most uses of CPU hogs in general; taking over processors in the kernel is still considered to be an antisocial thing to do most of the time.

Comments (none posted)

Huge pages part 4: benchmarking with huge pages

March 17, 2010

This article was contributed by Mel Gorman

[Editor's note: this is part 4 of Mel Gorman's series on support for huge pages in Linux. Parts 1, 2, and 3 are available for those who have not read them yet.]

In this installment, a small number of benchmarks are configured to use huge pages - STREAM, sysbench, SpecCPU 2006 and SpecJVM. In doing so, we show that utilising huge pages is a lot easier than in the past. In all cases, there is a heavy reliance on hugeadm to simplify the machine configuration and on hugectl to configure libhugetlbfs.

STREAM is a memory-intensive benchmark and, while its reference pattern has poor spatial and temporal locality, it can benefit from reduced TLB references. Sysbench is a simple OnLine Transaction Processing (OLTP) benchmark that can use Oracle, MySQL, or PostgreSQL as database backends. While there are better OLTP benchmarks out there, Sysbench is very simple to set up and reasonable for illustration. SpecCPU 2006 is a computational benchmark of interest to high-performance computing (HPC) and SpecJVM benchmarks basic classes of Java applications.

1 Machine Configuration

The machine used for this study is a Terrasoft Powerstation described in the table below.

Architecture:          PPC64
CPU:                   PPC970MP with Altivec
CPU Frequency:         2.5GHz
# Physical CPUs:       2 (4 cores)
L1 Cache per core:     32K Data, 64K Instruction
L2 Cache per core:     1024K Unified
L3 Cache per socket:   N/a
Main Memory:           8 GB
Mainboard:             Machine model specific
Superpage Size:        16MB
Machine Model:         Terrasoft Powerstation

Configuring the system for use with huge pages was a simple matter of performing the following commands.

    $ hugeadm --create-global-mounts
    $ hugeadm --pool-pages-max DEFAULT:8G 
    $ hugeadm --set-recommended-min_free_kbytes
    $ hugeadm --set-recommended-shmmax
    $ hugeadm --pool-pages-min DEFAULT:2048MB
    $ hugeadm --pool-pages-max DEFAULT:8192MB

2 STREAM

STREAM [mccalpin07] is a synthetic memory bandwidth benchmark that measures the performance of four long vector operations: Copy, Scale, Add, and Triad. It can be used to calculate the number of floating point operations that can be performed during the time for the “average” memory access. Simplistically, more bandwidth is better.

The C version of the benchmark was selected and used three statically allocated arrays for calculations. Modified versions of the benchmark using malloc() and get_hugepage_region() were found to have similar performance characteristics.
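
For reference, such a modified allocation using the libhugetlbfs API covered in part 3 might look like the sketch below; the array size is arbitrary and the STREAM kernels are reduced to a single Copy loop (link with -lhugetlbfs).

    #include <stdio.h>
    #include <stdlib.h>
    #include <hugetlbfs.h>

    #define N 2000000    /* arbitrary array length for illustration */

    int main(void)
    {
        /* GHR_DEFAULT falls back to base pages if no huge pages are free. */
        double *a = get_hugepage_region(2 * N * sizeof(double), GHR_DEFAULT);
        double *c;
        long i;

        if (a == NULL) {
            perror("get_hugepage_region");
            return EXIT_FAILURE;
        }
        c = a + N;

        for (i = 0; i < N; i++)
            a[i] = 1.0;
        for (i = 0; i < N; i++)    /* the STREAM Copy operation */
            c[i] = a[i];

        free_hugepage_region(a);
        return EXIT_SUCCESS;
    }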

The benchmark has two parameters: N, the size of the array, and OFFSET, the number of elements padding the end of the array. A range of values for N were used to generate workloads between 128K and 3GB in size. For each size of N chosen, the benchmark was run 10 times and an average taken. The benchmark is sensitive to cache placement and optimal layout varies between architectures; where the standard deviation of 10 iterations exceeded 5% of the throughput, OFFSET was increased to add one cache-line of padding between the arrays and the benchmark for that value of N was rerun. High standard deviations were only observed when the total working set was around the size of the L1, L2 or all caches combined.

The benchmark avoids data re-use, be it in registers or in the cache. Hence, benefits from huge pages would be due to fewer faults, a slight reduction in TLB misses as fewer TLB entries are needed for the working set and an increase in available cache as less translation information needs to be stored.

To use huge pages, the benchmark was first compiled with the libhugetlbfs ld wrapper to align the text and data sections to a huge page boundary [libhtlb09] such as in the following example.

   $ gcc -DN=1864135 -DOFFSET=0 -O2 -m64                     \
        -B /usr/share/libhugetlbfs -Wl,--hugetlbfs-align     \
        -Wl,--library-path=/usr/lib                          \
        -Wl,--library-path=/usr/lib64                        \
        -lhugetlbfs stream.c                                 \
        -o stream

   # Test launch of benchmark
   $ hugectl --text --data --no-preload ./stream	

[STREAM benchmark result]

This page contains plots showing the performance results for a range of sizes running on the test machine; one of them appears to the right. Performance improvements range from 11.6% to 16.59% depending on the operation in use. Performance improvements would typically be lower for an X86 or X86-64 machine, likely in the 0% to 4% range.

3 SysBench

SysBench is an OnLine Transaction Processing (OLTP) benchmark representing a general class of workload where clients perform a sequence of operations whose end result must appear to be an indivisible operation. TPC-C is considered an industry standard for the evaluation of OLTP, but it requires significant capital investment and is extremely complex to set up. SysBench is a system performance benchmark comprising file I/O, scheduler, memory allocation, and threading tests, and includes an OLTP benchmark. The setup requirements are less complicated, and SysBench works with MySQL, PostgreSQL, and Oracle databases.

PostgreSQL was used for this experiment on the grounds that it uses a shared memory segment similar to Oracle, making it a meaningful comparison with a commercial database server. SysBench 0.4.12 and PostgreSQL 8.4.0 were built from source.

Postgres was configured to use a 756MB shared buffer and an effective cache size of 150MB; a maximum of 6*NR_CPUs clients was allowed to connect. Note that the maximum number of clients allowed is greater than the number of clients used in the test: a typical configuration would allow more connections than the expected number of clients so that administrative processes can connect. The update_process_title parameter was turned off as a small optimisation. Options that checkpoint, fsync, log, or synchronise were turned off to avoid interference from I/O. The system was configured to allow the postgres user to use huge pages with shmget() as described in part 3. Postgres uses System V shared memory, so pg_ctl was invoked as follows.

   $ hugectl --shm bin/pg_ctl -D `pwd`/data -l logfile start
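To confirm that the resulting segment is in fact backed by huge pages, the System V segment list and the huge page counters can be consulted after startup (output elided):

   $ ipcs -m                        # the Postgres shared segment should be listed
   $ grep HugePages /proc/meminfo   # reserved/used counts should reflect it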

For the test itself, the table size was 10 million rows, read-only to avoid I/O, and the test type was “complex”, meaning that each operation by the client is a database transaction. Tests were run varying the number of clients accessing the database from one to four times the number of CPU cores in the system. For each thread count, the test was run multiple times until at least five iterations completed with a confidence level of 99% that the estimated mean is within 2% of the true mean. In practice, the initial iteration gets discarded due to the increased I/O and faults incurred during the first run.
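A representative pair of sysbench invocations matching that description (flag names from the sysbench 0.4 series; the thread count shown is illustrative, as it was varied during the tests) would look something like:

   $ sysbench --test=oltp --db-driver=pgsql \
         --oltp-table-size=10000000 prepare
   $ sysbench --test=oltp --db-driver=pgsql \
         --oltp-table-size=10000000 --oltp-read-only=on \
         --oltp-test-mode=complex --num-threads=8 run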

[SysBench benchmark result] The plot to the right (click for larger version) shows the performance results for different numbers of threads, with performance improvements ranging from 1% to 3.5%. Unlike STREAM, the performance improvements would tend to be similar on X86 and X86-64 machines running this particular test configuration. The exact reasoning is beyond the scope of this article, but it comes down to the fact that STREAM exhibits very poor locality of reference, making cache behaviour a significant factor in the performance of the workload. As workloads would typically have a greater degree of reference locality than STREAM, the expectation is that performance gains across different architectures would be similar.

4 SpecCPU 2006

SpecCPU 2006 v1.1 is a standardised CPU-intensive benchmark, used in evaluations for HPC, that also stresses the memory subsystem. A --reportable run was made comprising the “test”, “train”, and three “ref” sets of input data. Three sets of runs compare base pages, huge pages backing just the heap, and huge pages backing the text, data, and heap. Only base tuning was used, with no special compile options other than what was required to compile the tests.

To back the heap using huge pages, the tests were run with:

    hugectl --heap runspec ...

To also back the text and data, the SPEC configuration file was modified to build the suite with the libhugetlbfs alignment options, similar to STREAM above; the --text --data --bss switches were then also specified to hugectl.
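In other words, the fully-backed runs would be launched with something like the following (the runspec arguments are elided as above):

    $ hugectl --heap --text --data --bss runspec ...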

[SpecCPU benchmark result] This plot shows the performance results running the integer SpecCPU tests (click for full size and the floating-point test results). As is clear, there are very large fluctuations depending on the reference pattern of the workload, but many of the improvements are quite significant, averaging around 13% for the integer benchmarks and 7-8% for the floating-point operations. An interesting point to note is that, for the Fortran applications, performance gains were similar whether the text and data were backed or just the heap. This strongly implies that those applications were using dynamic allocation; on older Fortran applications, which tend to allocate statically, relinking to back the text and data with huge pages may be required to see any performance gains.

5 SpecJVM (JVM/General)

Java is used in an increasing number of scenarios, including real-time systems, and it dominates the execution of business-logic applications. Particularly within application servers, the Java Virtual Machine (JVM) uses large quantities of virtual address space that can benefit from being backed by huge pages. SpecJVM 2008 is a benchmark suite for Java Runtime Environments (JRE). According to the documentation, the intention is to reflect the performance of the processor and memory system with a low dependence on file or network I/O. Crucially for HPC, it includes SCIMark, a Java benchmark for scientific and numerical computing.

The 64-bit version of IBM Java Standard Edition Version 6 SP 3 was used, but support for huge pages is available in other JVMs. The JVM was configured to use a maximum of 756MB for the heap. Unlike the other benchmarks, the JVM is huge-page-aware and uses huge-page-backed shared memory segments when -Xlp is specified. An example invocation of the benchmark is as follows.

   $ java -Xlp -Xmx756m -jar SPECjvm2008.jar 120 300 --parseJvmArgs -i 1 --peak

[SpecJVM benchmark result] This plot shows the performance results running the full range of SpecJVM tests. The results are interesting in that the performance gains were not universal, with the serial benchmark faring spectacularly poorly. Despite this, performance was improved by 4.43% on average, with very minimal work required on the part of the administrator.

6 Summary

In this installment, it was shown that, with a minimal amount of additional work, huge pages can easily be used to improve benchmark performance. For the database and JVM benchmarks, the same configurations could just as easily be applied to a real-world deployment as to a benchmarking situation. For the other benchmarks, the effort can be hidden with minimal use of initialisation scripts. Using huge pages on Linux was a tricky affair in the past, but these examples show that this is no longer the case.

Comments (1 posted)

A critical look at sysfs attribute values

March 17, 2010

This article was contributed by Neil Brown

One of the many memorable lines from Douglas Adams's famous work The Hitchhiker's Guide to the Galaxy was the accusation, probably leveled by supporters of the Encyclopedia Galactica, that the Hitchhiker's Guide was "unevenly edited" and "contains many passages which simply seemed to its editors like a good idea at the time." With small modifications, such as replacing "edited" with "reviewed", this description seems very relevant to the Linux kernel, and undoubtedly many other bodies of software, whether open or closed, free or proprietary. Review is at best "uneven".

It isn't hard to find complaints that the code in the Linux kernel isn't being reviewed enough, or that we need more reviewers. The creation of tags like "Reviewed-by" for patches was in part an attempt to address this by giving more credit to reviewers and thereby encouraging more people to get involved in that role.

However, one can equally well find complaints about too much review, where developers cannot make progress with some feature because, every time they post a revision, someone new complains about something else and so, in the pursuit of perfection, the good is lost. Similarly, though it does not seem to be a problem lately, there have been times when lots of review would simply result in complaints about white-space inconsistency and spelling mistakes -- things that are worth correcting, but not worth burying a valuable contribution under.

Finding the right topic, the right level, and the right forum for review is not easy (and finding the time can be even harder). This article doesn't propose to address those questions directly, but rather to present a sample of review - a particular topic at a particular level on a particular forum, in the hope that it will be useful. The topic chosen, largely because it is something that your author has needed to work with lately without completely understanding, is "sysfs", the virtual filesystem that provides access to some of the internals of the Linux kernel. And in particular, the attribute files that expose the fine detail of that access.

The level chosen is a high-level or holistic view, asking whether the implementation matches the goals, and at the same time asking whether the goals are appropriate. And the forum is clearly the present publication.

Sysfs and attribute files

Sysfs has an interesting history and a number of design goals, both of which are worth understanding, but neither of which will be examined here except in as much as they reflect specifically the chosen topic: attribute files. The key design goal relating to attribute files is the stipulation - almost a mantra - of "one file, one value" or sometimes "one item per file". The idea here is that each attribute file should contain precisely one value. If multiple values are needed, then multiple files should be used.

A significant part of the history behind this stipulation is the experience of "procfs" or /proc. /proc is a beautiful idea that unfortunately grew in an almost cancerous way to become widely despised. It is a virtual filesystem that originally had one directory for each process that was running, and that directory contained useful information about the running process in various files.

There is clearly more than just processes that could usefully be put in a virtual filesystem and, with no clear reason to the contrary, things started being added to procfs. With no real design or structure, more and more information was shoe-horned into procfs until it became an unorganised mess. Even inside the per-process directories, procfs isn't a pretty sight. Some files (e.g. limits) contain tables with column headers, others (e.g. mounts) have tables without headers, and still others (e.g. status) have labeled rows rather than columns. Some files have single values (e.g. wchan) while others have lots of assorted and inconsistently formatted values (e.g. mountstats).

Against this background of disorganisation and the attendant difficulty of adding new fields without breaking applications, sysfs was declared to have a new policy - one item per file. In fact, in his excellent (though now somewhat out-dated) article on the Driver Model Core, Greg Kroah-Hartman even asserted that this rule was "enforced" (see the side bar on "sysfs").

It would not be fair to hold Greg accountable for what could have been a throw-away line from years ago, and I don't wish to do that. However, that comment serves well in providing a starting point and a focus for reviewing the usage of attribute files in sysfs. We can ask if the rule really is being enforced, whether the rule is sufficient to avoid past mistakes, and whether the rule even makes sense in all cases.

As you might guess, the answers will be "no", "no" and "no", but the explanations are far more enlightening than the answers.

Is it enforced?

The best way to test whether the rule has been enforced is to survey the contents of sysfs - do files contain simple values, or something more? As a very rough assessment of the complexity of the contents of sysfs attribute files, we can issue a simple command:

 find /sys -mount -type f | xargs wc -w | grep -v ' total$'

to get a count of the number of words in each attribute file (the "-mount" is important if you have /sys/kernel/debug mounted, as reading things in there can cause problems).

Processing these results from your author's (Linux 2.6.32) notebook shows that of the 9254 files, 1189 are empty and 7168 have only one word. It seems reasonable to assume these represent only one value (though many of the empty files are probably write-only and this mechanism gives no information about what value or values can be written). This leaves 897 (nearly 10%) which need further examination. They range from two words (487 cases) to 297 words (one case).
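For the curious, the bucketing is easy to reproduce; a pipeline along these lines (not necessarily the author's actual script) prints the empty, single-word, and multi-word counts:

    $ find /sys -mount -type f | xargs wc -w | grep -v ' total$' | \
          awk '{ if ($1 == 0) e++; else if ($1 == 1) s++; else m++ }
               END { print e, s, m }'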

While there are nearly 900 files, there are fewer than 100 base names. If we filter out some common patterns (e.g. gpe%X), the number of distinct attributes is closer to 62, which is a number that can reasonably be examined manually (with a little help from some scripting). Several of these multi-word attribute files contain non-ASCII data and so are almost certainly single values in some reasonable sense. Others contain strings in which a space is a legal character, such as "Dell Inc.", "i8042 KBD port" or "write back". So they clearly are not aberrations from the rule.

There is a small class of files where the single item stored in the file is of an enumerated type. It is common for the file in these cases to contain all of the possible values listed, which still seems to hold true to the "one item per file" rule. However, there are three variations on this theme:

  • In some cases, such as the "queue/scheduler" attribute of a block device, or the "trigger" attribute of an LED device, all of the possible options are listed, and the currently active one is enclosed in brackets, thus:
       noop anticipatory deadline [cfq]
    

  • In the second variation there are two files, one which contains the list of possibilities, as with "cpufreq/scaling_available_governors" and one which contains the currently-selected value, "cpufreq/scaling_governor".

  • Finally, and this could be just a special case of one of the above, we have "/sys/power/state" for which there is no current value, so it just contains a list of the possible values.

These are all examples of attribute files that do clearly contain just one value or item, but happen to use multiple words in various ways to describe those values. They are false positives of our simplistic tool for finding complex attribute values.
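The first variation has the pleasant property that reading and writing use the same vocabulary; for a block device, selecting a different I/O scheduler is just a matter of writing one of the listed names back (device name illustrative):

    $ cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]
    # echo deadline > /sys/block/sda/queue/scheduler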

However, there are other multi-word attribute files that are not so easily explained away. /sys/class/bluetooth contains some class attributes such as rfcomm, l2cap and sco. Each of these contains structured data, one record per line with three to nine different data items per record (depending on the particular file), the first looking rather like the BD address of a local Bluetooth interface.

This appears to be a clear violation of the "one item per file" policy. The files do appear to be very well structured and easy to parse, so it is tempting to think that they should be safe enough. However, sysfs attribute files are limited in size to one page - typically 4KB. If the number of entries in these files ever gets too large (about 70 lines in the l2cap file), accesses to the file will start corrupting memory or crashing. Hopefully that will never happen, but "hope" is not normally an acceptable basis for good engineering. From a conversation with the bluetooth maintainer, it appears that there are plans to move these files to "debugfs", where they can benefit from the "seq_file" implementation, also used widely in /proc, which allows arbitrarily large files; a sketch of that pattern appears below.
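As a rough illustration of why seq_file helps, here is a minimal sketch of the pattern using the single_open() helper (names hypothetical); the buffer is grown and the show function retried as needed, so output is not limited to a single page:

    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>

    static int demo_show(struct seq_file *m, void *v)
    {
        int i;

        /* Emit as many records as needed; seq_file handles buffering. */
        for (i = 0; i < 1000; i++)
            seq_printf(m, "record %d\n", i);
        return 0;
    }

    static int demo_open(struct inode *inode, struct file *file)
    {
        return single_open(file, demo_show, NULL);
    }

    static const struct file_operations demo_fops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
    };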

Some other examples include "/sys/devices/system/node/node0/meminfo" which appears to be a per-node version of "/proc/meminfo" and is clearly multiple values, and the "options" attributes in /sys/devices/pnp*/* which appear to contain exactly the sort of ad hoc formatting of multiple values of multiple types that people find so unacceptable in /proc. The pnp "resources" files are similarly multi-valued, though to a lesser extent.

As a final example of a lack of enforcement, the PCI device directory for the (Intel 3945) wireless network in this notebook contains a file called "statistics" which contains a hex dump of 240 bytes of data, complete with ASCII decoding at the end of each line such as:

02 00 03 00 d9 05 00 00 28 03 00 00 45 02 00 00  ........(...E...
0d 00 00 00 00 00 00 00 00 00 00 00 d6 00 00 00  ................
b1 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00 00 00 00 00 00 00 00 67 00 00 00 00 00 00 00  ........g.......

This is surely not the sort of thing that sysfs was intended to report. If anything, this looks like it should be a binary attribute, not a doubly-encoded ASCII file.

So to answer our opening question, "no", the one item per file rule is not enforced in any meaningful way. Certainly the vast majority of attribute files do contain just one item, and that is good. But there are a number which contain multiple values in a variety of different ways. And this number is only likely to grow as people either copy the current bad examples or find new use cases that don't seem to fit the existing patterns and so invent new approaches which don't take the holistic view into account.

Is the rule sufficient?

Our next question is whether the stated rule for sysfs attributes is sufficient to avoid an increasingly unorganised and ad hoc sysfs following the unfortunate path of procfs. We have already seen at least one case where it isn't. We do not have a standardised way of representing an enumerated type in a sysfs attribute, and so we have at least two implementations, as already mentioned. There is at least one more implementation (exposed in the "md/level" attribute of md/raid devices) where just the current value is visible and the various options are not. Having a standard here would be good for consistency and would encourage optimal functionality. But we have no standard.

A similar issue arises with simple numerical values that represent measurable items such as storage size or time. It would be nice if these were reported using standard units, probably bytes and seconds. But we find that this is not the case. Amounts of storage are sometimes reported as bytes (/sys/devices/system/memory/block_size_bytes), sometimes as sectors (/sys/class/block/*/size), and sometimes as kilobytes (block/*/queue/read_ahead_kb).

As these particular examples show, one way to avoid ambiguity is to include the name of the units (bytes or kb here) as part of the attribute name, a practice known as Hungarian notation. However, this is far from uniformly applied, with the examples given above being more the exception than the rule.

Measures of duration face the same problem. Many of the times that the kernel needs to know about are substantially less than one second. However, rather than use the tried-and-true decimal point notation for sub-unit values, some attribute files report in milliseconds (unload_heads in libata devices), some in microseconds (cpuidle/state*/time), and some are even in seconds (/sys/class/firmware/timeout). As an extra confusion, there are some (.../bridge/hello_time) which use a unit that varies depending on the architecture, from centiseconds to mibiseconds (if that is a valid name for a 1/1024th part of a second). It is probably fortunate that there is no metric/imperial difference in units for time, else we would probably find both of those represented too.

And then there are truth values: On, on, 1, Off, off, 0.

So it would seem that the answer to our second question is "no" too, though it is harder to be positive about this as there is no clearly stated goal that we can measure against. If the goal is to have a high degree of uniformity in the representation of values in attributes, then we clearly don't meet that goal.

Does the requirement always make sense?

So the guiding principle of one item per file is not uniformly enforced, and it isn't really enough to avoid needless inconsistencies, but were it to be uniformly applied, would it really give us what we want, or is it too simplistic or too vague to be useful as a strict rule?

A good place to start exploring this question is the "capabilities/key" attribute of "input" devices. The content of this file is a bitmap listing which key-press events the input device can possibly generate. The bitmap is presented in hexadecimal with a space every 64 bits. Clearly this is a single value - a bitmap - but it is also an array of bits. Or maybe an array of "long"s. Does that make it multiple values in a single attribute?

While that is a trivial example which we surely would all accept as being a single value despite being many bits long, it isn't hard to find examples that aren't quite as clear cut. Every block device has an attribute called "inflight" which contains two numbers: the number of read requests that are in flight (submitted, but not yet completed) and the number of write requests that are in flight. Is this a single array, like the bitmap, or two separate values? There would be little cost to having implemented "inflight" as two separate attributes, thus clearly following the rule, but maybe there would be little value either.

The "cpufreq/stats/time_in_state" attribute goes one step further. It contains pairs, one per line, of CPU frequencies (pleasingly in HZ) and the total time spent at that frequency (unfortunately in microseconds). This it is more of a dictionary than an array. On reflection, this is really the same as the previous two examples. For both "key" and "inflight" the key is an enumerated type that just happens to be mapped to a zero-based sequence of integers. So in each case we see a dictionary. In this last case the keys are explicit rather than implicit.

If we contrast this last example with the "statistics" directory in any "net" device (net/*/statistics) we see that it is quite possible to put individual statistics in individual files. Were these 23 different values put into one file, one per line with labels, it is unlikely that anyone would accept that there was just one item in that file.

So the question here is: where do we draw the line? In each of these four cases (capabilities/key, inflight, time_in_state, statistics) we have a 'dictionary' mapping from an enumerated type to a scalar value. In the first case the scalar value is a truth value represented by a single bit; in the others the scalar is an integer. The size of the dictionary ranges from two, to 23, to several hundred for "capabilities/key". Is it rational to draw a line based on the size of the dictionary, or on the size of the value? Or should it be left to the developer - a direction that usually produces disastrous results for uniformity?

The implication of these explorations seems to be that we must allow structured data to be stored in attributes, as there is no clear line between structured and non-structured data. "One item per file" is a great heuristic that guides us well most of the time but, as we have seen, there are numerous times where developers find that it is not suitable and so deviate from the rule with a disheartening lack of consistency.

It could even be that the firmly stated rule has a negative effect here. Faced with a strong belief that a collection of numbers really forms a single attribute, and the strongly stated rule that multi-valued attributes are not allowed, the path of least resistance is often to quietly implement a multi-valued attribute without telling anyone. There is a reasonable chance that such code will not get reviewed until it is too late to make a change. This can lead multiple developers to solve the same problem in different ways, thus exacerbating a problem that the rule was intended to avoid.

So to answer our third question, "no", the "one item per file" rule doesn't always make sense because it isn't always clear what "one item" is, and those places of uncertainty are holes for chaos to creep into our kernel.

Can we do better?

A review that finds problems without even suggesting a fix is a poor review indeed. The above identifies a number of problems; here, we at least discuss solutions.

The problem of existing attributes that are inappropriately complex or inconsistent in their formatting does not permit a quick fix. We cannot just change the format. At best we could provide new ways to access the same information, and then deprecate the old attributes. It is often stated that once something enters the kernel-userspace interface (which includes all of sysfs) it cannot be changed. However the existence of CONFIG_SYSFS_DEPRECATED_V2 disproves this claim. A policy that permits and supports deprecation and removal of sysfs attributes on an on-going basis may cause some pain but would be of long-term benefit to the kernel, especially if we expect our grandchildren to continue developing Linux.

The problem that there is a clear need for structured data in sysfs attributes is probably best addressed by providing for it rather than ignoring or refuting it. Creating a format for representing arbitrarily structured data is not hard. Agreeing on one is much more of a challenge. XML has been enthusiastically suggested and vehemently opposed. Something more akin to the structure initialisations in C might be more pleasing to kernel developers (who already know C).
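To make the idea concrete, a structured attribute in a C-initialisation style might look something like the following. This is a purely hypothetical format, not anything currently implemented, with illustrative field names and values:

    time_in_state = {
        { .frequency = 1000000, .time = 52340 },
        { .frequency = 2000000, .time =  3411 },
    };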

Your author is currently pondering how best to communicate a list of "known bad blocks" on devices in a RAID between kernel and userspace. sysfs is the obvious place to manage the data, but one file per block would be silly, and a single file listing all bad blocks would hit the one-page maximum at about 300-400 entries, which is many fewer than we want to support. Having support for structured sysfs attributes would help a lot here.

The final problem is how to enforce whatever rules we do come up with. Even with a very simple rule that is easily and often repeated and is heard by many, knowing the rule is not enough to cause people to follow the rule. This we have just seen.

The implementation of sysfs attribute files allows each developer to provide an arbitrary text string which is then included in the sysfs file for them. This incredible flexibility is a great temptation to variety rather than uniformity. While it may not be possible to remove that implementation, it could be beneficial to make it a lot easier to build sysfs attributes of particular well-supported types: for example duration, temperature, switch, enum, storage-size, brightness, dictionary, etc. We already have a pattern for this in that module parameters are much easier to define when they are of a particular type, as can be seen when exploring include/linux/moduleparam.h. The moduleparam implementation focuses more on basic types such as int, short, and long; for sysfs we are more interested in higher-level types, but the concept is the same.
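To see the contrast, consider a minimal sketch (a hypothetical driver value, not code from any real driver): the raw sysfs attribute hands the developer a buffer to format however they like, while the module parameter picks up a standard representation from its declared type:

    #include <linux/kernel.h>
    #include <linux/device.h>
    #include <linux/moduleparam.h>

    static int foo_delay;	/* hypothetical value exposed to userspace */

    /* Raw sysfs attribute: free choice of units and layout. */
    static ssize_t delay_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
    {
        return sprintf(buf, "%d\n", foo_delay);
    }
    static DEVICE_ATTR(delay, 0444, delay_show, NULL);

    /* Module parameter: formatting comes from the declared type. */
    module_param(foo_delay, int, 0644);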

If most of sysfs were converted over to using an interface that enforces standardised appearance, it would become fairly easy to find non-standard attributes and then either challenge them, or enhance the standard interface to support them.

In Closing

It must be said that hindsight gives much clearer vision than foresight. It is easy to see these issues in retrospect, but would have been harder to be ready to guard against them from the start. While sysfs could possibly have had a better design, it could certainly have had a worse one. Creating imperfect solutions and then needing to fix them is an acknowledged part of the continuous development approach we use in the Linux kernel.

For entirely internal subsystems, we can and do fix things regularly without any concern for legacy support. For external interfaces, fixing things isn't as easy. We need to either carry unsightly baggage around indefinitely or work to remove that which doesn't work and encourage the creation only of that which does. Is it wrong to dream that our grandchildren might work with a uniform and consistent /sys and maybe even a /proc which only contains processes?

Comments (45 posted)

Patches and updates

Kernel trees

Greg KH Linux 2.6.33.1 ?
Thomas Gleixner 2.6.33-rt6 ?
Greg KH Linux 2.6.32.10 ?

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

florian@mickler.org rfkill sysfs ABI ?

Security-related

Serge E. Hallyn Define CAP_SYSLOG ?

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

News and Editorials

Elive 2.0: Where Debian meets Enlightenment

March 17, 2010

This article was contributed by Koen Vervloesem

After more than two and a half years of development, Elive 2.0 ("Topaz"), a Debian-based live CD with the Enlightenment E17 desktop environment, has been released. This is a major release, bringing Enlightenment lovers up-to-date. Under the hood lies Debian Lenny (5.0.3) with a Linux 2.6.30.9 kernel. Most users won't try Elive just for what it does, but also for how it looks: it combines minimal hardware requirements with style and eye candy. The distribution works on a 100 MHz CPU with 64 MB of RAM, but a 300 MHz CPU with 128 MB RAM is recommended. Installing it requires 2 GB of disk space.

[Default theme]

First a word of warning: Elive is pretty much a one-man show: Samuel "Thanatermesis" F. Baggen works full-time on the distribution. One of the consequences is that users can download the distribution for free, but they have to pay (Elive calls it a "donation") to install it to a hard drive. This makes more sense than the previous policy (which asked for a donation to even download Elive 1.0), but it's not made clear to visitors to the Elive web site: there is no mention of it on the home page or the download page. Even the installer doesn't tell users the full details until they are well into the installation process and get redirected to the payment web page. Only after they have paid at least $15 using PayPal will they receive a code that they have to enter at www.elivecd.org/installer-module. After that, they are sent the (seemingly closed source) "installer module" by email.

Although the Elive project has been asking for donations for years and this could be called "common knowledge", it would be much more honest if the developer told users before the install begins that they have to pay — and how much. In the Complaints section of the Elive forum, one user questions the business model of Elive, and the developer responds: "The donation is forced for the stable version *only*, and for now that's the plans for it...". In the project's FAQ, he explains this rather bluntly:

You know that free has no relation with cost. This payment is required to pay the development of Elive, that is the full time work of the Developer 'Thanatermesis' and also to pay external development and/or services. Think that more money is made and more development can be possible to pay and so, a better final product (Elive). But in any of the cases, you are not obliged to pay for Elive, nobody obliges you to use Elive. Without any cost, Elive would not be the same, at least not with all its features, usefriendly things, and the lot of work involved. By other side, if your problem is that you can't possibly pay for any personal reason, we don't want to prevent anybody from using Elive so we propose alternatives which are described in the payment process.

Users that really don't want to pay can download the free (but purportedly unstable) development version of Elive, although at the moment there isn't a development version. Of course, they can also install plain Debian, then add the Elive repository to their /etc/apt/sources.list and install the Enlightenment packages, but this will likely result in an unstable desktop. Users can also request an invitation code, which is free for those who write an article about Elive or need it in an educational environment.

An idiosyncratic installer

The distribution also comes with its own user-friendly, but somewhat chaotic, installer that has advanced features such as upgrade and migrate modes. The latter allows users to migrate any Linux system to an Elive system: it copies user accounts including their passwords and files along with various configuration files. In the first step, the user is asked to choose from different customization levels: "Auto" (mostly automated), "Easy mode" (asking only a few questions) and "Complete mode" (fully featured). After this, the installer shows a vague message that the user has to make a "small payment" to use Elive.

The partitioning step shows a few options: use the full disk, start gparted or cfdisk, show some information about a RAID setup or do nothing at all on an already partitioned system. After this, the user is asked to obtain the installer module and enter an identifier on the web site. After receiving the module and clicking on OK in the installer window, the module asks the user to enter a security code to be sure the user knows that the installer will erase the disk. Then the installation begins and, though interrupted by a couple of questions, shows a progress bar while installing all packages on the hard disk.

Although it does the job without problems, the installer has an idiosyncratic user interface with lots of windows popping up, and it's not always clear what it is doing. The installation itself doesn't take long, but after the first boot (which shows a nice-looking splash screen and an animated login screen), Elive begins a lengthy and seemingly inefficient post-install process, during which your author saw hald stopping and starting twice and the initramfs being generated eight times. There's still work to do here.

A user-friendly Debian

Elive is more than just Debian with an E17 interface. It adds a lot of tweaks to make a more user-friendly version of its mother distribution. For example, the context menu in the file manager Thunar shows commands to convert music files to Ogg or MP3, as well as commands to convert image files to another resolution. It also offers a lot of functionality out of the box. For example, Firefox is configured with the Flash 10.0 plug-in for YouTube videos and MPlayer browser plug-ins for DivX, QuickTime, RealPlayer 9, and Windows Media Player. Skype is also installed by default. USB sticks are automatically detected and mounted, with an icon placed on the desktop, and DVDs are automatically played. Even the kernel has some extra user-friendly features, such as TuxOnIce for hibernation.

There are different kernels available, and their source can be found in the Elive repository. The source of the Elive-specific applications and modules can be found on the Elive development web site. On a related note, it's not clear to your author how much Elive contributes back upstream to Enlightenment, but Baggen is active on the Enlightenment bug tracker and he is contributing patches.

The distribution has an aptly named nurse mode, which offers recovery and repair features. For example, users can recover the default Elive configuration if they have messed up their settings. It is also able to check whether the system contains all the packages that are installed by default in Elive: if a user has accidentally deleted some crucial packages, some features could be missing. Other things that the nurse mode can do include installing newer or special kernels, freeing space on the disk, and running hardware tests. Also interesting is that it offers to help solve graphical problems by reconfiguring the Xorg configuration or reinstalling graphics drivers.

In order to prevent incompatibility problems with the tweaked Elive desktop when upgrading the Debian base, the distribution doesn't use the official Debian mirrors in /etc/apt/sources.list. From time to time, the project creates a snapshot of the entire Debian repository and mirrors it. This official Elive mirror is used in /etc/apt/sources.list for Debian software, in addition to another repository for Elive-specific software. According to Baggen, the snapshot is updated when a package needs an update for security reasons.

Beauty is in the details

Elive is dressed up with some impressive eye candy that is difficult to find in other distributions. For example, when the login screen appears, the box with the user name and password falls from the top of the screen. The box with the time and date and the shutdown icon each do a walk around the screen before they find their place, while the box where the user chooses the desktop also falls from the top of the screen and then stops at the top left. After this, a lot of words describing Elive appear on the screen. Even if this sounds somewhat over the top, it doesn't get in the way of the user: the login box works right from the start, so the user doesn't have to sit through the entire animation.

The Enlightenment desktop itself is also beautiful. At the top right, there's a pager that leads the user to different virtual desktops, while the bottom right is a notification area with icons for the network, battery, CPU temperature, and so on. At the middle bottom, there's a panel with quick launchers for some applications. Hovering over the launchers makes them grow in size. Minimizing an application's window brings its high-resolution icon to the top left on the desktop.

[Lucax3 theme]

Enlightenment is known for its artful themes, and Elive 2.0 comes with four themes installed. The default "elive" theme comes with a non-intrusive light blue wallpaper that has some subtle twinkling white stars. Another theme, "Lucax3", has more personality: it has a dark blue wallpaper with energetically twinkling stars and black menus with purple arrows. When changing the Enlightenment theme, the user is also invited to choose a matching Gtk+ 2.0 theme. By the way, most users will discover many subtle details in the style only after working with Elive for an extended time. For example, your author saw a scrolling window title in a title bar, but he has only seen it happen twice while writing this review.

Enlightenment is also fully customizable. Click on the wallpaper to open the menu, choose Settings and then Settings Panel to open the extensive Enlightenment settings. Here the user can change the look, the behavior of the windows, input settings, and so on. An interesting feature is that almost all settings windows have a "Basic" and an "Advanced" version. In the default Basic mode, clicking on the Advanced button shows the user more options; clicking on the Basic button shows the basic settings again. In "Extensions - Modules", the user can pick various desktop modules, such as weather forecasts and a battery or CPU frequency monitor, but also more frivolous things like snow, fire, or rain on the desktop — or even walking penguins. However, be aware that some of these modules can be unstable.

Beautiful but disappointing

Elive 2.0 proves again that users can have a nice looking desktop without eating up all their computer's resources. That's mostly thanks to Enlightenment, which is refreshingly different from other desktop environments. The minimal hardware requirements make Elive a contender on netbooks. It's a pity that the commercial purpose of the distribution is covered up. Saying nothing about a requirement to pay on the home page or download page and then requiring it only in the middle of the install is deceptive. On the technical side, the installer and post-install process could use some work too. So all in all, while Elive 2.0 is a really nice showcase of an Enlightenment desktop, it's hard to see it becoming a wildly popular distribution.

Comments (5 posted)

New Releases

Mandriva Enterprise Server 5.1 released

Mandriva has announced the release of Mandriva Enterprise Server 5.1. "MES 5.1 main focus is set on virtualization. MES 5.1 improves integration of KVM technology (Kernel-based Virtual Machine) together with administration toold for a simple management in everyday life."

Comments (none posted)

openSUSE Build Service 1.7.2 Released

The openSUSE Build Service team has announced the availability of OBS 1.7.2. "This release brings beside bug fixes also some new features back ported from master branch. The new features makes the initial setup easier and offers optionally also authentification against a LDAP server."

Comments (none posted)

openSUSE 11.3 Milestone 3 is out

The third of seven scheduled milestone releases for 11.3 is available for testing. "Milestone 3 focuses on using GCC 4.5 as the default compiler, leaving a great deal of the work in the hands of the openSUSE Build Service after a few issues (such as kernel panics) were resolved."

Full Story (comments: none)

openSUSE LXDE and Xfce spins available

The openSUSE community has announced two new spins. Both the LXDE spin and the Xfce spin are available as live CDs.

Comments (none posted)

RC1 for Debian Edu lenny 5.0.4+edu1 released

The first release candidate for Debian-Edu/Skolelinux 5.0.4+edu1 is available for testing. "Please test these images as much as you can and report back feedback. Except for documentation and translation updates, this is intended to become the first point release! So please give this it go!"

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian Project Leader Elections 2010: Candidates

Four candidates have been nominated for Debian Project Leader. They are Stefano Zacchiroli, Wouter Verhelst, Charles Plessy, and Margarita Manterola.

Full Story (comments: none)

Bits from the Release Team: What should go into squeeze?

Philipp Kern covers the status of the squeeze release. There are 400 bugs that need to be fixed before squeeze can be released. "From a current point of view squeeze will release with kernel 2.6.32, eglibc 2.11, Python 2.6, X11R7.5, Gnome 2.30, qt 4.6 and KDE 4.4."

Full Story (comments: none)

Debian release manager Luk Claes resigns

Stating that "It's time to stop thinking I would be able to keep working as Release Manager in this climate", Luk Claes has stepped down from that position. Information about the problematic "climate" is mostly missing from the public lists; about all that can be found is a handful of complaints about transparency. The remainder of the release team is continuing to work toward the Squeeze release.

Full Story (comments: 3)

Bits from an FTP Master

In these bits Joerg Jaspert introduces a new member of the ftpteam, calls for additional volunteers, and includes a todo list. "I'm starting with a call for volunteers but will follow it with a (kind of) todo list which interested people can work on. And, while some of the jobs can only be done by team members, many can be done without joining the team, and a few can even be worked on by people who aren't Debian Developers (yet)."

Full Story (comments: none)

Fedora

Fedora's "stable release updates vision"

The Fedora board has, in response to ongoing discussions about updates to its releases (as covered in the March 11 Weekly Edition), adopted a "vision statement" on how Fedora releases should be maintained. "Stable releases should provide a consistent user experience throughout the lifecycle, and only fix bugs and security issues. Stable releases should not be used for tracking upstream version closely when this is likely to change the user experience beyond fixing bugs and security issues."

Full Story (comments: 26)

Fedora Board Meeting Recap 2010-03-11

Click below for a recap of the March 11, 2010 meeting of the Fedora Advisory Board where the update policy was discussed.

Full Story (comments: none)

Fedora Board SWG 2010-03-15 Meeting Recap

Click below for a recap of the March 15, 2010 meeting of the Fedora Advisory Board Strategic Working Group. Topics include target audience, spins, and the default distribution.

Full Story (comments: none)

Gentoo Linux

Gentoo Foundation Trustees 2010 election

The Gentoo Foundation Trustees election is over. There were 3 people running for 3 slots, therefore all 3 have been elected. The Foundation Trustees for 2010 are Roy Bamford, David Abbott, Joshua Jackson, Robin H Johnson, and Matthew Summers.

Full Story (comments: none)

Ubuntu family

Shuttleworth: 2 year cadence for major releases: some progress

Mark Shuttleworth claims some progress toward his goal of having distributions synchronize their major releases and calls for more distributors to join in. "I think this is a big win for the free software community. Many upstreams have said 'we'd really like to help deliver a great stable release, but which distro should we arrange that around?' Upstreams should not have to play favourites with distributions, and it should be no more work to support 10 distributions as to support one."

Comments (23 posted)

Ubuntu Global Jam: Time To Rock The House

Jono Bacon encourages people to get involved in Ubuntu Global Jam, which takes place March 26-28, 2010. "Ubuntu Global Jam events are simple events designed to get Ubuntu users and contributors in the same room to work together and contribute to Ubuntu. This can happen through any means: testing, documentation writing, working on a LoCo team, development or whatever else. The key focus here is on getting people together and having fun with Ubuntu."

Comments (none posted)

Minutes from the Developer Membership Board meeting 2010-03-16

Click below for the minutes from the March 16, 2010 meeting of the Ubuntu Developer Membership Board.

Full Story (comments: none)

Distribution Newsletters

DistroWatch Weekly, Issue 345

The DistroWatch Weekly for March 15, 2010 is out. "With the first development release of Fedora 13, the focus of the online Linux community has once again turned to this popular distribution. But, as emerged in an online report last week, the project's developer and user community is up in the arms over the project's update policy and its blatant disregard for end users' needs. In other news, the openSUSE community releases new live CDs with Xfce and LXDE desktop environments, OpenBSD announces the upcoming release of version 4.7, and Wolvix resumes the development of the Slackware-based distribution with a new development build. Also in this week's issue, a first look at Haiku, an operating system that strives to be a successor of BeOS, and a questions and answers section that looks at loopback devices. All this and more in this week's issue of DistroWatch Weekly - happy reading!"

Comments (none posted)

Fedora Weekly News 217

The Fedora Weekly News for March 14, 2010 is out. "In announcements, lots of exciting news related to Fedora 13, including details on last week's Alpha launch, slogan release, as well as freeze on the F13 release notes. In news from the Fedora Planet, thoughts on Fedora Spins, how to create a rocket using Inkscape, an excellent essay on "Open Source Philosophy" including a brief history of the movement, and much more. In the News summarizes an interview with Fedora Project leader Paul W. Frields on Fedora 12 and beyond. In Quality Assurance news, details from last week's Test Day on webcams, great coverage in the QA team weekly meetings and other activities, Fedora 13 Alpha and Beta updates, and details on a proposed draft of a package update policy. Translation reports details on last week's Transifex 0.74 upgrade, availability of a Fedora 13 image with the latest translations, and many new members of the Fedora Localization Project team. In Art/Design Team news, coverage of recent discussion on Fedora 13 beta artwork, and the design suite as a Fedora talking point. This issue wraps up with pointers to last week's security advisories for Fedora 11, 12 and 13. Enjoy!"

Full Story (comments: none)

openSUSE Weekly News/114

This issue of the openSUSE Weekly News covers: Sascha Manns: Geeko wants you: Weekly News Team searches for new Translators, Cornelius Schumacher: Are you up for a new challenge in the SUSE Studio team?, Richard Bos: Build your own Google Earth rpm, TuxRadar: The newbie's guide to hacking the Linux kernel, and Andrew Wafaa: Community Discussion - Part1.

Comments (none posted)

Ubuntu Weekly Newsletter #184

The Ubuntu Weekly Newsletter for March 13, 2010 is out. "In this issue we cover: Lucid Kernel now Frozen, Ubuntu 10.04 beta 1 freeze now in effect, Intel, Eucalyptus and Canonical join forces to help user build cloud infrastructures confidently, Call for Testing: Cluster Stack - Load Balancing, Google Summer of Code 2010: Ubuntu application, New Ubuntu Members: Asia Oceanic Board & Americas Board, Request for input for Lucid Beta 1 technical overview, International Womens Day "How I Discovered Ubuntu" Winners, Ubuntu Global Jam(LoCo Style), Getting started with launchpadlib: Launchpad's Python library, Ubuntu Global Jam - what's it all about, New stuff for the Ubiquity slideshow(Proposed), Alan Pope: Why (I think) Ubuntu is Better Than Windows, Ubuntu hits HTC's Touch Pro2, is any Windows Mobile handset safe, and much, much more!"

Full Story (comments: none)

Newsletters and articles of interest

Health Check: Mandriva (The H)

The H takes a look at Mandriva's history. "Mandrake quickly became the most successful desktop Linux of its day, the Linux distribution that offered the most for the home user, the hobbyist or adventurer looking for a friendly and practical alternative to Windows, easy to install, easy to configure, and easy to use. The first release was numbered 5.1, after the Red Hat release it was based on."

Comments (none posted)

Interviews

Meet Ubuntu Linux's new CEO (Q&A) (CNET)

Over at CNET, Stephen Shankland has a fairly lengthy interview with Canonical's new CEO Jane Silber. "But is there more urgency about profit now? Silber: There is a sense of great opportunity right now. When we started Ubuntu in year one, we didn't put a strong push on trying to sell Canonical services, not because we were not interested, but it's hard to build a business around selling services around an operating system that nobody is using. We knew we needed to gain a user base and momentum before we could sell services. That user base is now there. There is urgency and momentum around that at a level we hadn't necessarily seen in the first couple years."

Comments (26 posted)

QA with Matt Asay: How Linux is Beating Apple and Much More (Linux.com)

Jennifer Cloer talks with Matt Asay, COO of Canonical. "Asay: We have the chance to turn the technology world upside down. At Canonical we have Google or Apple-sized ambition, because we have community that dwarfs both of them put together. Our task is to work with the community to fulfill that opportunity. I believe we can. That's what I signed up to accomplish."

Comments (185 posted)

Paul Frields on Fedora 12 and Beyond (Linux For You)

Linux For You has an interview with Fedora Project Leader Paul Frields. "Two months after the launch of Fedora 12, we spoke to Paul Frields, Fedora Project Leader at Red Hat, about how this release has been received by the community, and what is in store for the next. Though it started as a technical discussion on what Fedora 12 offers IT admins and developers, it graduated into a more serious conversation on the relationship between Fedora and Red Hat Enterprise Linux, and the distinction (if any) between commercial and community Linux."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Fun with free maps on the free desktop

March 17, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

Playing with open map data can be a fun pastime. Creating and editing open data can be not only fun, but also a boost for free data to go along with free software. Whether you want to view open map data or edit it, there's no shortage of applications that run on Linux and work well with OpenStreetMap (OSM) data. Here's a look at some useful mapping applications for displaying and editing open map data on the Linux desktop.

Emerillon

[Emerillon]

One of the newest map viewers for the free desktop is Emerillon. Announced last October, the Emerillon project is meant to be a simple, open, and extensible map viewer that allows users to browse open map data, search maps, and "placemark" (bookmark) locations for later reference. Even if the project itself is new, the name is laden with history. The name has a dual meaning, as "Émérillon" is a name for a type of falcon once used for falconry and it is the name of one of French explorer Jacques Cartier's boats.

Emerillon is still in early development. The project hasn't tagged any releases as "stable" yet, but tagged a development release in early January. If compiling the package doesn't sound appealing, Matt Trudel has put together an Ubuntu package of the 0.1.0 release for Ubuntu 9.10.

The interface is clean and easy to use, and the map rendering is very attractive. Once Emerillon is loaded, just use the mouse to drag to the location you'd like to see or search for a location. Zoom in and out using the mouse scroll wheel or magnifying glass icons on the top location bar. There's not a lot to the interface and it shouldn't take any time at all to start using Emerillon. It doesn't require manual intervention to get map data as it automatically uses OSM data. Emerillon can be a bit slow rendering, but not overly so. It might take five to 10 seconds to render the map when selecting a new location or zooming in or out. It's not quite as speedy as Marble, but it gets the job done.

Despite the relative newness of Emerillon, it is worth a look if the only requirement is a stable OSM viewer. It's a usable map viewer for browsing standard OpenStreetMap street views, public transportation maps, cycling maps, or terrain data. To get routing data, it is possible to copy the current location to Google Maps or Yahoo! Maps to generate a route. Emerillon doesn't have any features yet for printing or exporting maps.

Most of Emerillon's features are derived from plugins. The functionality is fairly limited otherwise for now, but the plugins page includes a number of ideas for future development — including routing data, working with GPS devices, and integration with Telepathy to allow sharing of location data.

Marble

[Venus]

Marble is one of the best-known map-viewers for Linux. Marble works with a number of different data sets, including OSM, and a number of custom maps including historical maps from the 1700s, satellite views of the Earth, temperature and precipitation, and even maps of the Moon and Venus. While the navigational value of having maps from the 1700s or of Venus is scant, there's a lot of educational value in being able to display other celestial bodies and maps from different eras.

Marble presents its maps, dubbed "themes" in the Marble interface, using a globe projection by default. Users can switch to a flat or Mercator projection if they prefer. In addition to displaying map data, Marble can also display photos tagged with geographic data or Wikipedia articles associated with a location if the Photos and Wikipedia plugins are enabled.

When Marble opens, it zooms in on the user's "home" location (if one is set), using the last theme selected. The home location is set by right-clicking on the map view and choosing "Set Home Location." Navigating around the map is done by clicking and dragging, or using the mouse scroll button to zoom in and out on specific areas of the map. Marble features an interesting animation when performing searches: when using the Search sidebar, type in a city name or location and Marble will zoom out to a full Earth view and then zoom in again on the search location.

[Marble]

Marble can be used to measure the distance between two locations on the globe by adding "measure points" with the context menu. The total distance will be displayed in the map overlay. Users can copy and print map views from Marble for later use, but it doesn't provide any way to "save" the views for later.

By itself, Marble is primarily geared for educational use and displaying different parts of the globe. It's not designed for creating route maps or giving any kind of street directions. You can view maps designed for navigation, like the OSM Cycle Map, but Marble doesn't do point A to point B directions. However, Marble is also an embeddable widget that can be used in other Qt applications. This means that any application that needs to display map data can include Marble and rely on it for rendering.

Since Marble has been around for quite some time, most major Linux distros include it in their package repositories. It's available under the LGPLv2, and additional maps are available from KDE's "Get Hot New Stuff."

Merkaartor

[Merkaartor]

If viewing maps isn't enough, why not try editing them? Merkaartor is an application for editing OSM maps. It works with several formats beyond OpenStreetMap: it will import OpenStreetMap, GPS Exchange Format, KML files, Noni GPSPlot, and several others. The export formats are more limited, with OpenStreetMap, GPS Exchange Format, and KML supported.

Merkaartor will open any of the supported formats, or download data from OSM. There's a limit on how much data Merkaartor will download at any given time, so it may take a few tries to get the request right and get all of the desired map data. Once it has data, users can edit map data and upload it to OpenStreetMap (assuming the user has an OpenStreetMap account).

It can also provide an interesting look into the existing OSM data, and will show which users uploaded specific data. Assuming a bit of experience with editing map data, Merkaartor seems easy to get started with and use. It's not for casual OpenStreetMap users, but should be a valuable tool for contributors. For viewing maps, Merkaartor isn't as usable as Marble or Emerillon and doesn't render the maps quite as attractively.

Merkaartor not only works with standard map formats, it will also render maps to SVG and bitmap formats. That could be useful for including OSM data in a publication, or just for whipping up a quick map with directions to your next party.

Conclusion

What would be nice to have is an application that makes it easy to simply punch in a few addresses and create a route. While a few free and open source desktop applications exist to work with GPS devices or GPS data, they're not particularly intuitive or easy to get started with. The state of editing and viewing OSM data on Linux is pretty good, but using that data to get from point A to point B still leaves much to be desired.

Comments (11 posted)

Brief items

Amarok 2.3.0 is out

[Amarok] Version 2.3.0 of the Amarok music player has been released. "Areas such as podcast support and saved playlists have seen huge improvements, as has the support for USB mass storage devices (including generic MP3 players). With large parts of Amarok 2 becoming quite mature, it was also time to start looking forward again. Therefore, this release also contains a number of new features of a slightly more experimental nature. These include a new main toolbar and a rewritten and much simpler file browser."

Comments (8 posted)

Google's RE2 regular expression library

Google has announced the release of its RE2 library under a BSDish license. "At Google, we use regular expressions as part of the interface to many external and internal systems, including Code Search, Sawzall, and Bigtable. Those systems process large amounts of data; exponential run time would be a serious problem. On a more practical note, these are multithreaded C++ programs with fixed-size stacks: the unbounded stack usage in typical regular expression implementations leads to stack overflows and server crashes. To solve both problems, we've built a new regular expression engine, called RE2, which is based on automata theory and guarantees that searches complete in linear time with respect to the size of the input and in a fixed amount of stack space." More information can be found on the RE2 project page.
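
The API is pleasantly simple. This example follows the usage shown in the RE2 documentation, where matching functions take the text, a pattern, and optional capture destinations:

    #include <re2/re2.h>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string user, host;
        // FullMatch() requires the whole string to match; captured
        // groups are written to the trailing arguments.
        if (RE2::FullMatch("jake@lwn.net", "(\\w+)@([\\w.]+)",
                           &user, &host))
            std::cout << user << " at " << host << std::endl;
        return 0;
    }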

Comments (32 posted)

Monotone 0.47 released

Version 0.47 of the monotone version control system is out. There are a number of fixes, some significant performance improvements, and some changes to how certain subcommands operate; see the NEWS file for details.

Full Story (comments: none)

Parrot 2.2.0 released

Version 2.2.0 of the Parrot virtual machine is out. There are a number of changes listed in the announcement ("Most internal allocations now use the GC, RNG non-randomness fixes, Elimination of much dead code, ..."), but most of them do not appear to be major.

Full Story (comments: 22)

passwdqc 1.2.0 released

Solar Designer has announced the release of passwdqc 1.2.0. Passwdqc is a toolkit for password strength checking and policy enforcement; this release includes a number of new features. "The random passphrases offered by pam_passwdqc, pwqgen, as well as by the passwdqc_random() function in libpasswdqc, will now encode more entropy per separator character and per word, increasing their default size from 42 to 47 bits. The size of 42 bits was adequate to withstand not-too-powerful attacks against bcrypt hashes that we use on Owl, but it was inadequate with weaker hashes that many other systems use."
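
The arithmetic behind those numbers is straightforward entropy accounting. The word-list and separator-set sizes below are illustrative assumptions chosen to reproduce the 42-bit and 47-bit figures, not a description of passwdqc's internals:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Hypothetical: three words from a 4096-word list, joined by
        // two separator characters drawn from a set of eight.
        double old_bits = 3 * std::log2(4096.0)        // 12 bits/word
                        + 2 * std::log2(8.0);          //  3 bits/separator
        // One extra bit per word (e.g. random capitalization) and a
        // 16-character separator set gets to the new default.
        double new_bits = 3 * (std::log2(4096.0) + 1)  // 13 bits/word
                        + 2 * std::log2(16.0);         //  4 bits/separator
        std::printf("%.0f -> %.0f bits\n", old_bits, new_bits);
        return 0;
    }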

Full Story (comments: none)

PostgreSQL 2010-03-15 cumulative bug-fix release

The PostgreSQL project has put out a set of bug-fix updates: versions 8.4.3, 8.3.10, 8.2.16, 8.1.20, 8.0.24, and 7.4.28. "This release provides a workaround for some third-party SSL libraries, as well as multiple fixes for minor uptime and data integrity issues. All database administrators are urged to update your version of PostgreSQL at your next scheduled downtime." Also worth noting: versions 7.4 and 8.0 will not receive updates after June, so sites using those releases should be thinking hard about upgrading.

Full Story (comments: none)

PyPy 1.2 released

Version 1.2 of PyPy - an alternative implementation of the Python interpreter - has been released. "This version 1.2 is a major milestone and it is the first release to ship a Just-in-Time compiler that is known to be faster than CPython (and unladen swallow) on some real-world applications (or the best benchmarks we could get for them). The main theme for the 1.2 release is speed." It's still not quite ready for production use, but it appears to be getting a lot closer.

Comments (1 posted)

SeaMonkey 1.x goes unsupported

The developers behind SeaMonkey have announced that there will no longer be support for the 1.x versions of the browser suite. "As the SeaMonkey 1.x series no longer receives security updates, due to resource constraints, the SeaMonkey team strongly urges users of that series to upgrade. Additionally, the team continues to strongly urge people still using the old Mozilla Suite or Netscape 4, 6 or 7 to upgrade to the new SeaMonkey 2.0 version. All these older software packages suffer from a large, and steadily increasing, number of security vulnerabilities because they are no longer being maintained."

Full Story (comments: none)

Newsletters and articles

Newsletters published in the last week

Comments (none posted)

Piël: Benchmark of Python Web Servers

Here is an extensive set of performance benchmark results from 14 Python web application servers, done by Nicholas Piël. "The top performers are clearly FAPWS3, uWSGI and Gevent. FAPWS3 has been designed to be fast and lives up the expectations, this has been noted by others as well as it looks like it is being used in production at Ebay. uWSGI is used successfully in production at (and in development by) the Italian ISP Unbit. Gevent is a relatively young project but already very successful. Not only did it perform great in the previous async server benchmark but its reliance on the Libevent HTTP gives it a performance beyond the other asynchronous frameworks."

Comments (3 posted)

Linux Arpeggiators, Part 1 (Linux Journal)

Dave Phillips looks at arpeggiators for Linux. "An arpeggio is a musical technique whereby the notes of a chord are played in succession rather than all at once. The order of the chord notes in this succession may follow a strict set of rules or they may be played in purely random sequence. A device that acts upon a chord in this manner is known as an arpeggiator."
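
The concept fits in a few lines of code. Here is a toy sketch, not based on any particular Linux arpeggiator, that steps through a chord's MIDI note numbers in a repeating upward pattern:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const std::vector<int> chord = {60, 64, 67};  // C major: C4, E4, G4
        // An "up" arpeggio: cycle through the chord tones one at a
        // time instead of sounding them together.
        for (std::size_t step = 0; step < 8; ++step)
            std::printf("step %zu: MIDI note %d\n",
                        step, chord[step % chord.size()]);
        return 0;
    }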

Comments (16 posted)

Page editor: Jonathan Corbet

Announcements

Commercial announcements

Ingres DB now available within SUSE

Ingres and Novell have announced that the Ingres database is available within SUSE Studio as part of the SUSE Appliance Program. "Both companies have entered into a cooperative agreement to make it easier and more cost-effective for independent software vendors (ISVs) and system integrators (SIs) to build appliances that deliver business critical software applications which require an enterprise-class database. As part of the agreement, Novell and Ingres will jointly support and market the SUSE Studio Appliance Template for Ingres Database to a large ecosystem of ISVs who are seeking a simplified appliance infrastructure."

Full Story (comments: none)

Articles of interest

Building an open source business (opensource.com)

Over at opensource.com, OpenNMS's Tarus Balog looks at the process of starting an open source business. This article covers much of the same material as his recent SCALE 8x keynote. "You might think that I was motivated by some sort of idealistic love of open source software. Nothing could be further from the truth. At the time, I was still running a Windows desktop. I undertook the OpenNMS project because I believed one thing: in the area of network management, open source represents the best business solution."

Comments (5 posted)

Hackable Linux clamshell goes on sale for $99 (LinuxDevices)

LinuxDevices looks at the Ben NanoNote, a small, open machine produced by Qi Hardware. "The Ben NanoNote offers OpenWRT Linux pre-installed, and the device can also boot over USB. (OpenWRT is a small footprint distribution commonly found on routers.) Other components in the distribution include the Uboot boot-loader, although one of the many project pages on Qi Hardware notes that the eventual plan is to move to the lightweight Qi boot-loader."

Comments (20 posted)

Simon Phipps elected as OSI director (The H)

The H reports that the Open Source Initiative (OSI) has elected Simon Phipps, formerly Sun's Chief Open Source Officer, to the board of directors. "As a director, Phipps hopes to help the organisation change so that it becomes more member-oriented, more active in promoting open source in education, in policy development and possibly in organisational support for open source projects; "My goal as a Director will be to facilitate that change, a change that is already well under way following recent face to face discussions and the great work that Andrew Oliver and Danese Cooper have already put in"."

Comments (none posted)

Resources

ODBMS.ORG: new section on NoSQL data stores

ODBMS.org (Object Database Management Systems) has added a section on NoSQL data stores, "where you will be able to download Free Software, Articles, Papers, Presentations and Tutorials."

Full Story (comments: none)

Interviews

Interview: Eben Moglen - Freedom vs. The Cloud Log (The H)

The H has an interview with Eben Moglen. "And so, basically, what I am proposing is that we build a social networking stack based around the existing free software we have, which is pretty much the same existing free software the server-side social networking stacks are built on; and we provide ourselves with an appliance which contains a free distribution everybody can make as much of as they want, and cheap hardware of a type which is going to take over the world whether we do it or we don't, because it's so attractive a form factor and function, at the price."

Comments (8 posted)

Contests and Awards

Mozilla Launches Firefox Mobile Add-On Challenge (Linux.com)

Nathan Willis covers Mozilla's contest to provide add-ons for the Firefox for Mobile browser. "Mozilla has launched a contest to spur on development of add-ons for its recently-released Firefox for Mobile browser. Between now and April 12, developers are encouraged to create extensions or other add-ons tailored for the mobile browser. The top ten submissions (as judged by Mozilla's Add-ons and Mobile teams) will each be awarded a package containing a Mozilla t-shirt, phone case, and a brand-new Nokia N900 phone -- which runs the Maemo mobile Linux operating system and was the very first device to support Firefox for Mobile."

Comments (none posted)

Education and Certification

LPI promotes Linux certification within Spanish public schools

The Linux Professional Institute (LPI) affiliate in Spain has partnered with Proyecto Universidad Empresa (PUE) to promote LPI certification and training within Spain's public schools. "Under this initiative with PUE, LPI-Spain will promote the LPI Approved Academic Partnership (LPI-AAP) program with public sector education programs in Spain to ensure high quality Linux training and certification. This will also include the provision of discounted LPI exam labs throughout the country at public academic institutions."

Full Story (comments: none)

Calls for Presentations

LinuxCon Japan Call for Participation

LinuxCon Japan, formerly known as the Japan Linux Symposium, has announced its call for participation (CFP). This Linux Foundation-sponsored conference will be held in Tokyo September 27-29. The CFP lists a number of topic areas that are of particular interest, including desktop Linux, embedded and mobile Linux, Linux adoption, and so on; it closes on May 14. "LinuxCon Japan is the premiere Linux conference in Asia that brings together a unique blend of core developers, administrators, users, community managers and industry experts. It is designed not only to encourage collaboration but also to support future interaction between Japan and other Asia Pacific countries and the rest of the global Linux community. The conference includes presentations, tutorials, birds of a feather sessions, keynotes, sponsored mini-summits."

Comments (none posted)

Debconf 10: Call for Contributions

The Debconf team has announced that they are now accepting proposals for contributions to this year's Debian conference. The deadline for proposals is May 1, 2010. DebConf 10 will be held August 1-7, 2010 in New York City. "There are many ways you can contribute, you could present a technical paper, host a panel discussion, put on a tutorial, do a performance, an art installation, a debate, host a meeting (BoFS, or Birds of a Feather Session), or other possibilities that you devise. This year we are also accepting proposals for tracks - a thematic grouping around a particular subject - and people to coordinate those tracks. If you are looking for ideas of things that you could contribute, or have ideas for things that you would like to see happen at Debconf, have a look at the Contribution Brainstorm page."

Full Story (comments: none)

EuroPython 2010 - Open for registration and reminder of participation

Registration is open for EuroPython 2010. The conference will be held July 17-24, 2010 in Birmingham, UK. "EuroPython is a conference for the Python programming language community, including the Django, Zope and Plone communities. It is aimed at everyone in the Python community, of all skill levels, both users and programmers." Register before May 10, 2010 for the Early Bird rate. The Call for Participation is open until April 30, 2010.

Full Story (comments: none)

GUADEC 2010 cfp deadline nearing

GUADEC 2010, the GNOME Users' and Developers' European Conference, will be held in The Hague, the Netherlands, July 24-30, 2010. The call for papers ends March 20, 2010.

Full Story (comments: none)

Call for Presentations - Flash Memory Summit

The call for presentations for the Flash Memory Summit is open until May 7, 2010. The conference runs August 17-19, 2010 in Santa Clara, California.

Comments (none posted)

Upcoming Events

Embedded Linux Conference 2010 Program is available

This year's Embedded Linux Conference, which will be held in San Francisco April 12-14, has announced that its program is now available. The keynote speakers will be Greg Kroah-Hartman ("Android: a Case Study of an Embedded Linux Project") and Matt Asay ("Embedded in 2010: an End to the Entropy?"); beyond the keynotes, there is a whole slate of over 50 presentations, tutorials, and BoFs. "This is your chance to meet leading developers from the embedded Linux community, and learn about the latest changes in Linux. Also, you can talk to engineers working on real products at some of the largest CE companies in the world, describing how they solved real issues in their own development projects." Click below for the full announcement.

Full Story (comments: 6)

Linux Foundation Collaboration Summit keynotes announced

The Linux Foundation has announced the program for the Collaboration Summit to be held April 14-16 in San Francisco. This is an invitation-only event, though invitations can still be requested. Highlights include a full-day session on Meego, the Linux kernel roundtable, keynotes by Josh Berkus, Dr. Daniel Frye, Jim Zemlin, and others, a cloud computing roundtable, and more. "The Linux Foundation Collaboration Summit is the only event where a true cross-section of leaders from the Linux developer, industry and end user communities meet face-to-face to tackle today’s most pressing issues facing Linux, including technical development, legal topics, ISV porting and end user requirements."

Comments (none posted)

GNOME Foundation and KDE e.V. to Co-Host Conferences in 2011

There will be a second Desktop Summit in 2011 and the bidding is open for a location. "Following the successful Gran Canaria Desktop Summit in 2009, the GNOME Foundation and KDE e.V. Boards have decided to co-locate their flagship conferences once again in 2011, and are taking bids to host the combined event. The Desktop Summit 2011 will be the largest free desktop event ever."

Full Story (comments: 2)

Register now for the Netbook Summit

Registration is open for the Netbook Summit, May 24-25, 2010 in San Francisco, California.

Comments (none posted)

Events: March 25, 2010 to May 24, 2010

The following event listing is taken from the LWN.net Calendar.

March 22-26: CanSecWest Vancouver 2010 (Vancouver, BC, Canada)
March 23-25: UKUUG Spring 2010 Conference (Manchester, UK)
March 25-28: PostgreSQL Conference East 2010 (Philadelphia, PA, USA)
March 26-28: Ubuntu Global Jam (Online, World)
March 30-April 1: Where 2.0 Conference (San Jose, CA, USA)
April 9-11: Spanish DebConf (Coruña, Spain)
April 10: Texas Linux Fest (Austin, TX, USA)
April 12-15: MySQL Conference & Expo 2010 (Santa Clara, CA, USA)
April 12-14: Embedded Linux Conference (San Francisco, CA, USA)
April 14-16: Linux Foundation Collaboration Summit (San Francisco, USA)
April 14-16: Lustre User Group 2010 (Aptos, California, USA)
April 16-17: R/Finance 2010 Conference - 2nd Annual (Chicago, IL, US)
April 16: Drizzle Developer Day (Santa Clara, CA, United States)
April 23-25: FOSS Nigeria 2010 (Kano, Nigeria)
April 23-25: QuahogCon 2010 (Providence, RI, USA)
April 24-25: OSDC.TW 2010 (Taipei, Taiwan)
April 24-25: BarCamb 3 (Cambridge, UK)
April 24: Festival Latinoamericano de Instalación de Software Libre (many locations)
April 24-25: Fosscomm 2010 (Thessaloniki, Greece)
April 24-25: LinuxFest Northwest (Bellingham, WA, USA)
April 24: Open Knowledge Conference 2010 (London, UK)
April 24-26: First International Workshop on Free/Open Source Software Technologies (Riyadh, Saudi Arabia)
April 25-29: Interop Las Vegas (Las Vegas, NV, USA)
April 28-29: Xen Summit North America at AMD (Sunnyvale, CA, USA)
April 29: Patents and Free and Open Source Software (Boulder, CO, USA)
May 1-2: OggCamp (Liverpool, England)
May 1-4: Linux Audio Conference (Utrecht, NL)
May 1-2: Devops Down Under (Sydney, Australia)
May 3-7: SambaXP 2010 (Göttingen, Germany)
May 3-6: Web 2.0 Expo San Francisco (San Francisco, CA, USA)
May 6: NLUUG spring conference: System Administration (Ede, The Netherlands)
May 7-9: Pycon Italy (Firenze, Italy)
May 7-8: Professional IT Community Conference (New Brunswick, NJ, USA)
May 10-14: Ubuntu Developer Summit (Brussels, Belgium)
May 17-21: Fourth African Conference on FOSS and the Digital Commons (Accra, Ghana)
May 18-21: PostgreSQL Conference for Users and Developers (Ottawa, Ontario, Canada)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds