LWN.net Weekly Edition for March 26, 2009
Easing software localization with Transifex
Translating text strings into other languages, called "localization" or "l10n", is a critical part of extending the reach of free software. But it is equally important that those translations make their way upstream, so that translation work is not duplicated and all future versions can benefit. Making all of that easy is the goal of Transifex, a platform for doing translations that is integrated with the upstream version control system (VCS). The project recently released Transifex 0.5—a complete rewrite atop the Django web framework—with many new features.
Transifex came out of work done in the 2007 Google Summer of Code for the Fedora project. Dimitris Glezos worked on a project to create a web interface to ease localization for Fedora. In the year and a half since then, Transifex has grown greatly in capabilities, and is now used as the primary tool for Fedora translations. One of its key aspects, as can be seen in the SoC application, is a focus on being upstream-friendly.
People who are able to translate text into another language—for good or ill, most software is developed with English text—are not necessarily developers, so their knowledge of VCS tools may be limited. In addition, they are unlikely to want to maintain accounts with each of the various projects that might need their services. Transifex abstracts all of the VCS-specific differences away, presenting a single view to translators. That allows those folks to concentrate on what they are good at.
Transifex interfaces with whichever VCS system a development project has chosen to hold its source code. The five major VCS packages used by free software projects (CVS, Subversion, Bazaar, Mercurial, and Git) are all handled seamlessly. A translator doesn't have to know—or care—which one a project chose, and their translations will be properly propagated into the repository.
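The kind of dispatch involved can be pictured with a small Python sketch. This is purely illustrative—the class and function names here are invented, not Transifex's actual code—but it shows how a web layer can hide VCS differences behind a common interface:

```python
import subprocess

class VcsBackend:
    """Common interface; the translator never sees what is behind it."""
    def checkout(self, url, path):
        raise NotImplementedError
    def commit(self, path, message):
        raise NotImplementedError

class GitBackend(VcsBackend):
    def checkout(self, url, path):
        subprocess.run(["git", "clone", url, path], check=True)
    def commit(self, path, message):
        subprocess.run(["git", "-C", path, "commit", "-a", "-m", message],
                       check=True)

class SvnBackend(VcsBackend):
    def checkout(self, url, path):
        subprocess.run(["svn", "checkout", url, path], check=True)
    def commit(self, path, message):
        subprocess.run(["svn", "commit", path, "-m", message], check=True)

# Hypothetical registry; a real system would also cover CVS, Bazaar,
# and Mercurial, and read the VCS type from the project's settings.
BACKENDS = {"git": GitBackend, "svn": SvnBackend}

def backend_for(vcs_type):
    """Pick the right backend so the translator just uploads a PO file."""
    return BACKENDS[vcs_type]()
```

The point of the design is that only the backend classes know any VCS commands; everything above them works in terms of checkout and commit.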
This stands in contrast to Canonical's Rosetta, which is also a web-based translation tool, but one that is tightly integrated with Launchpad. That means projects must migrate to Launchpad to take advantage of the translations made by Ubuntu users. Many projects are skittish about moving to Launchpad, either because of its required use of Bazaar or because of the non-free nature (at least as yet) of the Launchpad code. No doubt there are also projects that are happy with their current repository location and are unwilling to move.
Because of the centralized nature of Rosetta, translations tend to get trapped there, leading some to declare it a poor choice for doing free software translations. Perhaps when Launchpad opens its code, and support for more VCS systems is added, it may be a more reasonable choice. For now, Transifex seems to have the right workflow for developers as well as translators.
The 0.5 release adds a large number of new features to make it even easier to use and to integrate with various projects. The data model has been reworked to allow for arbitrary collections of projects (e.g. Fedora 11 or GNOME), with multiple branches for each project. A lot of work has also gone into handling different formats of localization files (such as the PO and POT formats), as well as supporting variants of languages for specific countries or regions (e.g. Brazilian Portuguese).
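For those unfamiliar with the gettext formats mentioned above: a POT file is a template of untranslated strings, while a PO file pairs each source string (msgid) with its translation (msgstr), with header fields identifying the target language. A minimal PO fragment for Brazilian Portuguese might look like this (the project name and strings are invented for illustration):

```
msgid ""
msgstr ""
"Project-Id-Version: example 1.0\n"
"Language-Team: Brazilian Portuguese\n"
"Content-Type: text/plain; charset=UTF-8\n"

msgid "Save file"
msgstr "Salvar arquivo"
```

Files like this are what a translator actually edits; Transifex's job is to shuttle them between the translator and the project's repository.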
For users, most of whom would be translators, 0.5 has added RSS feeds to follow the progress of translations for particular projects. User account management has been collected into its own subsystem, with features like self-service user registration and OpenID support for authentication. In addition, the VCS and localization layers are easily extensible to allow for supporting other varieties of those tools. Transifex 0.5 has the look of a very solid release.
Glezos and others from the Transifex team have started a new company, Indifex, to produce a hosted version of Transifex (at Transifex.net) that will serve the same purpose as WordPress.com does for WordPress blogs. Projects that don't want to host their own Transifex installation can work with Indifex to set up a localization solution for their code. Meanwhile, Indifex employees have been instrumental in the 0.5 rewrite and will be providing more development down the road. Glezos outlined the company's plans in a blog post in December.
Because of its openness, and its concentration on upstream-friendliness, Transifex has an opportunity to transform localization efforts for free software projects. There are a large number of willing translators out there, but projects sometimes have difficulty hooking up with them. Transifex will provide a place for translators and projects to come together. That should result in lots more software available in native languages for many more folks around the world.
An afternoon among the patent lawyers
Sometimes, even the best job can call for extraordinary sacrifices. Even grumpy editorial jobs. Let it never be said that your editor is unwilling to take one for his readers; why else would he choose to spend four hours in the company of around 100 lawyers gathered to talk about software patents? This event, entitled Evaluating software patents, was held on March 19 at the local law school. The conversation was sometimes dry and often painful to listen to, but it did provide an interesting view into how patent attorneys see the software patent regime in the U.S. The following is a summary of the high points from the four panels held at this event.
Should software patents exist?
It should come as little surprise that a panel full of patent lawyers turns out to be supportive of the idea of software patents. Of all the panellists present, only Jason Mendelson was truly hostile to patenting software, and even he stopped short of saying that they should not exist at all. The first speaker, though, was John Duffy, who cited language in a 1952 update to the patent code stating that "a patentable process includes a new use of an old machine." That language, he says, "fits software like a glove." So there is, he says, no basis for any claims that software patents are not allowed by current patent law.
Beyond that, he says, the attempts to prevent the patenting of software for many years did a great deal of damage. Keeping the patent office away from software prevented the accumulation of a proper set of prior art, leading to the current situation where a lot of bad patents exist. Software is an engineering field, according to Duffy, and no engineering field has ever been excluded from patent protection. That said, software is unique in that it also benefits from copyright protection. That might justify raising the bar for software patents, but does not argue against their existence.
Damien Geradin made the claim that there's no reason for software patents to be different from any other kind of patent. The only reason that there is any fuss about them, he says, is a result of the existence of the open source community; that's where all the opposition to patents comes from. But he showed no sign of understanding why that opposition exists; there is, he says, no real reason why software patents should be denied.
Kevin Luo, being a Microsoft attorney, could hardly come out against software patents. He talked at length about the research and development costs at Microsoft, and made a big issue of the prevalence of software in many kinds of devices. According to Mr. Luo, trying to make a distinction between hardware and software really does not make a whole lot of sense.
Beyond their basis in legislation, patents should, according to the US constitution, serve to encourage innovation in their field. Do software patents work this way? Here there was more debate, with even the stronger patent supporters being hard put to cite many examples. One example that did come up was the RSA patent, cited by Kevin Luo; without that patent, he says, RSA Security would not have been able to commercialize public key encryption. Whether this technique would not have been invented in the absence of patent protection was not discussed.
Mr. Geradin noted that software patents are often used to put small innovators out of business, which seems counter to their stated purpose. But, he says, they can also be useful for those people, giving them a way to monetize their ideas. Without patents, innovators may find themselves with nothing to sell.
Jason Haislmaier claimed, instead, that software patents don't really create entrepreneurship; people invent because that is who they are. And he noted that software patents are especially useless for startup companies. It can currently take something like seven years to get a patent; by that time, the company has probably been sold (or gone out of business) and the inventors are long gone. Jason Mendelson, who does a lot of venture capital work, had an even stronger view, using words like "worthless" and "net negative." He claimed that startups are frequently sued for patent infringement for the simple purpose of putting them out of business.
What's wrong with the patent system?
In general, even the panellists who were most supportive of the idea of software patents had little good to say about how the patent system works in the US currently.
For example, Michael Meurer, co-author of Patent Failure, has no real interest in abolishing software patents, but he argues that they do not work in their current form. Patents are supposed to be a property right, but they currently "perform poorly as property," with software patents being especially bad. That, he says, is why software developers tend to dislike patents, something which distinguishes them from practitioners of almost every other field. Patents are afflicted by vague language and "fuzzy boundaries" that make it impossible to know what has really been patented, so they don't really deliver any rewards to innovators.
Mr. Meurer also noted that software currently features in about 25% of all patent applications. That is a higher percentage than was reached by other significant technologies - he cited steam engines and electric motors - at their peak.
Mark Lemley talked a bit about the effect of software patents on open source software. Patents are a sort of arms-race game, and releasing code as open source is, in his words, "unilateral disarmament." He talked about defending open source with the "white knight" model - meaning groups like the Open Invention Network or companies like IBM. He also noted that patents provide great FUD value for those opposed to open source.
A related topic, one which came up several times, is "inadvertent infringement." This is what happens when somebody infringes on a patent without even knowing that it exists - independent invention, in other words. John Duffy said that the amount of inadvertent infringement going on serves as a good measure of the health of the patent system in general. In an environment where patents are not given for obvious ideas, inadvertent infringement should be relatively rare. And, in some fields (biotechnology and pharmaceuticals, for example), it tends not to be a problem.
In the software realm, though, inadvertent infringement is a big problem. Mark Lemley asserted a couple of times that actual copying of patented technology is only alleged in a tiny fraction of software patent suits. In other words, most litigation stems from inadvertent infringement. Michael Meurer added that there is a direct correlation between the amount of money a company spends on research and development and the likelihood that it will be sued for patent infringement. In most fields, he notes, piracy (his word) of patents is used as a substitute for research and development, so one would ordinarily see most suits leveled against companies which don't do their own R&D. In software, the companies which are innovating are the ones being sued.
The other big problem with the patent system is its use as a way to put competitors out of business. Rather than support innovation, the patent system is actively suppressing it. Patent litigator Natalie Hanlon-Leh noted that it typically costs at least $1 million to litigate a patent case. John Posthumus added that no company with less than about $50 million in annual revenue can afford to fight a patent suit; smaller companies will simply be destroyed by the attempt. Patent lawyers know this, so they employ every trick they know to stretch out patent cases, making them as expensive as possible.
Variation between the courts is another issue, leading to the well-known problem of "forum shopping," wherein litigators file their cases in the court which is most likely to give them the result they want. That is why so many patent suits are fought in east Texas.
What is to be done about it?
Michael Meurer made the claim that almost every industry in the US would be better off if the patent system were to be abolished; in other words, patents serve as a net drain on the industry. But, being a patent attorney, he does not want to abolish the patent system; instead he would like to see reforms made. His preferred reforms consist mostly of tightening up claim language to get rid of ambiguities and to reduce the scope of claims. He would also like to make the process of getting a patent quite a bit more expensive, putting a much larger burden on applicants to prove that they deserve their claims.
Mr. Meurer went further and singled out the independent inventor lobby as the biggest single impediment to patent reform in the US. In particular, its efforts to block a switch from first-to-invent to first-to-file priority (as things are already done in most of the rest of the world) have held things up for years. What the lobby doesn't realize, he says, is that if the patent system works better for "the big guys," they will, in turn, be willing to pay more for patents obtained by the "little guys." This sort of trickle-down patent theory was not echoed by any of the other panellists, though.
Part of the problem is that the US Patent and Trademark Office (PTO) is overwhelmed, with a backlog of over 1 million patent applications. So patent applications take forever, and the quality control leaves something to be desired. Some panellists called for funding the PTO at a higher level, but that is unlikely to happen: the number of patent applications has fallen recently, and there is a possibility that some application fees will be routed to the general fund to help cover banker bonuses and other equally worthy causes. The PTO is likely to have less money in the near future.
And, in any case, does it make sense to put more money into the PTO? Mark Lemley is against that idea, saying that the money would just be wasted. Most patents are never heard from again after issuance; doing anything to improve the quality of those patents is just a waste. Instead, he (along with others) appears to be in favor of the "gold-plated patent" idea.
Gold-plated patents are associated with another issue: the fact that, in US courts, patents have an automatic presumption of validity. This presumption makes life much easier for plaintiffs, but, given the quality of many outstanding patents, some people think that the presumption should be revisited and, perhaps, removed. Applicants who think they have an especially strong patent could then apply for the gold-plated variety. These patents would cost a lot more, and they would be scrutinized much more closely before being issued. The idea is that a gold-plated patent really could have a presumption of validity.
Others disagree with this idea. Gold-plated patents would really only benefit companies that had the money to pay for them; everybody else would be a second-class citizen. Anybody who was serious about patents would have to get them, though; they would really just be a price hike in disguise.
There was much talk of patent reform in Congress - but little optimism. It was noted that this reform has been held up for several years now, with no change in sight. There was disagreement over who to blame (Mark Lemley blames the pharmaceuticals industry), but it doesn't seem to matter. John Duffy noted that the legislative history around intellectual property is "not charming"; he called the idea that patent law could be optimized a "fantasy." Mark Lemley agreed, noting that copyright law now looks a lot like the much-maligned US tax code, with lots of specific industry rules. Trying to adapt slow-moving patent law to a fast-moving industry like software just seems unlikely to work.
What Mark suggests, instead, is to reform patent law through the courts. Indeed, he says, that is already happening. Recent rulings have made preliminary injunctions much harder to get, they have raised the bar for obviousness, restricted the scope of business-model patents, and more. Most of the complaints people have had, he says, have already been fixed.
John Duffy, instead, would like to "end the patenting monopoly." By this he means the monopoly the PTO has on the issuing of patents. Evidently there are ways to get US-recognized patents from a few overseas patent offices now, and those offices tend to be much faster. He also likes the idea of having private companies doing patent examination; this work would come with penalties for granting patents which are later invalidated. Eventually, he says, we could have a wide range of industry-specific patent offices doing a much better job than we have now.
Conclusion
There was a brief discussion of the practice of not researching patents at all with the hope of avoiding triple damages for "willful infringement." The participants agreed that this was a dangerous approach which could backfire on its practitioners; convincing a judge of one's ignorance can be a challenge. But it was also acknowledged that there is no way to do a full search for patents which might be infringed by a given program in any case.
All told, it was a more interesting afternoon than one might expect. The discussion of software patents in the free software community tends to follow familiar lines; the people at this event see the issue differently. For better or worse, their view likely has a lot of relevance to how things will go. There will be some tweaking of the system to try to avoid the worst abuses - at least as seen by some parts of the industry - but wholesale patent reform is not on the agenda. Software patents will be with us (in the US) for the foreseeable future, and they will continue to loom over the rest of the world. We would be well advised to have our defenses in place.
A look at Parrot 1.0
The Parrot project released version 1.0 of its virtual machine for dynamic languages last week, marking the culmination of seven years of work. Project leader Allison Randal explains that, although end users won't see the benefits yet, 1.0 does mean that Parrot is ready for serious work by language implementers. General developers can also begin to get a feel for what working with Parrot is like using popular languages like Ruby, Lua, Python, and, of course, Perl.
The evolution of Parrot
Parrot originated in 2001 as the planned interpreter for Perl 6, but soon expanded its scope to provide portable compilation and execution for Perl, Python, and any other dynamic language. In the intervening years, the structure of the project solidified — the Parrot team focused on implementing its virtual machine, refining the bytecode format, assembly language, instruction formats, and other core components, while separate teams focused on implementing the various languages, albeit working closely with the core Parrot developers.
The primary target for 1.0 was to have a stable platform ready for language implementers to write to, and a robust set of compiler tools suitable for any dynamic language. The 1.4 release, tentatively set for this July, will target general developers, and next January's 2.0 should be ready for production systems.
The promise of Parrot is tantalizing: rather than separate runtimes for Perl, Python, Ruby, and every other language, a single virtual machine that can compile each of them down to the same instruction set and run them. That opens the possibility of applications that incorporate code and call libraries written in multiple languages. "A big part of development these days isn't rolling everything from scratch, it's combining existing libraries to build your product or service," Randal said. "Access to multiple languages expands your available resources, without making you learn the syntax of a new language. It's also an advantage for new languages, because they can use the libraries from other existing languages and get a good jump-start."
The Parrot VM itself is register-based, which the project says better mirrors the design of underlying CPU hardware and thus permits compilation to more efficient native machine language than the stack-based VMs used for Java and .NET. It provides separate registers for integers, strings, floating-point numbers, and "polymorphic containers" (PMCs; an abstract type allowing language-specific custom use), and performs garbage collection. Parrot can directly execute code in its own native Parrot Bytecode (PBC) format, and uses just-in-time compilation to run programs written in higher-level host languages. In addition to PBC, developers and compilers can also generate two higher-level formats: Parrot Assembly (PASM) and Parrot Intermediate Representation (PIR). A fourth format, Parrot Abstract Syntax Tree (PAST), is designed specifically for compiler output. The differences between them, including the level of detail exposed, are documented at the Parrot web site.
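To give a feel for the human-writable layer, the classic PIR hello-world is a single .sub block; the parrot binary compiles and runs it directly:

```
.sub 'main' :main
    print "Hello, Parrot!\n"
.end
```

Saved as hello.pir, it is run with "parrot hello.pir"; the :main pragma marks the entry point.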
Parrot includes a suite of core libraries that implement common data types like arrays, associative arrays, and complex numbers, as well as standard event, I/O, and exception handling. It also features a next-generation regular expression engine called Parser Grammar Engine (PGE). PGE is actually a fully-functional recursive descent parser, which Randal notes makes it a good deal more powerful than a standard regular expression engine, and a bit cleaner and easier to use.
The project plans to keep the core of Parrot light, however, and extend its functionality through libraries running on the dynamic languages that Parrot interprets. Keeping the core as small as possible will make Parrot usable on resource-constrained hardware like mobile devices and embedded systems.
Language experts wanted
The "getting started" documentation includes sample code written in PASM and PIR, but it is the high-level language support that interests most developers. The project site maintains a list of active efforts to implement languages for the Parrot VM. As of today, there are 46 projects implementing 36 different languages. Three of the most prominent are Rakudo, the implementation of Perl 6 being developed by the Perl community; Cardinal, an implementation of Ruby; and Pynie, an implementation of Python. Among the rest there is serious work pursuing Lua and Lisp variants, as well as work on novelty languages such as Befunge and LOLCODE. Not all are complete, but Randal said development has accelerated in recent months since the 1.0 release date was announced, and she expects production-ready releases of the key languages soon.
Language implementers come from within the Parrot project and from the language communities themselves. As Randal explained it, "we see it as our responsibility as a project to develop the core of the key language implementations, and to actively reach out to the language communities."
1.0 includes a set of parsing utilities called the Parrot Compiler Tools (PCT) to help implement dynamic languages on the Parrot VM. PCT includes the PGE parser, as well as classes to handle the lexical analyzer and compiler front-end, and to create the driver program that Parrot itself will call to run the compiler. Owing to its Perl heritage, PCT uses a subset of Perl 6 called Not Quite Perl (NQP). Developer documentation for NQP and all of the PCT components is available with Parrot 1.0 as well as on the Parrot Developer Wiki.
Parrot packages have been available for many Linux distributions and BSDs for much of its development cycle, but now that it has reached 1.0, Randal expects to see it ship by default in upcoming releases. For now, however, developers and language implementers interested in testing and running Parrot 1.0 can download source code releases from the project's web site or check out a copy from its Subversion repository. Building Parrot requires Perl, a C compiler, and a standard make utility.
Parrot has been a long time in coming, but now that 1.0 is out of the gate, the real work can begin, as the major language projects make their own stable releases and developers start to use the Parrot VM as a runtime environment. Although the technical work continues at full pace, Randal said the project is also pushing forward on the education and outreach front, with a book soon to be published through Onyx Neon Press, and Parrot sessions planned for upcoming open source conferences and workshops as well.
Security
Linux botnets
It will come as no surprise to long-time readers of this page (or others who have followed embedded device security), but recent reports of the "first Linux botnet" are making the subject of router/modem security more visible to the general public. As we have reported previously, embedded, network-facing devices make tempting targets. It appears that a botnet herder noticed that and is trying to take advantage of Linux-based devices.
Perhaps the most surprising part of the attack is the simplicity of the vulnerability it exploits. As far as anyone has found, "psyb0t", as the botnet is known, simply brute-forces username/password pairs over telnet, ssh, or http. The earliest research [PDF] on the botnet is from January; at that time it was only known to be exploiting a particular ADSL modem (the Netcomm NB5) that, at one time, had non-existent authentication on its WAN-facing administrative web interface.
More recently, DroneBL found more infected routers while investigating a distributed denial of service (DDoS) attack against its servers. The botnet targets Linux devices using the mipsel (little-endian MIPS) architecture, which includes many Linux-based home routers. OpenWRT, DD-WRT, and other projects all provide Linux-mipsel firmware for a variety of potentially vulnerable devices.
Once the infecting program gets access to the device, it downloads the botnet code and disables access to the device via telnet, ssh, or http.
While its method of gaining access is simple, the botnet code itself is quite capable. It connects to a command-and-control IRC channel (#mipsel) on a particular host under the control of the botnet herder. Commands on that channel can order the botnet nodes to launch various denial of service attacks, scan for vulnerable MySQL and phpMyAdmin sites and subvert them, port scan particular hosts, update the botnet code, and more. The IRC channel has since been shut down with a message indicating that psyb0t was strictly a research project by someone known as DRS. The message also claimed that no DDoS or phishing was done and that the botnet reached 80,000 nodes.
While it may well be that the danger of this particular threat has passed, the more general issue of router, especially home router, security persists. A fully capable, always-on Linux device is a very attractive target for botnet herders or other types of attackers. Trying to put together a botnet of Linux desktops and servers might be a much more difficult task as there is a much wider diversity of distributions and kernel versions, as well as different architectures and configurations. To a great extent, the Linux-based home router landscape is much more homogeneous, as psyb0t has shown.
Clearly, default and/or weak passwords are a serious problem—not just for Linux-based devices—but it would not be surprising to find that other vulnerabilities (such as authentication bypass) exist on many of these devices. Unlike a simple password change, those kinds of flaws require an update to the router firmware, which, in turn, requires users to know about the problem and understand where to get—and how to apply—the code to fix it. This is certainly a problem we have not seen the last of.
New vulnerabilities
bugzilla: multiple vulnerabilities
Package(s): bugzilla
CVE #(s): CVE-2008-4437 CVE-2008-6098 CVE-2009-0481 CVE-2009-0482 CVE-2009-0483 CVE-2009-0484 CVE-2009-0485 CVE-2009-0486
Created: March 19, 2009
Updated: June 4, 2010

Description: Bugzilla has a number of vulnerabilities. From the Fedora alerts:

Directory traversal vulnerability in importxml.pl in Bugzilla before 2.22.5, and 3.x before 3.0.5, when --attach_path is enabled, allows remote attackers to read arbitrary files via an XML file with a .. (dot dot) in the data element. (CVE-2008-4437)

Bugzilla 3.2 before 3.2 RC2, 3.0 before 3.0.6, 2.22 before 2.22.6, 2.20 before 2.20.7, and other versions after 2.17.4 allows remote authenticated users to bypass moderation to approve and disapprove quips via a direct request to quips.cgi with the action parameter set to "approve." (CVE-2008-6098)

Bugzilla 2.x before 2.22.7, 3.0 before 3.0.7, 3.2 before 3.2.1, and 3.3 before 3.3.2 allows remote authenticated users to conduct cross-site scripting (XSS) and related attacks by uploading HTML and JavaScript attachments that are rendered by web browsers. (CVE-2009-0481)

Cross-site request forgery (CSRF) vulnerability in Bugzilla 3.2 before 3.2.1, 3.3 before 3.3.2, and other versions before 3.2 allows remote attackers to perform bug-updating activities as other users via a link or IMG tag to process_bug.cgi. (CVE-2009-0482)

Cross-site request forgery (CSRF) vulnerability in Bugzilla 2.22 before 2.22.7, 3.0 before 3.0.7, 3.2 before 3.2.1, and 3.3 before 3.3.2 allows remote attackers to delete keywords and user preferences via a link or IMG tag to (1) editkeywords.cgi or (2) userprefs.cgi. (CVE-2009-0483)

Cross-site request forgery (CSRF) vulnerability in Bugzilla 3.0 before 3.0.7, 3.2 before 3.2.1, and 3.3 before 3.3.2 allows remote attackers to delete shared or saved searches via a link or IMG tag to buglist.cgi. (CVE-2009-0484)

Cross-site request forgery (CSRF) vulnerability in Bugzilla 2.17 to 2.22.7, 3.0 before 3.0.7, 3.2 before 3.2.1, and 3.3 before 3.3.2 allows remote attackers to delete unused flag types via a link or IMG tag to editflagtypes.cgi. (CVE-2009-0485)

Bugzilla 3.2.1, 3.0.7, and 3.3.2, when running under mod_perl, calls the srand function at startup time, which causes Apache children to have the same seed and produce insufficiently random numbers for random tokens, which allows remote attackers to bypass cross-site request forgery (CSRF) protection mechanisms and conduct unauthorized activities as other users. (CVE-2009-0486)
compiz-fusion: screen lock bypass
Package(s): compiz-fusion
CVE #(s): CVE-2008-6514
Created: March 25, 2009
Updated: March 30, 2010

Description: Compiz-fusion allows local users to simply drag the screen saver out of the way, thus bypassing any associated screen lock.
drupal-cck: cross-site scripting
| Package(s): | drupal-cck | CVE #(s): | |||||||||
| Created: | March 23, 2009 | Updated: | March 25, 2009 | ||||||||
| Description: | From the Drupal advisory: The Node reference and User reference sub-modules, which are part of the Content Construction Kit (CCK) project, lets administrators define node fields that are references to other nodes or to users. When displaying a node edit form, the titles of candidate referenced nodes or names of candidate referenced users are not properly filtered, allowing malicious users to inject arbitrary code on those pages. Such a cross site scripting (XSS) attack may lead to a malicious user gaining full administrative access. | ||||||||||
ejabberd: cross-site scripting vulnerability
Package(s): ejabberd    CVE #(s): CVE-2009-0934
Created: March 19, 2009    Updated: April 17, 2009
Description: ejabberd has a cross-site scripting vulnerability.
From the Fedora alert:
Cross-site scripting (XSS) vulnerability in ejabberd before 2.0.4 allows remote attackers to inject arbitrary web script or HTML via unknown vectors related to links and MUC logs.
ffmpeg: unspecified vulnerabilities
Package(s): ffmpeg    CVE #(s): CVE-2008-4868 CVE-2008-4869
Created: March 20, 2009    Updated: December 7, 2009
Description: From the CVE entries:
Unspecified vulnerability in the avcodec_close function in libavcodec/utils.c in FFmpeg 0.4.9 before r14787, as used by MPlayer, has unknown impact and attack vectors, related to a free "on random pointers." FFmpeg 0.4.9, as used by MPlayer, allows context-dependent attackers to cause a denial of service (memory consumption) via unknown vectors, aka a "Tcp/udp memory leak."
ghostscript: integer overflows
Package(s): ghostscript    CVE #(s): CVE-2009-0583 CVE-2009-0584
Created: March 19, 2009    Updated: December 4, 2009
Description: Ghostscript has several integer overflow vulnerabilities.
From the Red Hat alert:
Multiple integer overflow flaws which could lead to heap-based buffer overflows, as well as multiple insufficient input validation flaws, were found in Ghostscript's International Color Consortium Format library (icclib). Using specially-crafted ICC profiles, an attacker could create a malicious PostScript or PDF file with embedded images which could cause Ghostscript to crash, or, potentially, execute arbitrary code when opened by the victim. (CVE-2009-0583, CVE-2009-0584)
jasper: insecure temp files
Package(s): jasper    CVE #(s): CVE-2008-3521
Created: March 20, 2009    Updated: April 19, 2010
Description: From the Ubuntu advisory: It was discovered that JasPer created temporary files in an insecure way. Local users could exploit a race condition and cause a denial of service in libjasper applications.
kernel: multiple ext4 denial of service vulnerabilities
Package(s): linux-2.6    CVE #(s): CVE-2009-0745 CVE-2009-0746 CVE-2009-0747 CVE-2009-0748
Created: March 23, 2009    Updated: September 16, 2009
Description: From the Debian advisory:
CVE-2009-0745: Peter Kerwien discovered an issue in the ext4 filesystem that allows local users to cause a denial of service (kernel oops) during a resize operation.
CVE-2009-0746: Sami Liedes reported an issue in the ext4 filesystem that allows local users to cause a denial of service (kernel oops) when accessing a specially crafted corrupt filesystem.
CVE-2009-0747: David Maciejak reported an issue in the ext4 filesystem that allows local users to cause a denial of service (kernel oops) when mounting a specially crafted corrupt filesystem.
CVE-2009-0748: David Maciejak reported an additional issue in the ext4 filesystem that allows local users to cause a denial of service (kernel oops) when mounting a specially crafted corrupt filesystem.
lcms: multiple vulnerabilities
Package(s): lcms    CVE #(s): CVE-2009-0581 CVE-2009-0723 CVE-2009-0733
Created: March 19, 2009    Updated: December 3, 2009
Description: lcms has three vulnerabilities.
From the Red Hat alert:
Multiple integer overflow flaws which could lead to heap-based buffer overflows, as well as multiple insufficient input validation flaws, were found in LittleCMS. An attacker could use these flaws to create a specially-crafted image file which could cause an application using LittleCMS to crash, or, possibly, execute arbitrary code when opened by a victim. (CVE-2009-0723, CVE-2009-0733) A memory leak flaw was found in LittleCMS. An application using LittleCMS could use excessive amount of memory, and possibly crash after using all available memory, if used to open specially-crafted images. (CVE-2009-0581)
libvirt: privilege escalation
Package(s): libvirt    CVE #(s): CVE-2009-0036
Created: March 19, 2009    Updated: March 25, 2009
Description: libvirt has a privilege escalation vulnerability.
From the Red Hat alert:
libvirt_proxy, a setuid helper application allowing non-privileged users to communicate with the hypervisor, was discovered to not properly validate user requests. Local users could use this flaw to cause a stack-based buffer overflow in libvirt_proxy, possibly allowing them to run arbitrary code with root privileges. (CVE-2009-0036)
muttprint: insecure temporary files
Package(s): muttprint    CVE #(s): CVE-2008-5368
Created: March 24, 2009    Updated: March 25, 2009
Description: From the Gentoo advisory: Dmitry E. Oboukhov reported an insecure usage of the temporary file "/tmp/muttprint.log" in the muttprint script. A local attacker could perform symlink attacks to overwrite arbitrary files with the privileges of the user running the application.
opensc: insufficient access restrictions
Package(s): opensc    CVE #(s): CVE-2009-0368
Created: March 19, 2009    Updated: June 1, 2009
Description: opensc has a vulnerability involving insufficient access restrictions on private data.
From the Red Hat alert:
OpenSC stores private data without proper access restrictions. User "b.badrignans" reported this security problem on December 4th, 2008. In June 2007 support for private data objects was added to OpenSC. Only later was a severe security bug found: while the OpenSC PKCS#11 implementation requires PIN verification to access the data, low level APDU commands or debugging tools like opensc-explorer or opensc-tool can access the private data without any authentication. This was fixed in OpenSC 0.11.7.
pam: denial of service, possible privilege escalation
Package(s): pam    CVE #(s): CVE-2009-0887
Created: March 23, 2009    Updated: May 31, 2011
Description: From the Mandriva advisory: Integer signedness error in the _pam_StrTok function in libpam/pam_misc.c in Linux-PAM (aka pam) 1.0.3 and earlier, when a configuration file contains non-ASCII usernames, might allow remote attackers to cause a denial of service, and might allow remote authenticated users to obtain login access with a different user's non-ASCII username, via a login attempt (CVE-2009-0887).
postgresql: denial of service
Package(s): postgresql    CVE #(s): CVE-2009-0922
Created: March 23, 2009    Updated: November 2, 2009
Description: From the Red Hat bugzilla: A stack overflow was found in how PostgreSQL handles conversion encoding. This could allow an authenticated user to kill connections to the PostgreSQL server for a small amount of time, which could interrupt transactions by other users/clients.
seamonkey: multiple vulnerabilities
Package(s): seamonkey    CVE #(s): none
Created: March 25, 2009    Updated: April 14, 2009
Description: SeaMonkey 1.1.15 contains fixes for a number of security issues.
thunderbird: multiple vulnerabilities
Package(s): thunderbird    CVE #(s): none
Created: March 25, 2009    Updated: March 25, 2009
Description: A number of security issues, generally involving memory corruption, have been fixed in the Thunderbird 2.0.0.21 release.
webcit: format string vulnerability
Package(s): webcit    CVE #(s): CVE-2009-0364
Created: March 24, 2009    Updated: March 25, 2009
Description: From the Debian advisory: Wilfried Goesgens discovered that WebCit, the web-based user interface for the Citadel groupware system, contains a format string vulnerability in the mini_calendar component, possibly allowing arbitrary code execution.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The 2.6.29 kernel is out, released by Linus on March 23. For those just tuning in, some of the most significant features of 2.6.29 include the Btrfs filesystem (still very much in an experimental mode), the squashfs filesystem, kernel mode setting for Intel graphics adapters, task credentials, WiMAX support, the filesystem freeze feature, and much more; see the KernelNewbies 2.6.29 page for all the details. As of this writing, merging of changes for 2.6.30 has not yet begun.
The 2.6.27.21 and 2.6.28.9 stable kernel updates were released on March 23. Both contain a long list of fixes for bugs in the USB subsystem, i915 graphics driver, device mapper, and sound subsystems (and beyond).
Kernel development news
Quotes of the week
In terms of development methodology and tools, in fact i claim that the kernel workflow and style of development can be applied to most user-space software projects with great success.
NetworkManager is both the carrot and the stick. If NM just worked around broken stuff and proprietary drivers, it would be a hacktower of doom and we may still be stuck largely in 2006-wireless land.
I've been using [ext4] on my laptop since July, and haven't lost significant amounts of data yet.
The return of utrace
An in-kernel tracing infrastructure for user-space code, utrace, has long been in a kind of pending state; it has shipped in every Fedora kernel since Fedora Core 6, and has done some time in the -mm tree, but it has never gotten into the mainline. That may now be changing, given a recent push for inclusion of the core utrace code. There are some lingering questions about including utrace, at least for 2.6.30, because the patchset doesn't add any in-kernel user of the interface.
Utrace grew out of Roland McGrath's work on maintaining the ptrace() system call. That call is used by user-space programs to do things like trace system calls using strace, but it is also used in less obvious ways—to implement user-mode-linux (UML) for example. While ptrace() has generally sufficed, it is, by all accounts, a rather ugly and flawed interface both for kernel hackers to maintain and for developers to use. McGrath described the genesis of utrace in a recent linux-kernel post:
Basically, utrace implements a framework for controlling user-space tasks. It provides an interface that can be used by various tracing "engines", implemented as loadable kernel modules, that wish to be notified of events that occur on threads of interest. As might be expected, engines register callback functions for specific events, then attach to whichever thread they wish to trace.
The callbacks are made from "safe" places in the kernel, which allows the functions great leeway in the kinds of processing they can do. No locks are held when the callbacks are made, so they can block for a short time (in calls like kmalloc()), but they shouldn't block for long periods. Doing so risks keeping the SIGKILL signal from working properly. If the callback needs to wait for I/O or block on some other long-running activity, it should stop the execution of the thread and return, then resume the thread when the operation completes.
There are various events that can be watched via utrace: system call entry and exit, fork(), signals being sent to the task, etc. Single-stepping through a task being traced can also be handled via utrace. One of the benefits that utrace provides, which ptrace() lacks, is the ability to have multiple engines tracing the same task. Utrace is well documented in the DocBook manual included with the patch.
LWN first looked at utrace
just over two years ago, but, since then, it has largely disappeared from
view. Reimplementing ptrace() using utrace is
certainly one of the goals, but the current patches do not do that. But,
there is a fundamental disagreement between McGrath and other kernel
hackers about whether utrace can be merged without it. The problem is that
there is no in-tree user of the new interface, and, as Ted Ts'o put it, "we need to have a user for the kernel interface along with the new kernel interface".
The proposed utrace patchset consists of a small patch to clean up some of
the tracehook functionality, a large 4000 line patch that implements the
utrace core, and another patch that adds an ftrace tracer that is based on
utrace event handling. The latter, implemented by SystemTap
developer Frank Eigler, would provide an in-tree user of the new utrace
code, but received a rather chilly response
from Ingo Molnar: "[...] without the
ftrace plugin the
whole utrace machinery is just something that provides a _ton_ of
hooks to something entirely external: SystemTap mainly.
"
Therein lies one of the main concerns expressed about utrace. The
utrace-ftrace interface is not seen as a real user of utrace, more of a
"big distraction
", as Andrew Morton called it. The worry is that adding utrace
just makes it easier to keep SystemTap out of the mainline. While the
kernel hackers have some serious reservations about the specifics of the
SystemTap implementation, they would like to see it head towards the
mainline. The fear is that by merging things like utrace, it may enable
SystemTap to stay out of the mainline that much longer. Molnar posted his take on the issue, concluding:
In addition, Molnar is not pleased that the utrace changes haven't been reviewed by the ftrace developers and were submitted just as the merge window for 2.6.30 is about to open. He believes that McGrath, Eigler, and the other utrace developers should be working with the ftrace team:
The ftrace/utrace plugin is the only real connection utrace has to the mainline kernel, so proper review by the tracing folks and cooperation with the tracing folks is very much needed for the whole thing.
But McGrath sees things rather differently. From his perspective, utrace has enough usefulness in its own right—not primarily as just a piece of SystemTap—to be considered for the mainline. Several different uses for utrace, in addition to the ptrace() cleanup, were mentioned in the thread: kmview, a kernel module for virtualization; uprobes for DTrace-style user-space probing; changing UML to use utrace directly, rather than ptrace(); and more. Eigler also defended utrace as a standalone feature:
Molnar would like to see the "rewrite-ptrace-via-utrace" patch included before merging utrace. That would give the facility a solid in-kernel user, which could be used by other kernel developers to test and debug utrace. But, McGrath is not yet ready to submit that code:
In some ways, the association with SystemTap is unfairly coloring the reaction to utrace. Molnar posted an excellent summary of the issues that stop him (and other kernel hackers) from using SystemTap—along with some possible solutions—but utrace and SystemTap aren't equivalent. It may not make sense to merge utrace without a serious in-kernel user of the interface, but most of the rest of the arguments have been about SystemTap, not utrace. As McGrath puts it:
It remains to be seen whether utrace will make its way into 2.6.30 or not. Linus Torvalds was unimpressed with utrace dominating Fedora kerneloops.org reports, as relayed by Molnar—though the bug that caused those problems has been long fixed. McGrath sees value in merging utrace before the ptrace() rewrite is ready, while other kernel developers do not. If utrace misses this merge window, it would seem likely that it will return for 2.6.31, along with the rewrite; at that point merging would seem quite likely.
Union file systems: Implementations, part I
In last week's article, I reviewed the use cases, basic concepts, and common design problems of unioning file systems. This week, I'll describe several implementations of unioning file systems in technical detail. The unioning file systems I'll cover in this article are Plan 9 union directories, BSD union mounts, and Linux union mounts. The next article will cover unionfs, aufs, and possibly one or two other unioning file systems, and wrap up the series. For each file system, I'll describe its basic architecture, features, and implementation. The discussion of the implementation will focus in particular on whiteouts and directory reading. I'll wrap up with a look at the software engineering aspects of each implementation; e.g., code size and complexity, invasiveness, and burden on file system developers.
Before reading this article, you might want to check out Andreas
Gruenbacher's just published write-up of
the union mount workshop
held last November. It's a good summary of the unioning file systems
features which are most pressing for distribution developers. From
the introduction: "All of the use cases we are interested in basically
boil down to the same thing: having an image or filesystem that is
used read-only (either because it is not writable, or because writing
to the image is not desired), and pretending that this image or
filesystem is writable, storing changes somewhere else.
"
Plan 9 union directories
The Plan 9 operating system (browseable source code here) implements unioning in its own special Plan 9 way. In Plan 9 union directories, only the top-level directory namespace is merged, not any subdirectories. Unconstrained by UNIX standards, Plan 9 union directories don't implement whiteouts and don't even screen out duplicate entries - if the same file name appears in two file systems, it is simply returned twice in directory listings. A Plan 9 union directory is created like so:
bind -a /home/val/bin/ /bin
This would cause the directory /home/val/bin to be union
mounted "after" (the -a option) /bin; other
options are to place the new directory before the existing directory,
or to replace the existing directory entirely. (This seems an odd
ordering to me, since I like commands in my personal bin/
to take precedence over the system-wide commands, but that's the
example from the Plan 9 documentation.) Brian Kernighan
explains one
of the uses of union directories: "This mechanism of union directories replaces the search path of conventional UNIX shells. As far as you are concerned, all executable programs are in /bin." Union directories can theoretically replace many uses of the fundamental UNIX building blocks of symbolic links and search paths.
Without whiteouts or duplicate elimination, readdir() on
union directories is trivial to implement. Directory entry offsets
from the underlying file system correspond directly to the offset in
bytes of the directory entry from the beginning of the directory. A
union directory is treated as though the contents of the underlying
directories are concatenated together.
Plan 9 implements an alternative to readdir() worth
noting, dirread().
dirread() returns structures of type Dir,
described in the stat()
man page. The important part of the Dir is
the Qid member. A Qid is:
So why is this interesting? One of the
reasons readdir() is such a pain to implement is that it
returns the d_off member of struct dirent, a
single off_t (32 bits unless the application is compiled
with large file support), to mark the directory entry where an
application should continue reading on the next readdir()
call. This works fine as long as d_off is a simple byte
offset into a flat file of less than 2^32 bytes and existing directory
entries are never moved around - not the case for many modern file
systems (XFS, btrfs, ext3 with htree indexes). The
96-bit Qid is a much more useful place marker than the 32
or 64-bit off_t. For a good summary of the issues involved in
implementing readdir(),
read Theodore
Y. Ts'o's excellent post on the topic to the btrfs mailing list.
From a software engineering standpoint, Plan 9 union directories are heavenly. Without whiteouts, duplicate entry elimination, complicated directory offsets, or merging of namespaces beyond the top-level directory, the implementation is simple and easy to maintain. However, any practical implementation of unioning file systems for Linux (or any other UNIX) would have to solve these problems. For our purposes, Plan 9 union directories serve primarily as inspiration.
BSD union mounts
BSD implements two forms of unioning: the "-o union"
option to the mount command, which produces a union
directory similar to Plan 9's, and the mount_unionfs
command, which implements a more full-featured unioning file system
with whiteouts and merging of the entire namespace. We will focus on
the latter.
For this article, we use two sources for specific implementation
details: the original BSD union mount implementation as described in
the 1995 USENIX paper
Union
mounts in 4.4BSD-Lite [PS], and
the FreeBSD
7.1 mount_unionfs man page and source code. Other
BSDs may vary.
A directory can be union mounted either "below" or "above" an existing
directory or union mount, as long as the top branch of a writable
union is writable. Two modes of whiteouts are supported: either a
whiteout is always created when a directory is removed, or it is only
created if another directory entry with that name currently exists in
a branch below the writable branch. Three modes for setting the
ownership and mode of copied-up files are supported. The simplest is
transparent, in which the new file keeps the same owner
and mode of the original. The masquerade mode makes
copied-up files owned by a particular user and supports a set of
mount options for determining the new file mode.
The traditional mode sets the owner to the user who ran
the union mount command, and sets the mode according to the umask at
the time of the union mount.
Whenever a directory is opened, a directory of the same name is created on the top writable layer if it doesn't already exist. From the paper:
As a result, a "find /union" will result in copying every
directory (but not directory entries pointing to non-directories) to
the writable layer. For most file system images, this will use a
negligible amount of space (less than, e.g., the space reserved for
the root user, or that taken up by unused inodes in an FFS-style file
system).
A file is copied up to the top layer when it is opened with write permission or the file attributes are changed. (Since directories are copied over when they are opened, the containing directory is guaranteed to already exist on the writable layer.) If the file to be copied up has multiple hard links, the other links are ignored and the new file has a link count of one. This may break applications that use hard links and expect modifications through one link name to show up when referenced through a different hard link. Such applications are relatively uncommon, but no one has done a systematic study to see which applications will fail in this situation.
Whiteouts are implemented with a special directory entry
type, DH_WHT. Whiteout directory entries don't refer to
any real inode, but for easy compatibility with existing file system
utilities such as fsck, each whiteout directory entry
includes a faux inode number, the WINO reserved whiteout
inode number. The underlying file system must be modified to support
the whiteout directory entry type. New directories that replace a
whiteout entry are marked as opaque via a new "opaque" inode attribute
so that lookups don't travel through them (again requiring minimal
support from the underlying file system).
Duplicate directory entries and whiteouts are handled in the userspace
readdir() implementation. At opendir()
time, the C library reads the directory all at once, removes
duplicates, applies whiteouts, and caches the results.
BSD union mounts don't attempt to deal with changes to branches below
the writable top branch (although they are permitted). The
way rename() is handled is not described.
An example from the mount_unionfs man page:
The commands
mount -t cd9660 -o ro /dev/cd0 /usr/src
mount -t unionfs -o noatime /var/obj /usr/src
mount the CD-ROM drive /dev/cd0 on /usr/src and then attaches /var/obj on
top. For most purposes the effect of this is to make the source tree
appear writable even though it is stored on a CD-ROM. The -o noatime
option is useful to avoid unnecessary copying from the lower to the upper
layer.
Another example (noting that I believe source control is best
implemented outside of the file system):
The command
mount -t unionfs -o noatime -o below /sys $HOME/sys
attaches the system source tree below the sys directory in the user's
home directory. This allows individual users to make private changes to
the source, and build new kernels, without those changes becoming visible
to other users.
Linux union mounts
Like BSD union mounts, Linux union mounts implement file system unioning in the VFS layer, with some minor support from underlying file systems for whiteouts and opaque directory tags. Several versions of these patches exist, written and modified by Jan Blunck, Bharata B. Rao, and Miklos Szeredi, among others.
One version of this code merges the top-level directories only,
similar to Plan 9 union directories and the BSD -o union
mount option. This version of union mounts, which I refer to as union
directories, is described in some detail in a
recent LWN article by
Goldwyn Rodrigues and
in Miklos Szeredi's recent
post of an updated patch set. For the remainder of this article,
we will focus on versions of union mount that merge the full
namespace.
Linux union mounts are currently under active development. This article describes the version released by Jan Blunck against Linux 2.6.25-mm1, util-linux 2.13, and e2fsprogs 1.40.2. The patch sets, as quilt series, can be downloaded from Jan's ftp site:
Kernel patches: ftp://ftp.suse.com/pub/people/jblunck/patches/
Utilities: ftp://ftp.suse.com/pub/people/jblunck/union-mount/
I have created a web page with links to git versions of the above patches and some HOWTO-style documentation at http://valerieaurora.org/union.
A union is created by mounting a file system with
the MS_UNION flag
set. (The MS_BEFORE, MS_AFTER,
and MS_REPLACE flags are defined in the mount code
base but not currently used.) If the MS_UNION flag is
specified, then the mounted file system must either be read-only or
support whiteouts. In this version of union mounts, the union mount
flag is specified by the "-o union" option
to mount. For example, to create a union of two loopback
device file systems, /img/ro and /img/rw, you would run:
# mount -o loop,ro,union /img/ro /mnt/union/
# mount -o loop,union /img/rw /mnt/union/
Each union mount creates a struct union_mount:
struct union_mount {
atomic_t u_count; /* reference count */
struct mutex u_mutex;
struct list_head u_unions; /* list head for d_unions */
struct hlist_node u_hash; /* list head for searching */
struct hlist_node u_rhash; /* list head for reverse searching */
struct path u_this; /* this is me */
struct path u_next; /* this is what I overlay */
};
As described
in Documentation/filesystems/union-mounts.txt, "All
union_mount structures are cached in two hash tables, one for lookups
of the next lower layer of the union stack and one for reverse lookups
of the next upper layer of the union stack."
Whiteouts and opaque directories are implemented in much the same way
as in BSD. The underlying file system must explicitly support whiteouts
by defining the .whiteout inode operation for directories
(currently, whiteouts are only implemented for ext2, ext3, and tmpfs).
The ext2 and ext3 implementations use the whiteout directory entry
type, DT_WHT, which has been defined
in include/linux/fs.h for years but not used outside of
the Coda file system until now. A reserved whiteout inode
number, EXT3_WHT_INO, is defined but not yet used;
whiteout entries currently allocate a normal inode. A new inode
flag, S_OPAQUE, is defined to mark directories as opaque.
As in BSD, directories are only marked opaque when they replace a
whiteout entry.
Files are copied up when the file is opened for writing. If necessary, each directory in the path to the file is copied to the top branch (copy-on-demand of directories). Currently, copy up is only supported for regular files and directories.
readdir() is one of the weakest points of the current
implementation. It is implemented the same way as BSD union mount
readdir(), but in the kernel. The d_off
field is set to the offset within the current underlying directory,
minus the sizes of the previous directories. Directory entries from
directories underneath the top layer must be checked against previous
entries for duplicates or whiteouts. As currently implemented,
each readdir() (technically, getdents())
system call reads all of the previous directory entries into an
in-kernel cache, then compares each entry to be returned with those
already in the cache before copying it to the user buffer. The end
result is that readdir() is complex, slow, and
potentially allocates a great deal of kernel memory.
One solution is to take the BSD approach and do the caching, whiteout,
and duplicate processing in userspace. Bharata B. Rao
is designing
support for union mount readdir() in glibc.
(The POSIX standard permits readdir() to be implemented
at the libc level if the bare kernel system call does not fulfill all
the requirements.) This would move the memory usage into the
application and make the cache persistent. Another solution would be
to make the in-kernel cache persistent in some way.
My suggestion is to take a technique from BSD union mounts and extend it: proactively copy up not just directory entries for directories, but all of the directory entries from lower file systems, process duplicates and whiteouts, make the directory opaque, and write it out to disk. In effect, you are processing the directory entries for whiteouts and duplicates on the first open of the directory, and then writing the resulting "cache" of directory entries to disk. The directory entries pointing to files on the underlying file systems need to signify somehow that they are "fall-through" entries (the opposite of a whiteout - it explicitly requests looking up an object in a lower file system). A side effect of this approach is that whiteouts are no longer needed at all.
One problem that needs to be solved with this approach is how to
represent directory entries pointing to lower file systems. A number
of solutions present themselves: the entry could point to a reserved
inode number, the file system could allocate an inode for each entry
but mark it with a new S_LOOKOVERTHERE inode attribute,
it could create a symlink to a reserved target, etc. This approach
would use more space on the overlying file system, but all other
approaches require allocating the same space in memory, and generally
memory is more dear than disk.
A less pressing issue with the current implementation is that inode numbers are not stable across boot (see the previous unioning file systems article for details on why this is a problem). If "fall-through" directories are implemented by allocating an inode for each directory entry on underlying file systems, then stable inode numbers will be a natural side effect. Another option is to store a persistent inode map somewhere - in a file in the top-level directory, or in an external file system, perhaps.
Hard links are handled - or, more accurately, not handled - in the same way as BSD union mounts. Again, it is not clear how many applications depend on modifying a file via one hard-linked path and seeing the changes via another hard-linked path (as opposed to symbolic link). The only method I can come up with to handle this correctly is to keep a persistent cache somewhere on disk of the inodes we have encountered with multiple hard links.
Here's an example of how it would work: Say we start a copy up for
inode 42 and find that it has a link count of three. We would create an
entry for the hard link database that includes the file system id, the
inode number, the link count, and the inode number of the new copy on
the top level file system. It could be stored in a file in CSV
format, or as a symlink in a reserved directory in the root directory
(e.g., "/.hardlink_hack/<fs_id>/42", which is a
link to "<new_inode_num> 3"), or in a real
database. Each time we open an inode on an underlying file system, we
look it up in our hard link database; if an entry exists, we decrement
the link count and create a hard link to the correct inode on the new
file system. When all of the paths are found, the link count drops to
one and the entry can be deleted from the database. The nice thing
about this approach is that the amount of overhead is bounded and will
disappear entirely when all the paths to the relevant inodes have been
looked up. However, this still introduces a significant amount of
possibly unnecessary complexity; the BSD implementation shows that
many applications will happily run with not-quite-POSIXLY-correct hard
link behavior.
Currently, rename() of directories across branches
returns EXDEV, the error for trying to rename a file
across different file systems. User space usually handles this
transparently (since it already has to handle this case for
directories from different file systems) and falls back to copying the
contents of the directory over one by one. Implementing
recursive rename() of directories across branches in the
kernel is not a bright idea for the same reasons as rename across
regular file systems; probably returning EXDEV is the
best solution.
From a software engineering point of view, union mounts seem to be a
reasonable compromise between features and ease of maintenance. Most
of the VFS changes are isolated into fs/union.c, a file
of about 1000 lines. About 1/3 of this file is the
in-kernel readdir() implementation, which will almost
certainly be replaced by something else before any possible merge.
The changes to underlying file systems are fairly minimal and only
needed for file systems mounted as writable branches. The main
obstacle to merging this code is the readdir()
implementation. Otherwise, file system maintainers have been
noticeably more positive about union mounts than any other unioning
implementation.
A nice summary of union mounts can be found in Bharata B. Rao's union mount slides for FOSS.IN [PDF].
Coming next
In the next article, we'll review unionfs and aufs, and compare the various implementations of unioning file systems for Linux. Stay tuned!
Nftables: a new packet filtering engine
Packet filtering and firewalling have a long history in Linux. The first filtering mechanism, called "ipfwadm," was released in 1995 for the 1.2.1 kernel. This code was used until the 2.2.0 stable release (January, 1999), when the new "ipchains" module took over. While ipchains was useful, it only lasted until 2.4.0 (January, 2001), when it, too, was replaced by iptables/netfilter, which remains in the kernel today. If netfilter maintainer Patrick McHardy has his way, though, iptables, too, will eventually be gone, replaced by yet another mechanism called "nftables." This article will give an overview of how nftables works, followed by a discussion of the motivations behind this change.
The first public nftables release came out on March 18. This code has been in the works for a while, though, and the ideas were discussed at the 2008 Netfilter Workshop. So nftables is not quite as new as it might seem.
The current iptables code has a lot of protocol awareness built into it. There is, for example, a module dedicated to extracting port numbers from UDP packets which is different from the module concerned with TCP packets. The nftables implementation is entirely different; there is no protocol knowledge built into it at all. Instead, nftables is implemented as a simple virtual machine which interprets code loaded from user space. So nftables has no operation which says anything like "compare the IP destination address to 196.168.0.1"; instead, it would execute code which looks like:
payload load 4 offset network header + 16 => reg 1
compare reg 1 192.168.0.1
(Patrick presents the code in mnemonic form, and your editor will do the same; the actual code loaded into the kernel uses opcodes instead). The first line loads four bytes from the packet, located 16 bytes past the beginning of the network header, into register 1. The second line then compares that register against the given network address.
The language can do a lot more than just comparing addresses, of course. There is, for example, a set lookup feature. Consider the following:
payload load 4 offset network header + 16 => reg 1
set lookup reg 1 load result in verdict register
{ "192.168.0.1" : jump chain1,
"192.168.0.2" : drop,
"192.168.0.3" : jump chain2 }
This code will cause packets aimed at 192.168.0.2 to be dropped; for the other two listed addresses, control will be sent to specific rule chains. This set feature allows for multi-branch rules in a way which cannot be done with the current iptables implementation (though the ipset mechanism helps in that regard). The above code also introduces the "verdict register," which records an action to be performed on a packet. In nftables, more than one verdict can be rendered on a packet; it is possible to add a packet to a specific counter, log it, and drop it all in a single chain without the need (as seen in iptables) to repeat tests.
There are a number of other capabilities built into the nftables virtual machine. There's a set of operations for communicating with the connection-tracking mechanism, allowing connection information to be used in deciding the fate of specific packets. Other operators deal with various bits of packet metadata known to the networking subsystem; these include the length, the protocol type, security mark information, and more. Operators exist for logging packets and incrementing counters. There's also a full set of comparison operations, of course.
Network administrators are unlikely to be impressed by the idea of programming a low-level virtual machine for their future firewalling needs. The good news is that there will be no need for them to do so. Instead, they'll write higher-level rules which will then be compiled into virtual machine code before being loaded into the kernel. The nftables utility does this work, implementing a human-readable language encapsulating most of the needed information about how packets are put together. So, if we look back to the first test described above:
payload load 4 offset network header + 16 => reg 1
compare reg 1 192.168.0.1
The administrator would simply write "ip daddr 192.168.0.1" and let nftables turn that into the above code. A full (if simple) rule looks something like this:
rule add ip filter output ip daddr 192.168.0.1 counter
This rule will count packets sent to 192.168.0.1.
The new nftables API is based on netlink, naturally. Unlike the current iptables API, it has the ability to modify individual rules without the need to reload the entire configuration. There is also a decompilation facility built into nftables that allows the recreation of human-readable rules from the current in-kernel configuration.
All told, it looks like a nicely-designed packet filtering mechanism, but the merging of nftables is likely to be controversial. The iptables mechanism works well, and is widely used; replacing it with code which breaks the user-space API and breaks all existing iptables configurations is guaranteed to raise some eyebrows. This could be a disruptive and expensive transition, even if, as seems necessary, the developers commit to maintaining both iptables and nftables in the mainline for an extended period of time. The kernel development community will want to see some very good reasons for inflicting this pain on its users.
There are some good reasons, but one should start by noting that it should be possible to create a tool which reads current iptables configurations and converts them to the nftables language - or even directly to kernel virtual machine code. Patrick seems to expect to create such a tool One Of These Days, but it does not exist at this time.
Some of the reasons for replacing iptables have already been hinted at above. The protocol knowledge built into the iptables code has turned out to be a problem over time; there is a lot of duplicated code doing the same thing (extracting port numbers, say) for different protocols. Even worse, the capabilities and syntax tend to vary from one protocol to the next. By moving all of that knowledge out to user space, nftables greatly simplifies the in-kernel code and allows for much more consistent treatment of all protocols.
There are a lot of optimization possibilities built into the new system. Some expensive operations (incrementing counters, for example) can be skipped unless the user really needs them. Features like set lookups and range mapping can collapse a whole set of iptables rules into a single nftables operation. Since filtering rules are now compiled, there is also potential for the compiler to optimize the rules further. Traditional firewall configurations tend to perform the same tests repeatedly; a smart nftables compiler could eliminate much of that duplicated work. Unsurprisingly, this optimization remains on the "to do" list for now, but the fact that all of this work is done in user space will make it easy to add such features in the future.
The nftables tool will also be able to perform a higher level of validation on the rules it is given, and it will be able to provide more useful diagnostics than can be had from the iptables code.
But, arguably, the most important motivation is the ability to dump the current ABI. The iptables ABI has become an increasing impediment to development over time. It includes protocol-specific fields which have made it hard to extend; that is part of why there are actually three copies of the iptables code in the kernel: when developers wanted to implement arptables and ebtables, they essentially had to copy the code and bang it into a new, protocol-specific shape. Patrick estimates that, even after four years of unification work, the kernel contains some 10,000 lines of duplicated filtering code. Beyond that, the structures used in the ABI are also used directly in the kernel's internal representation, making that implementation even harder to change. Separating the two would be possible through the addition of a translation layer, but the details involved (including the need to translate in both directions) increase the risk of adding subtle problems. In summary, the iptables ABI has become a serious impediment to further progress in packet filtering.
Nftables is a chance to dump all of that code and replace it with a much smaller filtering core which should prove to be quite a bit more flexible. With any luck, nftables should last a long time; the virtual machine can be extended in unexpected ways without the need to break the user-space ABI (again). Its smaller size should make it well suited to small router deployments, while its lockless design should appeal to administrators of high-end systems. All told, chances are good that the larger community will eventually see this change as being worthwhile. But not for a while: there are some unfinished pieces in nftables, and the larger discussion has not yet begun.
(For more information, see this weblog posting from August, 2008 and the slides from Patrick's presentation [ODF] at the Netfilter Workshop).
Patches and updates
Kernel trees
Build system
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Janitorial
Memory management
Networking
Security-related
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Moblin 2 Core Alpha
These days it looks like every major Linux distribution is trying to slim down its boot time: a faster boot-up is one of the main goals of Ubuntu 9.04, and so-called 'fastboot' systems such as HyperSpace and Splashtop are becoming mainstream as PC vendors preinstall them on mainboards. The Intel-sponsored Moblin project is part of the same evolution. Nevertheless, there's a fundamental difference: while fastboot solutions offer minimal functionality and are meant for those who would like to read their Gmail account without waiting for Windows to boot, Moblin aims to be a full-fledged distribution which boots in seconds.
The unique selling point of the recently released Moblin 2 alpha is clearly the read-ahead boot technology by Intel. The release shows an impressive boot time: on an Acer Aspire One with SSD the Moblin 2 alpha boots in 6 seconds from the GRUB menu to the Xfce desktop (with autologin enabled). Other distributions will surely borrow this technology in the future. For example, the Netbook Edition of Ubuntu 9.10 ("Karmic Koala") will include Moblin's fastboot technology; Linpus and Mandriva are also planning to build on Moblin. In addition, at the beginning of this month, embedded Linux company MontaVista announced a Moblin-based Linux platform, as its competitor Wind River did last year.
The Moblin platform
Moblin 2 alpha is more a technology showcase and platform than yet another Linux distribution. Moblin 2 is not based on any single distribution; it borrows parts from various others, leaning most heavily on Fedora through its use of RPM package management and other Fedora tools. The Moblin toolchain comes from openSUSE.
Moblin Core, the heart of the Moblin platform, provides a base that can be shared for platform-specific implementations, such as netbooks, MID's and even in-vehicle systems. It is built on GNOME Mobile and extended with Intel's fastboot and power saving technologies. Intel engineers have also sent patches to Xfce to improve the startup time of the graphical session.
Moblin 2 alpha uses a kernel version named 2.6.29.rc2-13.1.moblin2-netbook. It supports Intel Atom and Intel Core 2 CPUs. Moblin 2 is reported to work on the Acer Aspire One, Asus Eee PC 901, Dell Mini 9 and MSI Wind. Your author was delighted to see wireless networking work out-of-the-box on his Acer Aspire One.
Moblin 2 can be tried out easily on a MID or netbook. Just download the Moblin live image, copy it with dd to a USB pen drive and boot from it. If you install Moblin on your netbook's SSD or hard drive, what you get is fairly minimal: the Minefield (the future Firefox 3.5) web browser, the Thunar file manager, the Totem movie player, the Mousepad text editor, the Pimlico suite of PIM applications, a terminal, and some other tools.
The graphical interface is based on the Xfce desktop environment but, according to Intel, this is a placeholder which will be replaced in the final release. Moblin 2 doesn't use GNOME's NetworkManager; instead it uses the Linux Connection Manager, which consists of the lightweight connman daemon and the connman-gnome applet. The project is specifically designed to run on embedded devices with limited resources.
Using the alpha version for day-to-day work is not recommended: error messages scroll by on VT 1 and many things don't work yet. For example, choosing Quit in the Xfce menu doesn't halt the machine, but restarts X. Because it's an alpha version, and because Moblin is more a platform than a distribution, it's not fair to attach too much importance to these errors. Actually, there are only two reasons to use Moblin 2 alpha: to play with the bleeding-edge fastboot technology, or to build your own Moblin-based distribution.
Build your own Moblin
As Moblin is targeted at distribution builders, there's a toolkit to build your own Moblin-based distribution: Moblin Image Creator 2 (MIC2), which is based primarily on the Fedora live CD tools. MIC2 automates the creation of installation media, such as an ISO image or an image for a USB pen drive. You can create a project and a target, customize the target with specific packages, then create an image. You can specify different repositories, such as Ubuntu, openSUSE, and Fedora; MIC2 is a generic tool that can create images from any yum or apt package repository, so applications can be packaged as rpm or deb files. Thus, MIC2 makes it possible to build a full-fledged distribution which goes far beyond the standard Moblin application set.
Conclusion
The Moblin 2 alpha release is a good showcase of what we can expect from netbook-targeted Linux distributions in 2009. Intel's fastboot technology, the Linux Connection Manager, and the Moblin Image Creator form a good base platform that will make distributors' and netbook makers' lives a lot easier. If these parties pick it up, the lives of netbook users will also be much easier by the end of this year.
New Releases
Novell Ships SUSE Linux Enterprise 11
Novell has announced the availability of SUSE Linux Enterprise 11 in server (SLES), desktop (SLED), and JeOS (Just enough Operating System) editions. "Later this year, Novell plans to release the next version of SUSE Linux Enterprise Real Time Extension, which will leverage the SUSE Linux Enterprise 11 code base to reduce latency and increase predictability and reliability of time-sensitive, mission-critical applications."
Distribution News
Debian GNU/Linux
Bits from the Debian Pure Blends Team
The Debian Pure Blends team has announced that the process of renaming Custom Debian Distributions to Debian Pure Blends is now regarded as finished. "The package which was used to build the metapackages of each Blend was renamed from cdd-dev to blends-dev but there will be a compatibility wrapper package cdd-dev to make migration easy for each single Blend. The package is currently sitting in experimental for testing purposes and the blends metapackages of Debian Med, Debian Science and Debian Jr. are there as well. An upload to unstable will follow soon."
Fedora
Fedora Board Recap 2009-03-24
Click below for a recap of the Fedora Advisory Board meeting held on March 24th. Topics include Involvement of the Board in Future Security Incidents, Contributions from Embargoed Nations, and What is Fedora.
Fedora Board Recap 2009-03-17
Click below for a recap of the March 17 meeting of the Fedora Advisory Board. Topics include Contributions from Embargoed Nations, What is Fedora, Involvement of the Board in Future Security Incidents and Board Transparency.
Gentoo Linux
Gentoo Council summary for meeting on 12 March
A summary (click below) of the March 12 meeting of the Gentoo Council is out. Topics include EAPI-3 Proposals, Technical Agenda Items and Open Floor.
SUSE Linux and openSUSE
openSUSE Build Service 1.5
Version 1.5 of the openSUSE Build Service has been announced. It's not just for building packages anymore. "The 1.5 release makes it possible to build entire releases within the build service, and export ISO images and FTP trees."
Planet SUSE DNS Troubles
Stephan Binner reported a problem with the Planet SUSE Domain Name Server. Planet SUSE can still be reached at planet.opensu.se.
Ubuntu family
Ubuntu 7.10 reaches end-of-life on April 18, 2009
Ubuntu 7.10 "Gutsy Gibbon" will reach its end-of-life on April 18, 2009. "At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 7.10. The supported upgrade path from Ubuntu 7.10 is via Ubuntu 8.04 LTS. Instructions and caveats for the upgrade may be found at https://help.ubuntu.com/community/HardyUpgrades."
New Distributions
Igelle PC/Desktop
Igelle PC/Desktop is a new independent project providing a graphical desktop operating system for Intel (x86) compatible personal computers, including desktop computers, laptops, netbooks, and so on. It features the usual applications and features found in modern desktop operating systems/environments, in a lightweight configuration. The source release can be used to build custom distributions or images. Igelle joined the list with the release of v0.6.0 dated March 18, 2009.
Distribution Newsletters
Ubuntu Weekly Newsletter #134
The Ubuntu Weekly Newsletter for the week ending March 21, 2009 is out. "In this issue we cover: Ubuntu 9.04 Beta Freeze in effect, LoCo Team information request, Ubuntu Server: KVM call for testing, MOTU Release Charter, QA Team next testing day, Ubuntu Drupal 6.3.0 released, Ubuntu India re-launches User Forums, Ubuntu Honduras begins to work, FossConf 2009 - Madurai and Ubuntu Tamil Team, Announcing Eucalyptus, Ubuntu Forums nuts and bolts, Daniel Holbach: Time to Party, Soren Hansen: gtk-vnc and virt-viewer mozilla plug-in, Thierry Carrez: What I want Ubuntu Server to be, What is Qimo?, Ubuntu Podcast #22, Server Team Minutes: March 17th, QA Team Minutes: March 18th, Behind MOTU Interview: Roderick Greening, and much, much more!"
openSUSE Weekly News, Issue #64
This issue of the openSUSE Weekly News covers openSUSE Build Service 1.5 Announced, Gabriel Stein: SuSE-Studio - Quick and Easier, Joe Brockmeier: openSUSE Project Accepted to Google Summer of Code 2009, mendesdomnic: Package Management Quick Reference, Survey: Is openSUSE Developer Friendly? and more.
The Mint Newsletter - issue 79
This issue of the Mint Newsletter covers News about Mint mintCast - Episode 9, Linux Mint 4.0 Daryna reaches end-of-life, Linux Mint now has a forum at LinuxQuestions.org, New packages are continuously added to the community repositories - merlwiz79 has made a .deb that makes the "software-sources" application work on Mint, Twitter, and more.
Fedora Weekly News #168
The Fedora Weekly News for the week ending March 22, 2009 is out. "With the Fedora 11 Beta release slipping by one week Announcements reminds the community about "FUDCon Berlin 2009". In PlanetFedora the recent Red Hat patent acquisitions are among several topics covered. Ambassadors reports on the OLPC XO work at Rochester Institute of Technology. QualityAssurance gets excited about "Test Days" for DeviceKit, Xfce and an upcoming one for nouveau. Developments reflects a lot of anxious upgrading and "How to Open ACLs and Find Non-responsive Maintainers". Translation notes the "Upgraded Transifex" and translation to Cornish. Infrastructure advises in "Change Requests" that the infra team is in freeze and lists all the approved recent changes and hotfixes. Controversy rages in "Artwork" over the choice of Greek temple imagery. Yet again SecurityAdvisories lists packages that you want, really, really want. Virtualization worries about "More Flexible x86 Emulator Choice". Needless to say there's lots more to read this week!"
DistroWatch Weekly, Issue 295
The DistroWatch Weekly for March 23, 2009 is out. "This week we interview Robert Shingledecker, a former Damn Small Linux developer and now founder of Tiny Core Linux, a new mini-distribution and probably the smallest desktop live CD ever created. In the news, Ubuntu's upcoming release, version 9.04 and code name "Jaunty Jackalope", hits beta freeze and gains an as-yet unreleased AMD video card driver, Gentoo releases automated builds for the ARM processor, Mandriva helps to port KDE's premier optical burning software to Qt 4, and openSUSE updates its online build service. We also link to a brief interview with Jono Bacon, the Ubuntu community manager. Finally, three new distributions have been added to the DistroWatch database last week; these include the Fedora-based Bee Linux from Algeria, the independent Igelle PC/Desktop with a lightweight desktop, and Privatix, a distribution that allows anonymous browsing and storing of data on encrypted USB drives."
Newsletters and articles of interest
Distributions: The big and the small (The H)
Here's a survey of upcoming distribution releases on The H. "Later this week, CentOS version 5.3 is expected to appear. The Red Hat clone, which traditionally releases a few weeks after the final releases of Red Hat Enterprise Linux (RHEL), this time is a little late. Scientific Linux 5.3, also a Red Hat clone, appeared late last week. Just like CentOS, the developers built the distribution from the Quell packages of Red Hat Linux. However, the Scientific Linux developers have added some of their own extras and the distribution is backed by several scientific institutions, including Fermilab and CERN."
A Short Introduction To Apt-Pinning (HowtoForge)
HowtoForge takes a look at apt-pinning. "This article is a short overview of how to use apt-pinning on Debian and Debian-based distributions (like Ubuntu). Apt-Pinning allows you to use multiple releases (e.g. stable, testing, and unstable) on your system and to specify when to install a package from which release. That way you can run a system based mostly on the stable release, but also install some newer packages from testing or unstable (or third-party repositories). I do not issue any guarantee that this will work for you!"
Page editor: Rebecca Sobol
Development
A first look at Xfce 4.6
After two years of development, the new release series of the lightweight Xfce desktop environment is now available. Xfce 4.6 is closer than ever to changing the perception that common free software desktop environments are limited to a bipolar world of just GNOME and KDE. Xfce 4.6 introduces a new set of features and improvements which push its limits to a new level.
Installing Xfce 4.6 is as easy as it has been in the past, thanks to a graphical installer which simplifies the process. The GTK and GLib development libraries are required in order to run the graphical installer. The installer then lists the rest of the libraries which need to be installed in order to proceed with the Xfce installation. Using an Ubuntu test system, satisfying the required dependencies turned out to be relatively easy, as all of the required packages were one aptitude install away; the same holds for Debian installations. RPM-based distributions, especially Fedora, should have the necessary libraries available as development packages.
At a certain point in the installation process, the GUI installer offers choices for enabling optimizations, debugging, and display manager setup. The first option enables compile-time optimizations, which should improve performance; despite some warnings, Xfce compiled on mainstream x86_64 hardware and performed perfectly well. The third option is something that most users should probably check (except those who like to set up the display manager by hand, of course): it adds Xfce to the list of available sessions in the display manager, which was successfully tested with GDM. This installation step will only work if Xfce is installed by root. It is important to include the installation's bin and sbin directories in the $PATH variable in order for Xfce to start properly.
An important part of Xfce is its Goodies, a package of plugins that extend the desktop's usability and functionality. The Goodies graphical installer is not listed on the main download page for some reason, but it is available in the installers directory on the download servers. Goodies requires a few additional libraries.
Most popular distributions will likely include Xfce 4.6 in the near future, so waiting might be the best option for those who find manual installation difficult.
Improved desktop
The new improvements to Xfce's usability are immediately visible on the desktop. This version of Xfce has reached the point where it now offers many of the same intuitive functions that are available on other advanced desktops, such as selection and manipulation of multiple files. The Xfce desktop menu is also improved. Users can now create files, directories and launchers, start the file manager, and access a desktop configuration GUI from the menu. One shortcoming, though, is that moving multiple selected files on the desktop doesn't work yet.
Speaking of files, Xfce's Thunar file manager has received cosmetic and functional updates. Thunar is now XDG compliant and adds support for encrypted devices; users can also differentiate between mounted and unmounted devices and set the wallpaper from the file manager window. The development team claims that the newest version of Thunar is faster, and it includes many bug fixes.
Improvements to the panel include bug fixes and new plugin functionality; some of those changes were introduced during the Xfce 4.4 rewrite. Panel changes in version 4.6 include speed and resource improvements to the clock plugin, a new keyboard plugin with layout selection capabilities, and a notification area that allows users to show or hide icons. Unfortunately, it still isn't possible to drag launchers from the Xfce menu to the panel.
The Xfce audio mixer was also rewritten and now uses the GStreamer multimedia framework, which adds an installation requirement for the GStreamer libraries. It is now possible to manage multiple sound cards using the improved interface. The mixer starts with no channels enabled, which might give a bad impression to users who are not aware of this behavior. The mixer panel applet adds the ability to change the volume with the mouse scroll wheel.
Environment
Xfce 4.6 brings serious improvements to the session manager, which should guarantee smarter management (automatic restart of environment processes like the desktop, panel, etc.), process manipulation, and suspend/hibernate logout dialog support out of the box.
The window manager also became smarter, adding the ability to detect non-responding windows and allowing users to terminate such windows. The window action menu now provides handy moving and resizing options. Usability of the fullscreen option turns out to be questionable, since there is no obvious way to return to the non-fullscreen state, except by closing the window.
Configuration tools throughout Xfce are now more polished and functional, allowing users to tune the environment better than before. The major highlight in this area is the new Settings Manager, which groups configuration dialog launchers in one place, making them more accessible and easier to activate via a single mouse click.
KDE and GNOME, look out
The first impression of Xfce 4.6 is of a system that has seen a number of significant improvements; the bar has been raised considerably compared to earlier releases. The new Xfce 4.6 features are visible and will improve the user's experience in everyday use.
With this release, Xfce has managed to overcome a number of usability issues which, in the past, have kept it out of the leagues of the "big" desktop environments. Staying true to its original design goals, Xfce remains lightweight and fast, while adding new functions which make it almost as usable as KDE or GNOME from the average user's perspective. For those who have tried and rejected Xfce in the past, this latest version has overcome enough shortcomings from previous releases to justify another look.
System Applications
Audio Projects
JACK 1.9.2 released
Version 1.9.2 of the JACK Audio Connection Kit has been announced. "Future JACK2 will be based on C++ jackdmp code base. Jack 1.9.2 is the "renaming" of jackdmp and the result of a lot of developments started after LAC 2008."
Rivendell 1.3.0 announced
Version 1.3.0 of the Rivendell radio station automation software has been announced. "Changes: Podcast System Enhancements. Support has been added to allow interoperation with third-party podcast traffic measurement and verification systems. It is also now possible to override the default ordering of episodes and configure automatic redirection of feed subscriptions. RDLogManager Enhancements. It is now possible to configure log import under-/over-fill warnings even for non-autofill events..."
Clusters and Grids
Cell Messaging Layer: v2.4 released (SourceForge)
Version 2.4 of Cell Messaging Layer has been announced. "The Cell Messaging Layer is an extremely fast, MPI-like communication library for clusters of Cell Broadband Engine processors. With it, any Cell synergistic processing element (SPE) can communicate directly with any other SPE, even across a network. Version 2.4 of the Cell Messaging Layer (CML) is now available from SourceForge. CML is a message-passing library that simplifies programming clusters of Cell processors (as used, for example, in the PlayStation 3 and in LANL's Roadrunner supercomputer)."
Database Software
MySQL 6.0.10 alpha has been released
Version 6.0.10 alpha of the MySQL DBMS has been announced. "MySQL 6.0 includes two new storage engines: the transactional Falcon engine, and the crash-safe Maria engine. If you are using the Falcon storage engine in MySQL 6.0.9-alpha, you are encouraged to wait for the MySQL 6.0.11-alpha before upgrading. Live upgrade is not recommended for 6.0 alpha releases. Users are strongly encouraged to dump their database and reload them after the upgrade."
PostgreSQL Weekly News
The March 22, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
PostgreSQL Weekly News
The March 15, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
Malcolm: SQL for the command line: "show"
In his blog, David Malcolm writes about "show", which is a SQL "select" statement that is used from the command line to query various log file formats. "This got me thinking. We have many different log formats, and many different sources of data. All of our tools seem to have different interfaces. [...] For example, why should I write regular expressions and shell pipelines to get at my logs? Why do I have to learn a custom syntax ("rpm -qa --queryformat='various things'") for looking at the software I have installed? Why does e.g. the audit subsystem have its own query format? [...] Why can't I just use SQL, and write SELECT statements to drill down into all of this data?"
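Malcolm's idea can be sketched with Python's standard sqlite3 module: load log lines into an in-memory table, then "drill down" with plain SELECT statements instead of regular expressions and shell pipelines. The log format and column names below are hypothetical, not taken from "show" itself.

```python
import sqlite3

# Hypothetical log lines in a "LEVEL source message" format.
LOG = """\
INFO sshd session opened
WARN kernel temperature high
INFO sshd session closed
"""

# Load each line into an in-memory SQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (level TEXT, source TEXT, message TEXT)")
for line in LOG.splitlines():
    level, source, message = line.split(" ", 2)
    conn.execute("INSERT INTO log VALUES (?, ?, ?)", (level, source, message))

# Query the logs with ordinary SQL rather than a custom query syntax.
rows = conn.execute(
    "SELECT source, COUNT(*) FROM log GROUP BY source ORDER BY source"
).fetchall()
print(rows)  # → [('kernel', 1), ('sshd', 2)]
```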
Middleware
SOGo 1.0 announced
Version 1.0 of SOGo has been announced; this is the initial release. "SOGo is groupware server with a focus on scalability and open standards. SOGo provides a rich AJAX-based Web interface and supports multiple native clients through the use of standard protocols such as CalDAV, CardDAV and GroupDAV."
Networking Tools
iptables 1.4.3.1 released
Version 1.4.3.1 of iptables has been announced. "The netfilter coreteam presents: iptables version 1.4.3.1 the iptables release for the 2.6.29 kernel. This version includes a compilation fix and a couple of minor fixes: - compilation error fix from Peter Volkov - documentation update from Jan Engelhardt - cleanup error reporting by myself."
Security
Announcing ClamAV 0.95
Version 0.95 of the ClamAV virus scanner has been announced. "ClamAV 0.95 introduces many bugfixes, improvements and additions."
Telecom
OpenSIPS: major release 1.5.0 is out (SourceForge)
Version 1.5.0 of OpenSIPS has been announced. "OpenSIPS (former OpenSER) is an GPL implementation of a multi-functionality SIP Server that targets to deliver a high-level technical solution (performance, security and quality) to be used in professional SIP server platforms. After almost 6 months from the last major release (1.4.0), OpenSIPS evolves with a new major release, 1.5.0. OpenSIPS 1.5.0 comes with several critical improvements (DB area, Management Interface, dialog support), but also with new functionalities (like cache support, Load Balancing, PrePaid support, SIP Identity , Dynamic Routing, IP geo location, etc)."
Web Site Development
circuits 1.1 released
Version 1.1 of circuits, a light-weight, event-driven framework with a strong component architecture, has been announced. "Aside from bug fixes, circuits 1.1 includes the following enhancements: * New drivers package containing drivers for pygame and inotify * New and improved web package (circuits.web) providing a HTTP 1.0/1.1 and WSGI compliant Web Server. * New developer tools * python-2.5 compatibility fixes * Runnable Components * Improved Debugger".
Django 1.1 beta released
Version 1.1 beta of the Django web development platform has been announced. "As part of the Django 1.1 release process, tonight we've released Django 1.1 beta 1, a preview package that shows off the new features coming in Django 1.1. As with all alpha and beta packages, this is not for production use, but if you'd like to try out some of the new goodies coming in 1.1, or if you'd like to pitch in and help us fix bugs before the final 1.1 release (due in April), feel free to grab a copy and give it a spin."
Rails 2.3 released
Version 2.3 of the Rails web development platform has been announced. "This is one of the most substantial upgrades to Rails in a very long time. A brief rundown of the top hitters: * Templates: Allows your new skeleton Rails application to be built your way with your default stack of gems, configs, and more. * Engines: Share reusable application pieces complete with routes that Just Work, models, view paths, and the works. * Rack: Rails now runs on Rack which gives you access to all the middleware goodness. * Metal: Write super fast pieces of optimized logic that routes around Action Controller. * Nested forms: Deal with complex forms so much easier."
TikiWiki: CMS/Groupware 2.3 -Arcturus- released (SourceForge)
Version 2.3 of TikiWiki has been announced. "Powerful multilingual Wiki/CMS/Groupware: File/Image gallery, Article, Blog, Tracker/Forms, Forum, Poll/Survey & Quiz, Newsletter, Calendar, Drawing, Bookmarks, FAQ, Banner ads, Categories, Spreadsheet, Maps, Workflow, Search, Theme control, WAP, VoiceXML, RSS, LDAP, Stats..."
Announcing Transifex 0.5
Version 0.5 of Transifex has been announced; it includes a number of new capabilities. "Indifex and the Transifex Community are proud to announce the newest version of their flagship translation platform, Transifex 0.5. Transifex is a web application written in Python using the Django web framework that gives translators a web interface to various version control systems. Files to be translated can be downloaded, translated files can be uploaded directly to the source repository, and various translation statistics can be read at a glance."
Web Services
Pylot version 1.22 released
Version 1.22 of Pylot has been announced. "Pylot is a free open source tool for testing performance and scalability of web services. It runs HTTP load tests, which are useful for capacity planning, benchmarking, analysis, and system tuning. Pylot generates concurrent load (HTTP Requests), verifies server responses, and produces reports with metrics. Tests suites are executed and monitored from a GUI or shell/console."
Miscellaneous
Puppet 0.24.8 now available
Version 0.24.8 of Puppet, a framework for automating system administration across the network, has been announced. "This is a maintenance release for the 0.24.x branch but contains a small number of new features including some significant performance enhancements for large installations and stored configurations."
Rockbox 3.2 released
Version 3.2 of Rockbox, an open-source operating system for mp3 players, has been announced. This version adds some new capabilities and includes many bug fixes; see the release notes for more information.
Desktop Applications
Audio Applications
Distributions Break Ardour And Waste My Time
Ardour developer Paul Davis has posted a rant about distribution-related issues with Ardour. "For some time there have been reports on IRC from users of various Linux distributions that some feature of Ardour is broken. It is getting increasingly tiresome that we end up as the frontline support for breakages that are distro-specific and that we cannot control. These problems waste my time. It would be nice if they would go away. Meanwhile, heres what distribution users can do..."
Sonic Annotator v0.2 released
Version 0.2 of Sonic Annotator has been announced. "Sonic Annotator is a utility program for batch feature extraction from audio files. It runs Vamp audio analysis plugins with specified parameters on audio files, and writes the result features in a selection of formats, in particular as RDF using the Audio Features and Event ontologies. Version 0.2 is now available, offering more stable and predictable results than the earlier 0.1."
Sonic Visualiser v1.5 now available
Version 1.5 of Sonic Visualiser has been announced; it includes some new features and many bug fixes. "Sonic Visualiser is an application for inspecting and analysing the contents of music audio files. It combines powerful waveform and spectral visualisation tools with automated feature extraction plugins and annotation capabilities."
Vamp Plugin Tester: a simple test utility for Vamp plugin development
Version 0.1 of the Vamp plugin tester has been announced. "Announcing v0.1 of the Vamp plugin tester, a simple program that loads and tests Vamp audio feature extraction plugins for various common failure cases. It can't check whether you're getting the right results, but it can help you write more resilient and better-behaved plugins."
Desktop Environments
GNOME to migrate to git
The GNOME project has announced plans to switch to the git distributed version control system. "The GNOME Release Team would like to announce that git will be the new Version Control System (VCS) for GNOME. In our opinion, the decision reflects the opinion of the majority of our active contributors. In December 2008, Behdad Esfahbod organized the GNOME DVCS (Distributed Version Control System) Survey on behalf of the GNOME Foundation board of directors, Release Team, and Sysadmin Team with the aim of better understanding familiarity and preferences of our active contributor base regarding the future VCS for GNOME. The survey results, released in January 2009, show that git is by far the preferred DVCS for the majority of our active contributors - the main users of GNOME infrastructure."
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Fantasdic 1.0-beta7 (new features, bug fixes and translation work)
- GDM 2.20.10 (bug fixes and translation work)
- Gnome Games 2.24.3.1 (bug fixes and translation work)
- gnoMint 0.9.9 (new features, bug fixes, code cleanup and translation work)
- Gnumeric 1.9.5 (new features, bug fixes and translation work)
- GPointingDeviceSettings 1.2.0 (new appearance and bug fixes)
- Libvtemm 0.16, 0.17 and 0.19 (new API wrappings)
- libxklavier 3.9 (bug fixes)
- mistelix 0.1 (initial release)
- Pygtksourceview 2.6.0 (new stable release)
- rhythmbox 0.12.0 (new features, bug fixes and code cleanup)
KDE e.V. Quarterly Report 2008 Q3/Q4 Now Available (KDE.News)
A report [PDF] from KDE e.V., the non-profit organization that represents the KDE project, is now available. The report covers the activities of the organization over the last half of 2008. In it, current KDE e.V. President Aaron Seigo writes about a changing of the guard: "The beginning of 2009 is also a poignant time for me personally as a member of the KDE e.V. board, as I will soon be stepping aside as President to allow others to apply their own style and brand of input in this position. Rotating responsibilities is key in my opinion to keeping KDE true to its roots as a community project. [...] I'm very happy to announce that the board has collectively agreed that my successor as President of KDE e.V. will be Cornelius Schumacher."
KDE Software Announcements
The following new KDE software has been announced this week:
- 2ManDVD 0.7.0 (bug fixes, code cleanup and translation work)
- 2ManDVD 0.7.1 (bug fixes)
- choqoK 0.5 (new features and bug fixes)
- digiKam 0.9.5-beta3 (unspecified)
- digiKam 0.10.0 (unspecified)
- Dikt 1d (initial release)
- Frescobaldi 0.7.8 (new features, bug fixes and translation work)
- gambas 2 2.12 (new features and bug fixes)
- GamCat 0.29 (bug fixes)
- Kipi-Plugins 0.2.0 (unspecified)
- KRadio4 for KDE 4.2 snapshot-2009-03-22 (new features and bug fixes)
- KWakeOnLan 0.1 (initial release)
- Perl Audio Converter 4.0.5 (new features, bug fixes and code cleanup)
- PokerTH 0.6.4 (new features and bug fixes)
- Polish Radio for Amarok 1.4.x 0.21 (new features)
- sMovieDB beta0.80 (new features and translation work)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- xf86-input-evdev 2.2.1 (bug fixes)
- xf86-video-ati 6.12.1 (bug fixes and documentation work)
- xf86-video-i740 1.3.0 (code cleanup and documentation work)
Announcing xpra 0.0.6
Version 0.0.6 of xpra has been announced; it includes new features and bug fixes. "Xpra is 'screen for X' -- it allows you to run X programs, usually on a remote host, direct their display to your local machine, and then to disconnect from these programs and reconnect from the same or another machine, without losing any state. It is licensed under the GPLv2+."
Fonts and Images
Libertine Open Fonts Project releases version 4.4.1
Version 4.4.1 of Libertine Open Fonts has been announced. "The organic grotesque (sans serif) Linux Biolinum is a new member of our font family. The vertical metric is identical with that of the Libertine and the proportions fit perfectly together. Biolinum is intended for emphasizing, small point sizes etc."
Games
Cyphesis 0.5.19 released (WorldForge)
Version 0.5.19 of Cyphesis has been announced. "Cyphesis is a small to medium scale server for WorldForge games, with builtin AI. This version includes the demo game Mason which is currently in development. This release is intended for server administrators wishing to run a Mason server and World developers developing new worlds or game systems."
Interoperability
Wine 1.1.17 announced
Version 1.1.17 of Wine has been announced. Changes include: "Joystick support on Mac OS X. Implementation of iphlpapi on Solaris. A number of 64-bit improvements. Obsolete LinuxThreads support has been removed. Many fixes to the regression tests on Windows. Various bug fixes."
Mail Clients
Thunderbird 2.0.0.21 available for download
Version 2.0.0.21 of the Thunderbird email client has been announced. "We strongly recommend that all Thunderbird users upgrade to this latest release. If you already have Thunderbird 2.0.0.x, you will receive an automated update notification within 24 to 48 hours. This update can also be applied manually by selecting "Check for Updates?" from the Help menu."
Multimedia
Elisa Media Center 0.5.33 released
Version 0.5.33 of Elisa Media Center has been announced. "This release is a lightweight release, meaning it is pushed through our automatic plugin update system. Additionally a windows installer is available for download on our website. This installer fixes various "crash at startup" problems."
Music Applications
guitarix release version 0.03.8-1
Version 0.03.8-1 of guitarix, a simple Linux Rock Guitar amplifier for jack, has been announced. "This release include all build'in effects also as LADSPA plugins (UniqID 4061 - 4068). The jconv settings widget include now a wave form viewer with the posibility to select a part of the file (offset and length) for the use with jconv. The Overdrive effect is coupled now with an auto gain correction (remove the added gain when run high overdrive level's) The trigger in the Distrortion can set now up to 1, that is usefull when you run Overdrive and Distortion together."
Office Applications
OpenGoo: 1.3 final is out (SourceForge)
Version 1.3 final of OpenGoo has been announced. "OpenGoo is a free and open source WebOffice, project management and collaboration tool, licensed under the Affero GPL 3 license. OpenGoo 1.3 final has been released, with updates and new functionality that improve usability! Some of the new features introduced since version 1.2 are a billing module, reminders, and a workspace information widget."
Science
ETS 3.2.0 released
Version 3.2.0 of the Enthought Tool Suite (ETS), a collection of components for constructing custom scientific applications, has been announced. "ETS 3.2.0 is a feature-added update to ETS 3.1.0, including numerous bug-fixes."
Video Applications
AmFast AMF encoder/decoder for Python released
The initial release of AmFast, an AMF0/AMF3 encoder/decoder for Python, has been announced. "AmFast's core encoder and decoder are written in C, so it's around 18x faster than PyAmf"
DVDStyler: 1.7.3 beta 1 released (SourceForge)
Version 1.7.3 beta 1 of DVDStyler has been announced. "DVDStyler is a cross-platform free DVD authoring application that makes possible for video enthusiasts to create professional-looking DVDs. The first beta version of DVDStyler 1.7.3 is now available for download for testing. This release adds a cache for transcoded files. So if DVD must be generated multiple times e.g. to display preview of DVD, the files will be transcoded only at the first time. It adds also a check if there is enough space on temporary directory and some other small changes."
Languages and Tools
Caml
Caml Weekly News
The March 24, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Java
IcedTea7 1.9 released
Version 1.9 of IcedTea7 has been announced; it includes a long list of security fixes and some new features. "IcedTea7 provides a means to build OpenJDK7 build drops using Free software tools, in addition to a number of additional features including additional platform support via the Zero/Shark and CACAO virtual machines, and the only Free 64-bit Java web plugin."
Perl
Rakudo Perl development release #15 (use Perl)
Development release #15 of Rakudo Perl has been announced. "On behalf of the Rakudo development team, I'm pleased to announce the March 2009 development release of Rakudo Perl #15 "Oslo". Rakudo is an implementation of Perl 6 on the Parrot Virtual Machine."
Python
lxml 2.2 released
Version 2.2 of lxml, a Pythonic binding for the libxml2 and libxslt libraries, has been announced. "This is a major new, stable and mature release that takes over the stable 2.x release series. All previous 2.x releases are now officially out of maintenance."
Portable Python 1.1 released
Version 1.1 of Portable Python, a Python distribution for USB memory sticks, has been announced. "This release contains three different packages for three different Python versions - Python 2.5.4, Python 2.6.1 and Python 3.0.1. Packages are totally independent and can run side-by-side each other or any other Python installation."
pylint 0.17.0 and astng 0.18.0 released
Version 0.17.0 of pylint and version 0.18.0 of astng have been announced. "we are glad to announce the release of pylint 0.17.0 which is based on a major refactoring of astng (0.18.0). For python 2.5, pylint will now use python's _ast module which is much faster than the older compiler.ast module."
Python-URL! - weekly Python news and links
The March 19, 2009 edition of the Python-URL! is online with a new collection of Python article links.
XML
DITA-OT: 1.4.3 released (SourceForge)
Version 1.4.3 of DITA Open Toolkit has been announced. "The DITA Open Toolkit is an implementation of the OASIS DITA XML Specification. The Toolkit transforms DITA content into many deliverable formats. See http://dita.xml.org/wiki/the-dita-open-toolkit for information about releases and download packages. Version 1.4.3 of the DITA Open Toolkit was released March 18, 2008. This is the final build to be based entirely on the DITA 1.1 standard".
Cross Compilers
Small Device C Compiler 2.9.0 released
Version 2.9.0 of SDCC has been announced. "A new release of SDCC, the portable optimizing compiler for 8051, DS390, Z80, HC08, and PIC microprocessors is now available. Sources, documentation and binaries compiled for x86 Linux, x86 MS Windows and universal Mac OS X are available."
Test Suites
TestLink: 1.8.0 (final) released (SourceForge)
Version 1.8.0 of TestLink has been announced. "Our community today released TestLink 1.8.0, a major update to its popular and acclaimed free, open source Test management tool. TestLink 1.8.0 is the culmination of 16 months of efforts from developers, security experts, localization and support communities, and testers from around the globe. TestLink 1.8 is faster than its predecessor and offers amount of improvements, including the SOAP interface, event logger, test prioritization and extensive under the hood work to improve the stability, usability and performance of the tool."
Version Control
bzr 1.13.1 released
Version 1.13.1 of the bzr version control system has been announced. "A couple regessions where found in the 1.13 release. The pyrex-generated C extensions are missing from the .tar.gz and .zip files. Documentation on how to generate GNU ChangeLogs is wrong. The merge --force works again."
Mercurial 1.2.1 released
Version 1.2.1 of the Mercurial source code management system has been announced. "This is a bugfix release."
monotone 0.43 released
Version 0.43 of the monotone distributed version control system has been announced. "* monotone no longer bundles several required 3rd party libraries; this not only makes our life easier but was often requested by distributions. * monotone can now be configured to use forward deltas which speeds netsync servers quite a lot. * the speed of mtn log has been improved tremendously and new useful selectors became available there. * monotone can now export its databases into git's fast-import format (hey, but that doesn't mean you guys should now all switch to git ;) * tons of bugfixes..."
Miscellaneous
Paver 1.0 released
Version 1.0 of Paver, a Python-based software project scripting tool, has been announced. "After months of use in production and about two months of public testing for 1.0, Paver 1.0 has been released. The changes between Paver 0.8.1, the most recent stable release, and 1.0 are quite significant. Paver 1.0 is easier, cleaner, less magical and just better all around. The backwards compatibility breaks should be easy enough to work around, are described in DeprecationWarnings and were introduced in 1.0a1 back in January."
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Wheeler: Fixing Unix/Linux/POSIX Filenames
David A. Wheeler says it's time to adopt tighter rules for file names to improve ease of use, robustness, and security. "In a well-designed system, simple things should be simple, and the 'obvious easy' way to do something should be the right way. I call this goal 'no sharp edges' - to use an analogy, if you're designing a wrench, don't put razor blades on the handles. The current POSIX filesystem fails this test - it does have sharp edges. Because it's hard to do things the 'right' way, many Unix/Linux programs simply assume that 'filenames are reasonable', even though the system doesn't guarantee that this is true. This leads to programs with errors that aren't immediately obvious."
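One of the "sharp edges" Wheeler describes can be demonstrated in a few lines of Python: POSIX permits almost any byte in a filename, including a newline, so any line-oriented processing of file listings (such as parsing `ls` output) silently miscounts. This is an illustrative sketch, not code from Wheeler's essay.

```python
import os
import tempfile

# POSIX allows nearly any byte in a filename, including newlines.
d = tempfile.mkdtemp()
tricky = "evil\nname.txt"          # a legal, if hostile, filename
open(os.path.join(d, tricky), "w").close()

# Naive line-oriented handling splits one file into two "names":
listing = "\n".join(os.listdir(d))
print(len(listing.splitlines()))   # → 2, though only one file exists

# Treating names as discrete list entries avoids the sharp edge:
print(len(os.listdir(d)))          # → 1
```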
Companies
Sun rises on talk of IBM deal. Good for Linux? (Tectonic)
Alastair Otter considers the ramifications of IBM's potential acquisition of Sun Microsystems in a Tectonic article. "Clearly the market likes the idea of IBM snapping up Sun but would such a deal be good for open source and Linux? It's hard to say but there are many advantages in such a deal. For a start, despite its heritage as a hardware vendor, Sun's future looks certain to lie in open source software, even though it is finding it incredibly hard to make that transition. Sun owns some very valuable software properties including Java, MySQL and VirtualBox, items that IBM could well monetise if it could get its hands on them. And in doing so it might well preserve and grow these properties."
Linux Adoption
Is Linux only for the poor? (ZDNet)
Over at ZDNet, Christopher Dawson looks at Linux adoption in schools, specifically whether it is a decision based only on cost. "Cost will certainly give people a reason to switch, but I don't think a crappy economy or poverty in a developing country is the only reason to use Linux and open source software. I won't even get into the argument of exposing kids to a variety of computing environments. I think the biggest reason to use Linux (aside from potential cost savings if you can develop some in-house *nix expertise) is simply the giant body of software that is freely available."
Legal
Now TomTom Sues Microsoft for Patent Infringement (Groklaw)
Here's Groklaw's take on TomTom's countersuit against Microsoft. It seems that TomTom has made PJ's day. "Can you believe it? This is so great!! Morrison & Foerster are representing TomTom in a new patent infringement lawsuit TomTom has just filed against Microsoft! I love covering their cases. Patent law is usually soooo boring to me, but these guys will keep me awake, and no doubt if I pay attention, I'll learn a lot." Groklaw has TomTom's complaint [PDF] available too; the countersuit is for infringement of four patents, all of which are related to navigation software.
Resources
Benchmarking The Linux 2.6.24 Through 2.6.29 Kernels (Phoronix)
Phoronix has published the results of a long series of kernel benchmarks, generally concluding that 2.6.29 is faster than its predecessors. "When it came to the SQLite performance, a serious performance regression began with the Linux 2.6.26 kernel and ended with the Linux 2.6.29 release. Normally it required 27~28 seconds to perform 12,500 database insertions using SQLite, but with the Linux 2.6.26 through 2.6.28 kernel releases it took 109 seconds! Fortunately, this regression is now fixed." There is no discussion of why things might have changed, though.
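A miniature version of the benchmark described can be sketched with Python's standard sqlite3 module: time a batch of row insertions against an on-disk database, so the kernel's write-out path is actually exercised. The row count and schema here are illustrative, not Phoronix's test setup.

```python
import os
import sqlite3
import tempfile
import time

# Time N row insertions into an on-disk SQLite database.
# The row count is reduced from the article's 12,500 to keep this fast.
N = 1250
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")

start = time.time()
for i in range(N):
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 64))
conn.commit()
elapsed = time.time() - start

count, = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(count, round(elapsed, 3))
```

Note that a real kernel-comparison run would pin everything else constant and repeat the measurement across kernel versions; only the relative timings are meaningful.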
Williams: That's when I reach for my revolver...
Dan Williams examines the vagaries of mobile broadband cards in a posting on his blog. He reports on the problems when trying to get NetworkManager working with all of this different hardware. "Yes, there are standards. But as we all know, given 10 people and a standard, you'll end the day with 12 or 13 differently behaving "standards-compliant" implementations. People suck. You'd think it would be easy to agree on an AT command for "prefer 3G / prefer 2G / 3G only / 2G only". NO SIMPLE FOR YOU. But NetworkManager has to work around huge amounts of stupid. Here's a run-down of some of the mobile broadband hardware that's available today and what about it sucks."
Reviews
One Laptop Battery Later And I'm A Django Fan
Zed Shaw reviews the Django web platform on his blog. "I mostly ignored Django for most of its life because I thought it was just another web framework. Yawn. Yay. A framework. Joy. Models. Views. Controllers. Oh boy, I think I'll just stick to one of the hundreds I already know. Then I saw James Tauber talking about Pinax but more importantly, talking about how 2008 was the year of modularity (he used different words). Apparently, Django has been pushing the idea of having discrete applications that act within a site as cooperating but separate components. The idea is that, unlike other components, these ones act like decoupled little web sites you can put in and configure for a site, and through the magic of HTTP work seamlessly."
Testing 3.0 - A Sneak Peek at 64 Studio 3.0 and Ardour3 (Linux Journal)
Dave Phillips covers developments in 64 Studio and Ardour. "[64 Studio] is loaded with an excellent selection of audio/video production software, and the maintainers particularly want feedback on the base system (that is, the system as it's set up by a fresh install). I took things a bit further and installed a complete development environment as well. I've already built and installed the latest libsndfile, which I needed for building and installing Ardour3 (see below). Everything's gone smoothly, and I've had no problem finding any required tools and utilities."
At last: GNOME adds native Exchange Server support (DesktopLinux.com)
DesktopLinux.com reviews GNOME 2.26. "The 2.26 GNOME release includes a broad range of new improvements, but before delving into them, let's call out two in particular: claimed support for Microsoft Exchange Server's native MAPI protocol, and "direct" import of Outlook Personal Folders."
Writing a Linux shell book the community way (blogs.ComputerWorld)
Steven J. Vaughan-Nichols takes a look at a new book from the FSF and O'Reilly. "There are several ways you can learn how to use the Linux command line. The way I took was the traditional one. I read the, ahem, fine manual, RTFM as we like to say, and I used the 'man' command a lot. That was well back before O'Reilly started publishing its great Unix and Linux technology books. Now, the FSF (Free Software Foundation), is having a community 'write-in' to create a new, free book "Introduction to the Command Line" for Linux beginners."
Miscellaneous
It's *Not* The 15th Birthday of Linux - and Why That Matters (Linux Journal)
Glyn Moody questions recent proclamations about the 15th birthday of Linux. "This is one of the most profound strengths of free software - that the software is never really finished, with the corollary that it is also never really *not* finished. Huge quantum jumps are rare: mostly it's more granular. That's why I think it's misguided to celebrate Linux 1.0: it gives the impression that free software is like any other proprietary bit of code, rubbish until you hit the magic release number, and somehow finished when you do. If you want to celebrate Linux (and that's an eminently sensible thing to do), the only possible date to choose is when the project was started - after all, that's what the "birth" bit in birthday means. The trouble is, even that date doesn't exist."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
Google Summer of Code 2009 announcements
The following projects have announced their participation in the 2009 Google Summer of Code. See the official Google Summer of Code site for more information.
Stallman: the JavaScript trap
Richard Stallman has posted a warning about non-free JavaScript code and a call for a mechanism which would enable browsers to run only freely-licensed JavaScript. "It is possible to release a Javascript program as free software, by distributing the source code under a free software license. But even if the program's source is available, there is no easy way to run your modified version instead of the original. Current free browsers do not offer a facility to run your own modified version instead of the one delivered in the page. The effect is comparable to tivoization, although not quite so hard to overcome."
Sugar Labs Nonprofit Announces New Version of Sugar Learning Platform
Sugar Labs has announced the availability of version 0.84 of the Sugar Learning Platform for the One Laptop Per Child XO-1, classroom PCs, and netbook computers. "Designed from the ground up for children, the Sugar computer environment is used by almost one-million students aged 5 to 12 in over 40 countries every school day. This improved version features new collaborative Sugar Activities and, in response to teacher feedback, the ability to easily suspend and resume Activities, saving time in the classroom."
Commercial announcements
Mandriva announces Q4 financial results
Mandriva has announced its latest financial results. "Turnover is 0.83 million Euros, operating revenue is 1.11 million Euros while costs are down to 1.51 million Euros representing a trading loss of 0.40 million Euros. Turnover remains at the same level as for the previous quarter. Net loss comes to 0.14 million Euros. The company has redeployed its strategy around the OS (OEM, ODM ...) applications yielding strong added value (Pulse 2; ...) and the web. The financial restructuring carried out at the end of 2008, along with the sales reorganisation currently underway, should begin to show tangible results in the 2009 financial year."
TomTom joins the Open Invention Network
The Open Invention Network has announced that TomTom has signed up. There's no mention of the Microsoft litigation, but clearly that has to be a motivating factor; it suggests that OIN may get involved in that case. "'Linux plays an important role at TomTom as the core of all our Portable Navigation Devices,' said Peter Spours, director of IP at TomTom. 'We believe that by becoming an Open Invention Network licensee, we encourage Linux development and foster innovation in a technical community that benefits everyone.'"
Contests and Awards
Wietse Venema and Creative Commons win FSF awards
The Free Software Foundation has announced the recipients of its annual free software awards. "Creative Commons was honored with the Award for Projects of Social Benefit, and Wietse Venema was honored with the Award for the Advancement of Free Software. Presenting the awards was FSF founder and president Richard Stallman."
Meeting Minutes
Amendments to the OpenOffice.org Community Council Charter
The OpenOffice.org Community Council Charter has been amended. "The main changes are an increase in the number of members from nine to ten and the corresponding voting constituencies. With the new charter, any OpenOffice.org community member may stand for a council seat. We are looking forward to the upcoming elections to increase the vitality of our community and the Community Council."
Calls for Presentations
EuroSciPy 2009 call for papers
EuroSciPy 2009 has been announced, along with a call for papers. "We're pleased to announce the EuroSciPy 2009 Conference to be held in Leipzig, Germany on July 25-26, 2009. This is the second conference after the successful conference last year. Again, EuroSciPy will be a venue for the European community of users of the Python programming language in science." Submissions are due by June 15.
Call for presentations - Libre Graphics Meeting 2009
A call for presentations has gone out for the Libre Graphics Meeting 2009. "Libre Graphics Meeting, the premiere workshop and conference for developers and enthusiasts of free software graphics, will be held May 6-9, 2009, at Ecole Polytechnique in Montreal, Quebec, Canada. LGM invites you to share your work with the community. Topics of interest include reports on major open source graphics projects, technology previews, engineering talks, power-user techniques, graphics business best practices, and general issues such as open file formats and collaboration." Submissions are due by April 1.
UKUUG - Summer 2009 - Call For Papers
A call for papers has gone out for the UKUUG summer 2009 conference. Submissions are due by May 8. "Summer 2009 will take place at the Birmingham Conservatoire from Friday 7th to Sunday 9th August. The conference this year will have a choice of conference streams, and we are particularly keen to get other groups and projects involved."
Upcoming Events
FSFE announces the second European Licensing and Legal Workshop for Free Software
The Free Software Foundation Europe (FSFE) has announced a workshop on the licensing and legal aspects of free software to be held April 23-24 in Amsterdam. It is primarily targeted at members of the European Legal Network, which was created to address free software legal issues throughout all of the different jurisdictions in Europe. "This event is one of the activities of FSFE's Freedom Task Force (FTF). The FTF is an infrastructure activity to help individuals, projects and businesses understand Free Software licensing and the opportunities that it presents. The FTF works in partnership with gpl-violations.org to deal with licence violations in the European arena. The goal of the FTF is to foster best practice throughout the industry." Click below for the full announcement.
White Oak Technologies, Inc., Google, Sun Microsystems Sponsor PyCon
White Oak Technologies, Inc., Google, and Sun Microsystems have been announced as sponsors of the PyCon 2009 conference. "PyCon 2009, the largest annual conference of the worldwide Python programming community, takes place March 25 - April 2 at the Hyatt Regency O'Hare and the Crowne Plaza Chicago O'Hare in Chicago, IL. The core conference runs March 27-29, with days of special events both before and after the main conference."
Events: April 2, 2009 to June 1, 2009
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| March 23 - April 3 | Google Summer of Code '09 Student Application Period | online, USA |
| March 31 - April 2 | Solutions Linux France | Paris, France |
| March 31 - April 3 | Web 2.0 Expo San Francisco | San Francisco, CA, USA |
| April 3 - April 5 | PostgreSQL Conference: East 09 | Philadelphia, PA, USA |
| April 3 - April 4 | Flourish Conference | Chicago, IL, USA |
| April 6 - April 8 | CELF Embedded Linux Conference | San Francisco, CA, USA |
| April 6 - April 7 | Linux Storage and Filesystem Workshop | San Francisco, CA, USA |
| April 8 - April 10 | Linux Foundation Collaboration Summit | San Francisco, CA, USA |
| April 14 | OpenClinica European Summit | Brussels, Belgium |
| April 15 | Linuxwochen Österreich - Krems | Krems, Austria |
| April 16 - April 17 | Nordic Perl Workshop 2009 | Oslo, Norway |
| April 16 - April 19 | Linux Audio Conference 2009 | Parma, Italy |
| April 16 - April 18 | Linuxwochen Austria - Wien | Wien, Austria |
| April 20 - April 24 | samba eXPerience 2009 | Göttingen, Germany |
| April 20 - April 23 | MySQL Conference and Expo | Santa Clara, CA, USA |
| April 20 - April 24 | Perl Bootcamp at the Big Nerd Ranch | Atlanta, GA, USA |
| April 20 - April 24 | Cloud Slam '09 | Online |
| April 22 - April 25 | ACCU 2009 | Oxford, United Kingdom |
| April 23 - April 26 | Liwoli 2009 | Linz, Austria |
| April 23 | Linuxwochen Austria - Linz | Linz, Austria |
| April 23 - April 24 | European Licensing and Legal Workshop for Free Software | Amsterdam, The Netherlands |
| April 25 - May 1 | Ruby & Ruby on Rails Bootcamp | Atlanta, Georgia, USA |
| April 25 - April 26 | LinuxFest Northwest 2009 10th Anniversary | Bellingham, Washington, USA |
| April 25 | Linuxwochen Austria - Graz | Graz, Austria |
| April 25 | Festival Latinoamericano instalación de Software libre | All Latin America |
| April 25 | Grazer Linux Tage 2009 | Graz, Austria |
| April 27 | OSDM 2009 | Bangkok, Thailand |
| May 4 - May 8 | JavaScript/Ajax Bootcamp at the Big Nerd Ranch | Atlanta, Georgia, USA |
| May 4 - May 7 | RailsConf 2009 | Las Vegas, NV, USA |
| May 4 - May 6 | EuroDjangoCon 2009 | Prague, Czech Republic |
| May 4 - May 6 | SYSTOR 2009: The Israeli Experimental Systems Conference | Haifa, Israel |
| May 5 | Linuxwochen Austria - Salzburg | Salzburg, Austria |
| May 6 - May 9 | Libre Graphics Meeting 2009 | Montreal, Quebec, Canada |
| May 6 - May 8 | Embedded Linux training | Maynard, USA |
| May 7 | NLUUG spring conference | Ede, The Netherlands |
| May 8 - May 10 | PyCon Italy 2009 | Florence, Italy |
| May 8 - May 9 | Linuxwochen Austria - Eisenstadt | Eisenstadt, Austria |
| May 8 - May 9 | Erlanger Firebird Conference 2009 | Erlangen-Nürnberg, Germany |
| May 11 | The Free! Summit | San Mateo, CA, USA |
| May 13 - May 15 | FOSSLC Summercamp 2009 | Ottawa, Ontario, Canada |
| May 15 - May 16 | CONFidence 2009 | Krakow, Poland |
| May 15 | Firebird Developers Day - Brazil | Piracicaba, Brazil |
| May 16 - May 17 | YAPC::Russia 2009 | Moscow, Russia |
| May 18 - May 19 | Cloud Summit 2009 | Las Vegas, NV, USA |
| May 19 - May 22 | PGCon PostgreSQL Conference | Ottawa, Canada |
| May 19 | Workshop on Software Engineering for Secure Systems | Vancouver, Canada |
| May 19 - May 22 | php|tek 2009 | Chicago, IL, USA |
| May 19 - May 21 | Where 2.0 Conference | San Jose, CA, USA |
| May 19 - May 22 | SEaCURE.it | Villasimius, Italy |
| May 21 | 7th WhyFLOSS Conference Madrid 09 | Madrid, Spain |
| May 22 - May 23 | eLiberatica - The Benefits of Open Source and Free Technologies | Bucharest, Romania |
| May 23 - May 24 | LayerOne Security Conference | Anaheim, CA, USA |
| May 25 - May 29 | Ubuntu Developers Summit - Karmic Koala | Barcelona, Spain |
| May 27 - May 28 | EUSecWest 2009 | London, UK |
| May 28 | Canberra LUG Monthly meeting - May 2009 | Canberra, Australia |
| May 29 - May 31 | Mozilla Maemo Mer Danish Weekend | Copenhagen, Denmark |
| May 31 - June 3 | Techno Security 2009 | Myrtle Beach, SC, USA |
If your event does not appear here, please tell us about it.
Event Reports
O'Reilly's ETech conference scopes out ideas at the edge of innovation
O'Reilly has published a report on the recent ETech 2009 conference. "ETech 2009, O'Reilly's Emerging Technology Conference held March 9-12 in San Jose, urged web technologists and visionaries to grasp the opportunities in today's financial and political turmoil by focusing on work they care deeply about. Through four jam-packed days, conference-goers immersed themselves in revolutionary ideas and emergent technologies they can exploit to succeed."
Web sites
KDE Brainstorm: Get Your Ideas Into KDE! (KDEDot)
KDE.News has announced the new brainstorm forum. "KDE is about the community, rather than the product. It is not all about the code: there are many other ways in which people can be part of KDE, and a very simple way is to connect with other people. In an effort to bridge the gap between users and developers, the KDE Community Forums have launched a new initiative to coordinate feature requests. A new "Brainstorm" section has been created in the KDE Community Forums: users are encouraged to post requests there."
Miscellaneous
Creative Commons weighs in on proposed OpenStreetMap license
The OpenStreetMap (OSM) project has been looking into changing the license that covers its data for some time now. A new license—the Open Database License or ODbL—was proposed in February to replace the current Creative Commons Attribution-ShareAlike license. LWN had a detailed look at the licensing issues in October 2008, but the controversy goes back at least a year before that.
Creative Commons recently made some comments on ODbL that are rather critical of the license, at least for use by OSM; it would rather see OSM data reside in the public domain—as would a number of OSM contributors. "In general, we believe that the interests of both providers and users of data and databases, particularly in science, education, and other areas where the ability to exchange and re-use data freely is critical to achieving the objectives of the data exchange community, are best served by reducing unnecessary transaction costs, simplifying legal tools, and providing as much clarity and certainty to providers and users of their respective rights and obligations as the law allows." This seems likely to muddy the waters further, which may delay or change any OSM relicensing plans.
Page editor: Forrest Cook
