
LWN.net Weekly Edition for February 18, 2010

Python and PostgreSQL

By Jonathan Corbet
February 17, 2010
As some LWN readers will know, this site is implemented with a combination of free technologies, including the Python language and the PostgreSQL relational database management system. Anybody who has tried to combine those two tools will have encountered the variety of modules out there intended to serve as the glue between them. It's the sort of variety that nobody wants, though: lots of options, none of which has the full support of the community or works as well as one might like. The good news is that this situation may not last a whole lot longer.

The conversation started when Bruce Momjian noted that the state of Python/PostgreSQL support was not as good as it could be. The PostgreSQL Python page and the Python PostgreSQL page agree on one thing: there are several adapters available, none of which is truly dominant, but many of which are seemingly unmaintained. How, he asked, is a developer to choose between them? Your editor, who has tried a number of them, could only nod in sympathy. Bruce requested:

What is really needed is for someone to take charge of one of these projects and make a best-of-breed Python driver that can gain general acceptance as our preferred driver. I feel Python is too important a language to be left in this state.

The purpose of a database adapter module is to make the capabilities of the database available to Python applications. To that end, it accepts SQL queries from the application, passes them to the database, and hands the results (if any) back to the application. Application writers like the idea of database independence; it holds the promise of being able to move easily to a different database should the need arise. To enable this independence, language development communities define standards for how database adapters should operate. The Python version of this standard is the Python Database API Specification, often called DB-API.
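To make that concrete, here is a minimal sketch of DB-API usage as it might look with Psycopg; the database name, table, and query are invented for illustration:

    # A hedged sketch of DB-API 2.0 usage with psycopg2; the "lwn"
    # database and "articles" table are hypothetical.
    import psycopg2

    conn = psycopg2.connect(dbname="lwn", user="reader")
    cur = conn.cursor()
    # psycopg2 happens to use the "format"/"pyformat" parameter styles
    cur.execute("SELECT title FROM articles WHERE id = %s", (42,))
    print(cur.fetchone())   # results come back as Python tuples
    conn.commit()           # DB-API leaves transaction control to the application
    cur.close()
    conn.close()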

One of the problems, as identified by Greg Smith, is that the DB-API fails to cover much of the needed functionality, meaning that each adapter ends up making its own (incompatible, naturally) extensions. One of your editor's favorite quirks is the specification of five different "styles" by which parameters can be passed into queries; the application is expected to support all five and use whichever one the adapter is prepared to accept that day. The end result of all this is that adapters tend to diverge from each other, portability between them is problematic, and none becomes the standard.
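Concretely, here is a sketch of all five styles applied to the same hypothetical query; each adapter advertises the single style it supports in its module-level paramstyle attribute, and a portable application must be prepared to generate any of them:

    # The five DB-API parameter styles; the table and column are invented.
    queries = {
        "qmark":    ("SELECT * FROM t WHERE name = ?",        ("lwn",)),
        "numeric":  ("SELECT * FROM t WHERE name = :1",       ("lwn",)),
        "named":    ("SELECT * FROM t WHERE name = :name",    {"name": "lwn"}),
        "format":   ("SELECT * FROM t WHERE name = %s",       ("lwn",)),
        "pyformat": ("SELECT * FROM t WHERE name = %(name)s", {"name": "lwn"}),
    }

    import sqlite3                             # one adapter that ships with Python
    sql, args = queries[sqlite3.paramstyle]    # sqlite3 advertises "qmark"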

That said, there currently seem to be two serious competitors in this area:

  • Psycopg almost certainly has the widest support among Python applications. It is reasonably solid and performs well, but some potential users may have been daunted by the fact that its web page took the form of an anti-Trac rant for some time (it's still in the Google cache as of this writing).

  • PyGreSQL has been around for a long time; it predates the DB-API and still does not implement it completely. Development on the code has been slow for some time, and its performance is not as good as Psycopg's.

One might think that Psycopg would be the clear leader of the two, and it would have been, except for one little problem: Psycopg is licensed under the GPL, with a bunch of exceptions. The PostgreSQL community feels fairly strongly about permissive licenses, to the point that a GPL-licensed adapter is seen as a deal breaker. So Greg lamented:

If everybody who decided they were going to write their own from scratch had decided to work on carefully and painfully refactoring and improving PyGreSQL instead, in an incremental way that grew its existing community along the way, we might have a BSD driver with enough features and users to be a valid competitor to psycopg2. But writing something shiny new from scratch is fun, while worrying about backward compatibility and implementing all the messy parts you need to really be complete on a project like this isn't, so instead we have a stack of not quite right drivers without any broad application support.

As a way toward a solution, Greg put together a Python PostgreSQL driver TODO page describing the issues with both Psycopg and PyGreSQL. For Psycopg, wishlist items included some testing, refactoring, and documentation work. The list for PyGreSQL is longer and more daunting. The conclusion found on that page is:

A relicensed Psycopg seems like the closest match to the overall goals of the PostgreSQL project, in terms of coding work needed both in the driver and on the application side (because so many apps already use this driver).

Authors of GPL-licensed code tend not to react well to requests for relicensing. That can be true even in cases like a database adapter, which is normally a relatively small component in a much larger application. In this case, though, Psycopg hacker Federico Di Gregorio acknowledged that, perhaps, the GPL wasn't the best license for this module. So, he has announced, the next Psycopg release will carry the LGPLv3 license (plus the obligatory exceptions involved in using libssl) instead. That is probably enough to tip the scales in Psycopg's direction and, finally, lead to a situation where there is an obvious default choice for developers.

There will be, beyond doubt, no end of lessons from this episode on how to run a successful free software project. There is one which stands out, though: until well into this discussion, there had been little or no communication between the PostgreSQL development community and the people working on Python adapters. Given how tightly coupled the two efforts are, a lack of communication for years can only make the creation of top-quality adapters harder. Once the relevant developers started talking to each other, it only took a few days to find a path toward a satisfactory conclusion.

Comments (17 posted)

FOSDEM'10: distributions and downstream-upstream collaboration

February 17, 2010

This article was contributed by Koen Vervloesem

For the first time in its ten-year history, FOSDEM didn't organize individual developer rooms per distribution; instead, it opted for a joint 'mini-conference' in two distribution developer rooms, with talks that specifically target interoperability between distributions, governance, common issues that distributions are facing, and working with upstream projects. A couple of them piqued your author's interest.

Debian and Ubuntu

As a Debian and Ubuntu developer since 2006, Lucas Nussbaum knows the relationship between these two distributions inside and out. He has attended DebConf and Ubuntu Developer Summit, has friends in both communities, and is involved in improving collaboration between both projects. In his talk, he discussed the current state of affairs from his point of view and what could be done to improve matters.

Ubuntu has a lot of upstream projects, like Linux, X.org, GNOME, KDE, and so on, but it has one special upstream: Debian. Integrating a new project or a new release into Ubuntu regularly requires changes, such as toolchain changes and bug fixes. It is often not possible to do this work in Debian first. Lucas gave some statistics from Ubuntu Karmic Koala (9.10): 74% of the packages come directly and unmodified from Debian, 15% come from Debian but are modified with Ubuntu patches, and 7% come directly from upstream projects. (For those who are puzzled about why these numbers don't add up: the missing 4% covers the cases where Ubuntu packages a newer upstream release than the one Debian has. Such a package can be based on the Debian version or fully repackaged.)

Managing this divergence is not trivial. Keeping local changes in Ubuntu requires a lot of manpower, and the changes need to be merged again when Debian updates the package. This is already a strong incentive to push changes to Debian. Bug reports are the main vehicle for pushing changes, but this is where problems can start. Lucas summarized it neatly:

Ubuntu users who want to file a bug have the choice between three options. They can file a bug upstream, where they might get flamed; they can file a bug in Debian, where they are very likely to get flamed; or they can file a bug in Ubuntu's Launchpad, where they are very likely to get ignored.

There is already some collaboration on bugs today. For example, some bugs get filed in Debian by Ubuntu developers: about 250 to 400 bugs per Ubuntu release cycle, mostly upstreaming of Ubuntu patches. There is also a link to Ubuntu patches and bugs in Launchpad on the Debian Package Tracking System (PTS), although Lucas admitted that at the moment the data is imported using a fragile hack.

The second part of Lucas's talk was about his view of the current state of the relationship between Debian and Ubuntu. Historically, many Debian developers have been unhappy about Ubuntu, both because of the feeling that the distribution had been "stolen" and due to some problems with Canonical employees that tend to reflect on Ubuntu as a whole. However, according to Lucas, things have improved considerably and many Debian developers see some good points in Ubuntu: Ubuntu brings a lot of new Linux (and Debian derivative) users, it also brings new developers to Debian, and it serves as a technological playground. Examples of the latter are dash as the default /bin/sh, boot improvements, GCC hardening flags, and so on. Having these things tested first in Ubuntu makes it much easier to import them into Debian later.

On the Ubuntu side, Lucas sees a culture where contributing to Debian is considered the right thing to do, and as a result many Ubuntu developers contribute to Debian. However, there is often not a lot to contribute back at the package level: many bug fixes are just workarounds. Also, Canonical is a company that contributes back when doing so benefits it, so Debian shouldn't expect many free gifts.

However, Ubuntu's rise also causes some problems for Debian; Lucas called the most important one "the loss of relevance of Debian". Not only has the Debian user base (or at least market share) decreased but, for many new users, Linux equals Ubuntu. Recent innovations have usually happened in Ubuntu, so even though Debian is now the basis of a major distribution, it becomes less relevant. Still, Lucas thinks collaborating with Ubuntu is the right thing to do for free software world domination, because Debian fights for important values and takes positions on technical and political issues such as the Firefox trademark issue.

Lucas's proposal to make Debian relevant again, while still helping Ubuntu, is to behave like a good upstream and to communicate about why Debian is better. The former means that collaboration with Ubuntu should be improved. Not only should there be more cross-distribution packaging teams; Lucas also maintains that Debian could help Ubuntu maintain Debian's packages, e.g. by notifying Ubuntu of important transitions and by triaging and fixing bugs directly in Launchpad if time permits. He also stressed that Debian should acknowledge high-quality work that is done in Ubuntu. This work could then be imported into Debian: "Importing packages doesn't have to be one-way only."

According to Lucas, Debian fails at communicating that it is better; it sometimes even needs external people like Bradley Kuhn to do that communicating. Debian is a volunteer-based project where decisions are made in the open, and it has advocated the free software philosophy since 1993. In contrast, Ubuntu is a project controlled by Canonical, where decisions like Ubuntu One or the switch to Yahoo! as the default search engine are imposed. Moreover, Ubuntu advocates proprietary web services such as Ubuntu One, the installer recommends proprietary software, and there is the controversial copyright assignment required to contribute to Canonical projects.

Lucas went further and claimed that Debian is a better distribution because many package maintainers (e.g. of scientific software) are experts in their field, and the emphasis is on quality. In contrast, most of Ubuntu's packages (the 74% he mentioned before) are just synchronized from Debian, with many maintainers having no real knowledge of the packages. The conclusion of his talk was that Ubuntu is a chance for Debian to get back to the center of the FLOSS ecosystem, but that the distribution should be more vocal about Ubuntu's issues.

How to be a good upstream

Petteri Räty, a member of the Gentoo Council, talked about how to be a good upstream. Or rather, he gave some "dos and don'ts" to bootstrap a discussion with the audience. The bottom line was: if a project is a good upstream, then distributions need a minimal number of iterations to package the software. One ground rule that prevents a lot of problems is: "Never change a once released file" (without releasing a new version of the software, that is). If an upstream project violates this rule, bug reports don't make any sense, because upstream and downstream no longer know which version of the file the user is referring to. Another thing to watch for is that a release should not build in debug mode by default. There should also be no -Werror in CFLAGS, as the project should respect the user's choice of compiler flags. Moreover, changes that are relevant for distributions should be documented in changelogs, Petteri stressed: "For example, say it explicitly when there's a security fix."

How an upstream project handles dependencies also has consequences for distributions. For example, projects should link dependencies dynamically and use libtool in autoconf. They also should never bundle dependencies: if a project were to bundle zlib, for example, security problems in the library wouldn't get fixed when the distribution updates the "global" zlib. And last but not least, the upstream project should allow downstreams to configure build support for optional dependencies.

Petteri also emphasized that the "release early, release often" software development philosophy is really important: that way, end users get the project's code faster, which means that the code is tested more quickly, by more people. Not all releases will end up in distributions, but the best ones (tested by distributions' developers and maintainers) will. It's also important to have consistent version numbers: going from 0.10 to 10.0 and then to 11.1 is not the way to go. The audience understandably grinned when Petteri mentioned that a 4.0 version number should only be given to a stable release.

Working with GNOME upstream

As a GNOME and openSUSE contributor, Vincent Untz was the perfect speaker to lead a session about collaboration between GNOME upstream and downstream. This was really an interactive session where the audience gave a lot of suggestions. For example, one downstream packager said that it would be nice to have the same predictable six-month schedule for GNOME applications, like Rhythmbox, as for the GNOME desktop environment. There is already a precedent: at the end of January, the Banshee music player developers announced that they will align their release schedule with GNOME's; Banshee 1.6 will be released together with GNOME 2.30. Another audience member found it inconvenient that new GNOME features sometimes depend on arbitrary upstream choices, such as X running on the first virtual terminal.

Big changes in GNOME upstream also have an impact on stability. Vincent pointed to the migration from GnomeVFS to GVFS in GNOME 2.22, which may have happened too early. GDM 2.24 also had too many changes, to the point where many distributions still use GDM 2.20 now. The change from AT-SPI (Assistive Technology Service Provider Interface), built upon CORBA (Common Object Request Broker Architecture), to the D-Bus-based AT-SPI2 is another big change that will be a challenge for distributions. An example worth following, according to Vincent, is the move from HAL to DeviceKit in GNOME Power Manager: Richard Hughes maintained both branches, which was good for distributions that didn't want to migrate immediately. GNOME 3 will obviously also have a big impact. Vincent encourages downstream distributions to tell GNOME which "old" libraries they would like to keep using for a time, so that upstream can keep maintaining them to give distributions some breathing room.

With respect to patches, Vincent applauded Debian, which is working on a format where a patch carries information in its comments about where it has been sent upstream, whether it has been accepted but not yet released, and so on. Distributions that run an online patch tracker also help upstream maintainers; here again, Debian (with patch-tracker.debian.org) and Ubuntu (with patches.ubuntu.com) lead the way.

Packaging Perl modules

Gabor Szabo, who has been involved in the Perl community for 10 years, talked about packaging Perl and CPAN (Comprehensive Perl Archive Network) modules for distributions. The problem from the end-user's perspective is that many Perl applications are not packaged in the user's distribution, but are only available as a CPAN module that has to be built and installed in another way. The problem becomes even uglier when users want to install a Perl script that needs several CPAN modules that are not in the distribution's repository.

Gabor called this a major issue, and he gave some numbers to put it into perspective: CPAN has around 20,000 packages, while Ubuntu 9.10 packages about 1,900 of them, which is roughly 10%. The numbers are even worse for Ubuntu 8.04, a Long Term Support (LTS) release that is used on many web servers: it has about 1,200 CPAN packages, or roughly 6%. "Other distributions have roughly the same numbers, with FreeBSD having quite a bit more in their ports collection. Of course we don't need all CPAN modules packaged in distributions, but we definitely need a lot more than we have now."

So why do most distributions have such a low percentage of Perl packages? According to Gabor, the number one reason is that users just don't ask for more modules, maybe because many users are not used to talking to their distribution. Another obvious reason is that it's time-consuming to package and maintain a Perl module. This issue could be solved by further automation and better integration between the packaging tools and the CPAN toolchain. "We should also catch license or dependency issues earlier. As far as I know, only Fedora and Debian have a dedicated Perl packaging mailing list: Fedora-perl-devel-list and debian-perl, respectively." On the other hand, it's not the quantity that counts, and many modules are not worth packaging: "Therefore, the Perl community needs a better way to indicate the quality and the importance of a CPAN module, as a guideline for distributions. Importance can be measured in different ways: by popularity, by the number of external (non-CPAN) dependencies, by the number of packages depending on this package, and so on."
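As a sketch of what such a guideline might look like (written in Python for brevity; the fields and weights below are entirely invented and do not correspond to any existing CPAN metric):

    # A toy "importance" score along the lines Gabor suggests; all
    # field names and weights are invented for illustration.
    def importance(module):
        return (module["downloads"]              # popularity
                + 5 * module["reverse_deps"]     # packages depending on it
                - 2 * module["external_deps"])   # non-CPAN dependencies

    modules = [
        {"name": "DBI",      "downloads": 9000, "reverse_deps": 400, "external_deps": 1},
        {"name": "Acme::Eh", "downloads": 12,   "reverse_deps": 0,   "external_deps": 0},
    ]
    for m in sorted(modules, key=importance, reverse=True):
        print(m["name"], importance(m))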

Apart from solving the usual communication problems (upstream CPAN authors who don't respond or even disappear, patches and bugs reported downstream that don't always reach upstream, and so on), Gabor had some suggestions for both upstream and downstream. For example, distributions could supply more data directly to CPAN, such as the names and versions of the CPAN modules they include as packages, the list of bugs reported to the distribution, and the list of downstream patches. The Perl community could then gather this data and display it on one web site. CPAN itself could also be improved for better downstream integration. For example, there could be a "pass-through CPAN installation" mode, in which CPAN.pm would use the distribution's native package manager where possible. Using this installation type, CPAN could also report missing packages and gather statistics about these modules for distributions.

RPM packaging collaboration

Pavol Rusnak of the openSUSE Boosters Team shared his view on RPM packaging collaboration. The biggest issue here is that many RPM-based distributions use their own distribution-specific macros in RPM spec files. A few of these, however, have already been unified between distributions. For example, Fedora and openSUSE initially defined completely different macros for Python packaging, but the Fedora macros have now been adopted in upstream RPM, and openSUSE 11.2 is using them as well. The make install differences have also been solved: while openSUSE, Mandriva, and Fedora at first used different macros, RPM upstream introduced the make_install macro, which is equivalent to make install DESTDIR=%{?buildroot}.

However, there are still a lot of differences that make porting RPM spec files a challenge. For example, desktop files are handled differently in Fedora and Mandriva than in openSUSE: they have different BuildRequires dependencies, and the content of the install macro differs. Pavol suggested unifying this procedure and pushing things to RPM upstream if some macro is still needed. Ruby packaging also happens with different macros, and here too Pavol suggests creating common macros upstream and using them consistently across distributions. "However, different distributions have a different mindset, so it's not always easy to find a solution that everyone likes." The same goes for parallel builds: Fedora starts them as make %{?_smp_mflags}, while openSUSE uses make %{?jobs:-j%jobs}. And so on.

Pavol ended his presentation with some ideas for future RPM development. For example, Panu Matilainen is working on file triggers, which will help packagers get rid of a large number of scriptlets in spec files, like calls to ldconfig, gtk-update-icon-cache, and so on. File triggers, which allow running a script when some file has been added or removed, are already implemented in Mandriva. Pavol also suggests introducing two new scriptlets, %preup and %postup, which would be called when updating a package; the %preun, %postun, %pre, and %post scriptlets would not be run in that case. That way, packagers would no longer have to write awkward code such as if [ "$1" -eq "0" ] to detect whether an operation is an upgrade or a fresh installation. In the future, scriptlets will also get more information about the running transaction, which makes it possible to detect more precisely what is happening so that, for example, a scriptlet could convert a configuration file when upgrading from Apache 1.x to Apache 2.x.

Learning from each other

When it was announced that distributions wouldn't get their own developer rooms at FOSDEM 2010 but would meet together in the cross-distribution rooms, many people were skeptical. Some liked the idea of cross-distribution rooms, but were disappointed to see the separate distribution-specific rooms go away. All in all, though, many visitors seemed to find the new concept a great one, and there were good cross-distribution discussions at FOSDEM 2010, although many of the talks didn't draw enough people from enough distributions to really get the ball rolling. Packaging and working with upstream are important activities where all distributions are confronted with the same issues, and where they can learn from each other and share good ideas. Your author has heard rumors that a lot of developers didn't come to FOSDEM this year because their favorite distribution didn't get a dedicated room; that is a pity: being able to learn from other distributions would have made their own distribution even better.

Comments (7 posted)

MeeGo: the merger of Maemo and Moblin

February 16, 2010

This article was contributed by Nathan Willis

The mobile Linux world is about to get simpler, as Nokia's Maemo platform for handheld mobile devices and Intel's Moblin project for netbooks are merging. The combined "MeeGo" stack will still differentiate between device types at the "user experience" level, but will share the same system-level components and, hopefully, unite developer communities by offering a common base. The announcement was made on February 15th, with content and details continuing to roll out on the meego.com site.

Nokia started Maemo in 2005; the platform was first delivered on a series of WiFi-connected pocket tablets without cellular connectivity, but eventually moved into mobile phones with the 2009 launch of the N900. Moblin was launched in 2007 by Intel, targeting netbooks running on Intel Atom processors. In April of 2009, Intel signed governance of the project over to the non-profit Linux Foundation, but continued to guide its development. Moblin publicly released 2.0 "beta" code for netbooks in May, and previewed an update of the stack running on Atom-powered phones in September.

The mobile industry trade press, understandably, is reporting on the MeeGo announcement as a defensive move to counter the challenge posed by Google's Android (and, to a lesser extent, ChromeOS). Android has seen tremendous growth in the past year, with more than two dozen products now shipping from a variety of device manufacturers. But while Android may seem like a more direct challenge to Maemo, it is not limited to phones. Several products have been announced that edge further into the device space sought by Moblin, including netbooks, tablets, and e-book readers. Nor is Google the only mobile operating system vendor producing a Linux-based platform for portable devices; Palm, Samsung, and the multi-manufacturer consortium called the LiMo Foundation all produce competing offerings.

However, within the spectrum of mobile Linux operating systems, Moblin and Maemo were already the two projects with the greatest overlap in terms of technical design and governance structure. Android, ChromeOS, and Palm's webOS all use the Linux kernel and portions of the same software stack found on graphical desktops, but provide limited APIs for application deployment. While source code for upstream components is available for all Linux-based operating systems, the platforms still differ in terms of openness. Android and ChromeOS are freely available to outside platform developers, and accept patches and bug reports. The LiMo Platform includes components contributed by member handset makers, some of them proprietary, and is governed by its member contributors. Palm's webOS and Samsung's newly-announced Bada are single-vendor products.

Maemo and Moblin both started with existing desktop Linux distributions: Maemo was originally based on Debian, Moblin on Fedora, although both incorporated technology from other distributions as well. Underneath, however, they ran strikingly similar middleware platforms. Both used X, GLib, D-Bus, Pango, Cairo, GStreamer, Evolution Data Server, PulseAudio, Mozilla's Gecko HTML rendering engine, Telepathy, ConnMan, and a host of other common utilities. GNOME's Dave Neary even commented on the similarity in response to concerns that there were "too many" mobile Linux platforms. In addition, both projects sought to make their platforms fully accessible to third-party developers, without the need to license an SDK or, in most cases, to code to different APIs. Both worked to develop active, open communities around the code base.

In retrospect, though, perhaps the clearest harbinger of the merger was oFono, the open telephony stack project launched jointly by Intel and Nokia in May of 2009. The closed-source telephony stack included in the N900 was a lightning rod for criticism from free software advocates — though Nokia justified its decision to write a new stack from scratch — but, prior to Monday's announcement, Intel's involvement in oFono stood out as an oddity. Now it makes sense.

Merging, nuts and bolts

In spite of the similarities, Maemo and Moblin did have their differences, particularly in the choice of top-level toolkit. Moblin used GTK+ and Clutter as its preferred toolkits, while the latest Maemo release was in the process of switching over to Qt.

As posted on the MeeGo web site, the combined platform closely resembles the Moblin base, but with Qt as the interface toolkit. The architecture diagram provided is vague, listing components only as "Internet Services," "Media Services," "Data Management," and so forth, but according to additional information provided by MeeGo, most of the stack remains unchanged from Moblin 2.0.

Both Qt and GTK+/Clutter blocks are shown in the MeeGo architecture diagram, but the Qt block is three times larger, and the "getting started" documents in the developers' section of the site only address Qt. Still, the FAQ explicitly says that both toolkits will be supported, so the simplest interpretation of the diagram may just be that Nokia, as corporate owner of Qt, is expected to contribute a larger share of developer-time via its toolkit.

More interesting is the fact that the combined MeeGo platform will support at least two processor architectures, ARM and Atom — in addition to any others championed by the community. The hardware enabling process outlines what the MeeGo project has in mind, with platform maintainers for each architecture taking responsibility for the kernel, X, the bootloader, and other hardware-specific adaptations.

Different devices appear to diverge at the top of the stack, however, at what MeeGo describes as the user experience (UX) level. The two UXes discussed on the site are the Handheld UX and the Netbook UX, which correspond neatly to the previous Maemo and Moblin product spaces. What is contained in each UX block is not as clear; each definitely includes the basic user interface and application suite but, according to the architecture diagram, each also incorporates a UI framework. Intriguingly, several places on the MeeGo site mention other UXes, in-vehicle computers and connected televisions in particular — one possibility may come from Genivi, a non-profit industry group working on "In-Vehicle Infotainment" systems.

In the past, the UX layer formed a minor point of contention in the Maemo community, because Nokia kept the source code to several of its top-level applications closed. While most accepted the situation, there were always some who objected to the presence of any closed components in the distribution. Nokia responded by providing a detailed list of the closed components, the reasons why each was not opened, and a process through which developers could request the opening of a particular component.

To see precisely what MeeGo devices will include as UX components, one will have to wait for products to ship. The first release of the MeeGo platform itself is slated for the second quarter of 2010, with products following later in the year. Nokia's Quim Gil says that the Maemo 6 release previously scheduled for 2010 will proceed as planned; Maemo will be rebranded as MeeGo, but will not incorporate any changes to the software.

Communities

Arguably a bigger challenge than producing a new mobile Linux distribution will be merging the existing Moblin and Maemo user and developer communities. Maemo, being the older of the two, has the more established community, with an active forum, a community council, and extensive documentation and processes for independent application developers to get their software tested and packaged for public release.

Moblin's community is smaller, centered around a pair of mailing lists, and it has a smaller garage of third-party applications. On the other hand, the majority of third-party Moblin applications are Moblin builds of existing desktop Linux applications; in contrast, many Maemo projects are either standalone works or heavily-modified applications tailored for the handheld user experience.

Discussions on how to merge the communities have already started within both Moblin and Maemo. The MeeGo site has IRC and mailing list options in place already, but has not yet launched developer documentation or bug tracking. It does, however, outline the participation process for working on MeeGo itself, through the licensing policy and contribution guidelines.

Governance will be provided by a Technical Steering Group, headed by Imad Sousou from Intel and Valtteri Halla from Nokia, which will meet publicly every two weeks. There is no admission or membership process to become a MeeGo contributor, and contributing code will require no copyright assignment:

MeeGo project will neither require nor accept copyright assignment for code contributions. The principle behind this is on the one hand to avoid extra bureaucracy or other obstacles discouraging contributions. On the other hand the idea is to emphasize that contributors themselves carry the rights and responsibilities associated with their code. MeeGo is a common concern of its project community and all participants should represent themselves and continuously influence the result through their own contribution.

Code contributions are encouraged to take place rather in the upstream component projects than in MeeGo. Project focus is in integrating existing upstream components into a platform release rather than in new code development. Therefore the objective is to minimize MeeGo patch against upstream projects and to avoid accumulating patches which serve other purpose than release integration and stabilization.

Finally, MeeGo raises one other community challenge: as the only Linux distribution hosted by the vendor-neutral Linux Foundation itself, it could be perceived as an endorsed threat against the mobile Linux offerings of other Foundation members — such as Google.

Executive Director Jim Zemlin says the Foundation has a long history of industry collaboration on technical, legal, and community efforts; he cites other direct competitors, such as Red Hat, Ubuntu, and Novell, that willingly participate in projects together and benefit from the work. The Linux Foundation's job is to protect the ecosystem and the community, he said, and by hosting the MeeGo project it will provide a neutral forum for progress in development, specifications, and governance. That makes MeeGo similar to other Linux Foundation projects, he added, such as Linuxprinting.org, Carrier Grade Linux, accessibility, the Linux Standard Base, and others.

There is no denying that mobile is a top-priority target for the Linux community. Android, webOS, and ChromeOS fans may perceive MeeGo as a threat but, when one looks beyond the brand names, the playing field is no different from the one on which desktop and server Linux distributions compete: all have access to the same kernel, the same graphics subsystem, utilities, and toolkits. No competitor is at a technical disadvantage.

What does make MeeGo unique in the mobile line-up is that it follows the desktop Linux model so closely. Individually, the Moblin and Maemo projects used that to their advantage, rapidly building robust communities and products. The odds are that their combined effort will play out much the same way; if other, less open, Linux-based mobile stacks see their sales threatened as a result, the best response will be for them to change their game.

Comments (50 posted)

Page editor: Jonathan Corbet

Security

Trust, but verify

By Jake Edge
February 17, 2010

Public-key cryptography has been an enormous boon for securing internet communication, but it suffers from a difficult-to-solve problem: authentication and key management. When presented with a public key over an insecure channel—as part of setting up a secure channel, for example—how does one determine that the key actually belongs to the entity it purports to represent? There are several ways to solve that problem, but none are completely satisfactory. The Monkeysphere project seeks to turn the currently used system on its head, to some extent, by entrusting users, rather than centralized authorities, with the power to bestow trust on a key.

There are three main ways for a key to be "trusted": the key (or its fingerprint) is transferred via some secure channel (by phone or in person, for example), the key is signed by an authority which has been entrusted to only sign valid keys, or the key is signed by "enough" different entities that are fully or partially trusted (i.e. a web of trust). Most of today's secure internet communications use SSL/TLS, which requires keys that have been signed by certificate authorities (CAs), which are "trusted" authorities.

There are two smaller subsets of secure communication, mostly only used by computer-savvy folks, that use other means for determining trust: SSH for interactive encrypted communication and PGP for encrypted email. SSH relies on key fingerprints being exchanged securely, at least in theory, while PGP relies on a web of trust. Monkeysphere's first project is to move the PGP web of trust into the SSH world.

A web of trust is a decentralized, user-controlled key management scheme whereby keys are signed by multiple entities, each using its own keys. The signature can be verified based on the public key of the signer and the user can decide which signers are to be trusted—and at what level to trust them. In practice, if Adam signs Bonnie's key, and Clarisse trusts Adam, that means that Clarisse can trust Bonnie's key. Whether Clarisse should trust David's key, which is signed by Bonnie, depends to a large extent on how much she trusts Adam.

Key signing only implies that the signer verified the identity of the key holder, i.e. that the key holder is the same person or organization that is identified in the key. It is not necessarily an indication that the key holder should be trusted in a general sense, only that the key holder is who they say (via the key) they are. The web of trust used by the Monkeysphere OpenSSH framework is based on the GNU Privacy Guard (GnuPG or GPG) implementation of the OpenPGP standard (RFC 4880).

A user can privately assign a level of trust to a particular signer in their GPG configuration. A user can also issue a trust signature that publicly specifies the trust level they have for a particular signer. So, continuing the example above, if Adam has published a trust signature saying that Bonnie is fully trusted by him, and Clarisse fully trusts Adam (publicly or privately), she is likely to trust David's key. The number of signatures and the trust levels required to fully trust a key are configurable, allowing users to decide what their trust parameters are.
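GnuPG's default policy, for example, considers a key valid if it is signed by one fully-trusted signer or by three marginally-trusted ones. A toy model of that calculation, with invented names and data structures, might look like:

    # Toy model of GnuPG-style key validity; the structures here are
    # invented for illustration, not GnuPG's actual internals.
    FULL, MARGINAL = "full", "marginal"

    def key_is_valid(signers, trust, completes_needed=1, marginals_needed=3):
        """signers: the set of keys that signed the target key;
        trust: the trust level the user has assigned to each signer."""
        full = sum(1 for s in signers if trust.get(s) == FULL)
        marginal = sum(1 for s in signers if trust.get(s) == MARGINAL)
        return full >= completes_needed or marginal >= marginals_needed

    trust = {"adam": FULL}
    print(key_is_valid({"adam"}, trust))    # Bonnie's key: valid
    print(key_is_valid({"bonnie"}, trust))  # David's key: not valid until Bonnie is trusted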

What Monkeysphere has done is to add some Perl around OpenSSH to manage keys, along with the known_hosts and authorized_keys files which normally live in the ~/.ssh directory. No modification to the OpenSSH client or server is required, though using Monkeysphere requires that all outbound connections go through the "monkeysphere ssh-proxycommand" command. On the server side, OpenSSH needs to be configured to use an alternate, Monkeysphere-managed AuthorizedKeysFile. The documentation page outlines the configuration needed for OpenSSH and GPG on the client or server sides.
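In practice the configuration is small; it amounts to something like the following (the paths and option values here are recalled from the project's documentation and may differ between versions):

    # Client side, in ~/.ssh/config: route connections through Monkeysphere
    Host *
        ProxyCommand monkeysphere ssh-proxycommand %h %p

    # Server side, in sshd_config: point OpenSSH at the managed keys file
    AuthorizedKeysFile /var/lib/monkeysphere/authorized_keys/%u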

For SSH, especially at sites with lots of hosts, this means that users and system administrators don't have to laboriously propagate keys into authorized_keys files on each new system. Instead, they can declare that any key signed by their organization's key is trusted. Each user then has their key signed and can log in to any machine. Of course, ensuring that the organizational keys don't get lost, or fall into the wrong hands, is imperative.

While it is much more user-centric than a trusted-authority mechanism, and does not require a separate secure channel for fingerprint exchange, a web of trust is no panacea. There are still issues with handling key revocations, especially if the user loses their key. A bigger problem may be building a large enough web of trust, with enough trusted key signers, that users' keys (especially new users' keys) have a reasonable shot at being accepted.

The very user-centrism that makes a web of trust so intriguing to those who care about secure communications may in fact be one of its biggest downfalls. Non-technical users have shown very little inclination toward wanting any control over which keys they accept or decline. Someone faced with deciding whom to trust, at what level, and how many signatures of which types to require is likely to throw up their hands in frustration. Non-technical users typically don't use SSH or encrypted email, but they do use other services, like SSL/TLS-encrypted web traffic, that might also benefit from a web of trust model.

LWN commenter dkg pointed to Monkeysphere (or similar techniques) as a possible solution for the problem of blindly trusting whatever CA root certificates a browser installs: "The more communications security is in the hands of the end users, with tools that are intelligible to end users, the more we can reject these abusive (or at least easily abused) centralized authorities." The requirement for tools that are intelligible to end users is both the most important, and probably the hardest, part to get right.

Tools like Monkeysphere, and efforts like those of CAcert, are good starting points. How well those can translate into workable, user-friendly, user-centric authentication and key management mechanisms is an open question. While those of us who are technically inclined will be able to use a web of trust if desired, it would be nice if, one day, our parents, siblings, and others who aren't so technical could also stop relying on potentially corrupt organizations for their internet communication security. A web of trust may be a big step down that path.

Comments (38 posted)

Brief items

Debian to start deploying DNSSEC

The Debian system administrators (DSA) have announced that they will soon be deploying DNSSEC for selected Debian zones. "The plan is to introduce DNSSEC in several steps so that we can react to issues that arise without breaking everything at once. [...] We will start with serving signed debian.net and debian.com zones. Assuming nobody complains loudly enough the various reverse zones and finally the debian.org zone will follow. Once all our zones are signed we will publish our trust anchors in ISC's DLV Registry, again in stages. [...] The various child zones that are handled differently from our normal DNS infrastructure (mirror.debian.net, alioth, bugs, ftp, packages, security, volatile, www) will follow at a later date." (Thanks again to Paul Wise.)

Comments (11 posted)

Mark Cox: Red Hat's Top 11 Most Serious Flaw Types for 2009

Red Hat's director of security response, Mark Cox, has posted some information about which security flaw types were most prevalent in the security fixes made by Red Hat in 2009. He compares those fixes with the Top 25 Most Dangerous Programming Errors that were just published by MITRE and the SANS Institute. "This quick review shows us that 2009 was the year of the kernel NULL pointer dereference flaw, as they could allow local untrusted users to gain privileges, and several public exploits to do just that were released. For Red Hat, interactions with SELinux prevented them being able to be easily mitigated, until the end of the year when we provided updates. Now, in 2010, the upstream Linux kernel and many vendors ship with protections to prevent kernel NULL pointers leading to privilege escalation."

Comments (none posted)

New vulnerabilities

ajaxterm: denial of service

Package(s): ajaxterm   CVE #(s): CVE-2009-1629
Created: February 12, 2010   Updated: December 30, 2010
Description: From the Debian advisory:

It was discovered that ajaxterm, a web-based terminal, generates weak and predictable session IDs, which might be used to hijack a session or cause a denial of service attack on a system that uses ajaxterm.

Alerts:
Fedora FEDORA-2010-18867 Ajaxterm 2010-12-13
Debian DSA-1994-1 ajaxterm 2010-02-11

Comments (none posted)

fetchmail: arbitrary code execution

Package(s): fetchmail   CVE #(s): CVE-2010-0562
Created: February 16, 2010   Updated: June 2, 2010
Description: From the Mandriva advisory:

The sdump function in sdump.c in fetchmail 6.3.11, 6.3.12, and 6.3.13, when running in verbose mode on platforms for which char is signed, allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via an SSL X.509 certificate containing non-printable characters with the high bit set, which triggers a heap-based buffer overflow during escaping.

Alerts:
Gentoo 201006-12 fetchmail 2010-06-01
Fedora FEDORA-2010-3800 fetchmail 2010-03-06
SuSE SUSE-SR:2010:005 fetchmail, krb5, rubygem-actionpack-2_1, libexpat0, unbound, apache2-mod_php5/php5 2010-02-23
Mandriva MDVSA-2010:037 fetchmail 2010-02-16

Comments (none posted)

flash-plugin: information disclosure

Package(s): flash-plugin   CVE #(s): CVE-2010-0186 CVE-2010-0187 CVE-2010-0188
Created: February 12, 2010   Updated: January 21, 2011
Description: From the Red Hat advisory:

If a victim loaded a web page containing specially-crafted SWF content, it could cause Flash Player to perform unauthorized cross-domain requests, leading to the disclosure of sensitive data.

Alerts:
Gentoo 201101-09 flash-player 2011-01-21
Gentoo 201009-05 acroread 2010-09-07
Red Hat RHSA-2010:0470-01 flash-plugin 2010-06-14
SuSE SUSE-SR:2010:006 2010-03-15
Red Hat RHSA-2010:0103-01 flash-plugin 2010-02-12
SuSE SUSE-SR:2010:004 moodle, xpdf, pdns-recursor, pango, horde, gnome-screensaver, fuse, gnutls, flash-player 2010-02-16
Red Hat RHSA-2010:0102-01 flash-plugin 2010-02-12
Pardus 2010-37 flashplugin 2010-02-25
Red Hat RHSA-2010:0114-01 acroread 2010-02-18

Comments (none posted)

fwbuilder: symlink attack

Package(s): fwbuilder   CVE #(s):
Created: February 16, 2010   Updated: February 17, 2010
Description: From the Red Hat bugzilla:

An insecure temporary file handling in the generated iptables script was found in fwbuilder. A local attacker could use this flaw to perform symlink attack against user running this script, which will result in overwrite of arbitrary file writable by this script.

Alerts:
Fedora FEDORA-2010-0157 libfwbuilder 2010-01-05
Fedora FEDORA-2010-0157 fwbuilder 2010-01-05

Comments (none posted)

gnome-screensaver: lock bypass

Package(s): gnome-screensaver   CVE #(s): CVE-2010-0422
Created: February 16, 2010   Updated: March 8, 2010
Description: From the Red Hat bugzilla:

gnome-screensaver can lose its keyboard grab when locked, exposing the system to intrusion by adding and removing monitors.

Alerts:
Ubuntu USN-907-1 gnome-screensaver 2010-03-08
Fedora FEDORA-2010-1855 gnome-screensaver 2010-02-16
SuSE SUSE-SR:2010:004 moodle, xpdf, pdns-recursor, pango, horde, gnome-screensaver, fuse, gnutls, flash-player 2010-02-16

Comments (none posted)

kernel: several vulnerabilities

Package(s): kernel   CVE #(s): CVE-2010-0410 CVE-2010-0415
Created: February 12, 2010   Updated: October 8, 2010
Description: From the Red Hat bugzilla: Sebastian Krahmer found a problem in the drivers/connector/connector.c code where users could send/allocate arbitrary amounts of NETLINK_CONNECTOR messages to the kernel, causing an OOM condition, killing selected processes, or halting the system. (CVE-2010-0410)

From the Red Hat bugzilla: Ramon de C. Valle spotted a problem in sys_move_pages, where the "node" value is read from userspace, but not limited to the node set within the kernel itself. Due to the bit tests in mm/migrate.c:do_move_pages it is easy to read out kernel memory (as node can also be negative). (CVE-2010-0415)

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Mandriva MDVSA-2010:188 kernel 2010-09-23
Mandriva MDVSA-2010:198 kernel 2010-10-07
CentOS CESA-2010:0398 kernel 2010-05-28
Red Hat RHSA-2010:0398-01 kernel 2010-05-06
SuSE SUSE-SA:2010:023 kernel 2010-05-06
Mandriva MDVSA-2010:088 kernel 2010-04-30
SuSE SUSE-SA:2010:019 kernel 2010-03-30
Mandriva MDVSA-2010:066 kernel 2010-03-24
Red Hat RHSA-2010:0161-01 kernel-rt 2010-03-23
SuSE SUSE-SA:2010:018 kernel 2010-03-22
CentOS CESA-2010:0147 kernel 2010-03-18
Red Hat RHSA-2010:0147-01 kernel 2010-03-16
Ubuntu USN-914-1 linux, linux-source-2.6.15 2010-03-17
SuSE SUSE-SA:2010:016 kernel 2010-03-08
SuSE SUSE-SA:2010:014 kernel 2010-03-03
Fedora FEDORA-2010-1804 kernel 2010-02-12
Pardus 2010-35 kernel kernel-pae 2010-02-25
Debian DSA-2003-1 linux-2.6 2010-02-22
Fedora FEDORA-2010-1787 kernel 2010-02-12
Debian DSA-1996-1 linux-2.6 2010-02-12
Debian DSA-2004-1 linux-2.6.24 2010-02-27

Comments (none posted)

mod_security: multiple vulnerabilities

Package(s): mod_security   CVE #(s):
Created: February 16, 2010   Updated: February 17, 2010
Description: From the Red Hat bugzilla:

Multiple security flaws, which might lead to bypass of intended security restrictions and denial of service, have been reported and fixed in ModSecurity.

Alerts:
Fedora FEDORA-2010-1903 mod_security 2010-02-16
Fedora FEDORA-2010-1862 mod_security 2010-02-16

Comments (none posted)

openoffice.org: buffer overflows

Package(s): openoffice.org   CVE #(s): CVE-2009-2140
Created: February 11, 2010   Updated: May 24, 2010
Description: From the Mandriva alert:

Multiple heap-based buffer overflows allow remote attackers to execute arbitrary code via a crafted EMF+ file.

Alerts:
Mandriva MDVSA-2010:105 openoffice.org 2010-05-21
Mandriva MDVSA-2010:091 openoffice.org 2010-05-04
Mandriva MDVSA-2010:056 openoffice.org 2010-03-05
Mandriva MDVSA-2010:035 openoffice.org 2010-02-11

Comments (none posted)

openoffice.org: multiple vulnerabilities

Package(s): openoffice.org   CVE #(s): CVE-2009-2949 CVE-2009-2950 CVE-2009-3301 CVE-2009-3302
Created: February 12, 2010   Updated: November 8, 2010
Description: From the Red Hat advisory:

An integer overflow flaw, leading to a heap-based buffer overflow, was found in the way OpenOffice.org parsed XPM files. An attacker could create a specially-crafted document, which once opened by a local, unsuspecting user, could lead to arbitrary code execution with the permissions of the user running OpenOffice.org. Note: This flaw affects embedded XPM files in OpenOffice.org documents as well as stand-alone XPM files. (CVE-2009-2949)

An integer underflow flaw and a boundary error flaw, both possibly leading to a heap-based buffer overflow, were found in the way OpenOffice.org parsed certain records in Microsoft Word documents. An attacker could create a specially-crafted Microsoft Word document, which once opened by a local, unsuspecting user, could cause OpenOffice.org to crash or, potentially, execute arbitrary code with the permissions of the user running OpenOffice.org. (CVE-2009-3301, CVE-2009-3302)

A heap-based buffer overflow flaw, leading to memory corruption, was found in the way OpenOffice.org parsed GIF files. An attacker could create a specially-crafted document, which once opened by a local, unsuspecting user, could cause OpenOffice.org to crash. Note: This flaw affects embedded GIF files in OpenOffice.org documents as well as stand-alone GIF files. (CVE-2009-2950)

Alerts:
Gentoo 201408-19 openoffice-bin 2014-08-31
Mandriva MDVSA-2010:221 openoffice.org 2010-11-05
Pardus 2010-67 openoffice 2010-06-04
SuSE SUSE-SA:2010:017 OpenOffice_org 2010-03-16
CentOS CESA-2010:0101 openoffice.org 2010-02-14
CentOS CESA-2010:0101 openoffice.org 2010-02-14
Fedora FEDORA-2010-1941 openoffice.org 2010-02-16
Fedora FEDORA-2010-1847 openoffice.org 2010-02-16
Ubuntu USN-903-1 openoffice.org 2010-02-24
Debian DSA-1995-1 openoffice.org 2010-02-12
Red Hat RHSA-2010:0101-02 openoffice.org 2010-02-12

Comments (none posted)

openoffice.org: insufficient macro security

Package(s): openoffice.org   CVE #(s): CVE-2010-0136
Created: February 12, 2010   Updated: November 8, 2010
Description: From the Debian advisory:

It was discovered that macro security settings were insufficiently enforced for VBA macros.

Alerts:
Mandriva MDVSA-2010:221 openoffice.org 2010-11-05
SuSE SUSE-SA:2010:017 OpenOffice_org 2010-03-16
Debian DSA-1995-1 openoffice.org 2010-02-12
Ubuntu USN-903-1 openoffice.org 2010-02-24

Comments (none posted)

otrs2: SQL injection vulnerability

Package(s): otrs2   CVE #(s): CVE-2010-0438
Created: February 11, 2010   Updated: August 2, 2010
Description: From the Debian alert:

It was discovered that otrs2, the Open Ticket Request System, does not properly sanitise input data that is used on SQL queries, which might be used to inject arbitrary SQL to, for example, escalate privileges on a system that uses otrs2.

Alerts:
SUSE SUSE-SR:2010:014 OpenOffice_org, apache2-slms, aria2, bogofilter, cifs-mount/samba, clamav, exim, ghostscript-devel, gnutls, krb5, kvirc, lftp, libpython2_6-1_0, libtiff, libvorbis, lxsession, mono-addon-bytefx-data-mysql/bytefx-data-mysql, moodle, openldap2, opera, otrs, popt, postgresql, python-mako, squidGuard, vte, w3m, xmlrpc-c, XFree86/xorg-x11, yast2-webclient 2010-08-02
openSUSE openSUSE-SU-2010:0366-1 otrs 2010-07-13
Debian DSA-1993-1 otrs2 2010-02-10

Comments (none posted)

postfix: insecure default configuration

Package(s): postfix   CVE #(s): CVE-2010-0230
Created: February 15, 2010   Updated: February 17, 2010
Description:

From the SUSE advisory:

The value of SMTPD_LISTEN_REMOTE accidentally defaulted to 'yes'. The postfix smtp daemon therefore was reachable over the network by default. This update resets the value to 'no' in /etc/sysconfig/mail. If you intentionally want postfix to listen for remote connections you need to manually set it to 'yes' again.

Alerts:
SuSE SUSE-SA:2010:011 postfix 2010-02-15

Comments (none posted)

ruby: arbitrary code execution

Package(s): ruby1.9   CVE #(s): CVE-2009-4124
Created: February 16, 2010   Updated: February 17, 2010
Description: From the Ubuntu advisory:

Emmanouel Kellinis discovered that Ruby did not properly handle certain string operations. An attacker could exploit this issue and possibly execute arbitrary code with application privileges.

Alerts:
Ubuntu USN-900-1 ruby1.9 2010-02-16

Comments (none posted)

samba: read/write access on protected files

Package(s): samba   CVE #(s):
Created: February 15, 2010   Updated: February 17, 2010
Description:

From the Pardus advisory:

The weakness is caused due to the insecure "wide links" option being enabled by default, which allows the creation of symlinks to directories placed outside a writable share. This can be exploited to gain read and write access to restricted directories with the privileges of the e.g. guest account user via directory traversal attacks.

Successful exploitation without authentication requires that a public writable share is exported and that the option "wide links" is set to "yes" (default).

Alerts:
Pardus 2010-32 samba 2010-02-14

Comments (none posted)

sun-java: arbitrary code execution

Package(s): sun-jdk sun-jre   CVE #(s):
Created: February 15, 2010   Updated: February 17, 2010
Description:

From the Pardus advisory:

The vulnerability is caused from package.py, postInstall script of sun-java package. It tries to create /opt/sun-jdk/jre/.systemPrefs directory with "os.makedirs()" function, however default permission of the directories created by os.makedirs() is 0777. This allows anyone to replace sun java binaries, which can be used to execute arbitrary code.

NOTE: This vulnerability is Pardus specific.

Alerts:
Pardus 2010-31 sun-jdk sun-jre 2010-02-14

Comments (none posted)

tomcat6: multiple vulnerabilities

Package(s): tomcat6   CVE #(s): CVE-2009-2693 CVE-2009-2901 CVE-2009-2902
Created: February 12, 2010   Updated: December 28, 2012
Description: From the Ubuntu advisory:

It was discovered that Tomcat did not correctly validate WAR filenames or paths when deploying. A remote attacker could send a specially crafted WAR file to be deployed and cause arbitrary files and directories to be created, overwritten, or deleted.

Alerts:
openSUSE openSUSE-SU-2012:1700-1 tomcat6 2012-12-27
openSUSE openSUSE-SU-2013:0147-1 tomcat6 2013-01-23
openSUSE openSUSE-SU-2012:1701-1 tomcat 2012-12-27
Gentoo 201206-24 tomcat 2012-06-24
Debian DSA-2207-1 tomcat5.5 2011-03-30
Pardus 2011-38 tomcat-servlet-api 2011-02-14
Mandriva MDVSA-2010:177 tomcat5 2010-09-12
Mandriva MDVSA-2010:176 tomcat5 2010-09-12
CentOS CESA-2010:0580 tomcat5 2010-08-03
Red Hat RHSA-2010:0582-01 tomcat5 2010-08-02
Red Hat RHSA-2010:0580-01 tomcat5 2010-08-02
SuSE SUSE-SR:2010:008 gnome-screensaver tomcat libtheora java-1_6_0-sun samba 2010-04-07
Ubuntu USN-899-1 tomcat6 2010-02-11

Comments (none posted)

webmin: cross-site scripting

Package(s): webmin   CVE #(s): CVE-2009-4568
Created: February 15, 2010   Updated: February 17, 2010
Description:

From the Mandriva advisory:

[...] a cross-site scripting issue which allows remote attackers to inject arbitrary web script or HTML via unspecified vectors (CVE-2009-4568).

Alerts:
Mandriva MDVSA-2010:036 webmin 2010-02-12

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.33-rc8, released on February 12. Linus says:

I think this is going to be the last -rc of the series, so please do test it out. A number of regressions should be fixed, and while the regression list doesn't make me _happy_, we didn't have the kind of nasty things that went on before -rc7 and made me worried.

Full details can be found in the changelog.

According to the latest regression report, the number of unresolved regressions has risen to 31, the highest point yet in this development cycle.

Comments (4 posted)

Quotes of the week

There _are_ things we can do though. Detect a write to the old file and emit a WARN_ON_ONCE("you suck"). Wait a year, turn it into WARN_ON("you really suck"). Wait a year, then remove it.
-- Feature deprecation Andrew Morton style

The post-Google standard company perks - free food, on-site exercise classes, company shuttles - make it trivial to speak only to fellow employees in daily life. If you spend all day with your co-workers, socialize only with your co-workers, and then come home and eat dinner with - you guessed it - your co-worker, you might go several years without hearing the words, "Run Solaris on my desktop? Are you f-ing kidding me?"
-- Valerie Aurora

Everybody takes it for granted to run megabytes of proprietary object code, without any memory protection, attached to an insecure public network (GSM). Who would do that with his PC on the Internet, without a packet filter, application level gateways and a constant flow of security updates of the software? Yet billions of people do that with their phones all the time.
-- Harald Welte

Comments (9 posted)

Compression formats for kernel.org

By Jonathan Corbet
February 17, 2010
The kernel.org repository depends heavily on compression to keep its storage and bandwidth expenses down. An uncompressed tarball for the 2.6.32 release weighs in at 365MB; if downloaders grabbed the data in this format, the resulting bandwidth usage would be huge. So kernel.org does not make uncompressed tarballs available; instead, one can choose between versions compressed with gzip (79MB) or bzip2 (62MB). Bzip2 is the newer choice; it took a while to catch on because the needed tools were not widely shipped. Now, though, the folks at kernel.org are considering making a change in the compression formats used there.

What's driving this discussion is the availability of the XZ tool, which is based on the LZMA compression algorithm. XZ offers better compression performance - 53MB on that 2.6.32 tarball - but it suffers from a familiar problem: the tools are not yet widely available in distributions, especially those of the "enterprise" variety. This has led to pushback against the idea of standardizing on XZ in the near future, as can be seen in this comment from Ted Ts'o:

Keep in mind that there are people out there who are still using RHEL 3, and some of them might want to download from ftp.kernel.org. So those people who are suggesting that we replace .gz files with .xz on kernel.org are *really* smoking something good.

In fact, there is little pressure to replace the gzip format anytime soon. Its compression performance may not be the best, but it has the advantage of being far faster than any of the alternatives - and the discussion made it clear that some users do care about decompression time. What is more likely is that XZ will eventually displace bzip2. At that point there would be a clear choice: speed and widespread availability, or the best available compression. Even that change, though, is likely to be at least a year away; in the meantime, kernel.org will probably carry files in all three formats.

(This discussion also included a side thread on changing the 2.6.xx numbering scheme. Once again, though, the expected flame wars failed to materialize. There just does not seem to be much interest in or energy for this particular change.)

Comments (19 posted)

Extended error reporting

By Jonathan Corbet
February 17, 2010
Linux contains a number of system calls which do complex things; they take large structures as input, operate on significant internal state, and, perhaps, return some sort of complicated output data. The normal status returned from these system calls, however, is compressed down into a single integer called errno. Application programmers dealing with certain subsystems (Video4Linux2 being your editor's favorite in this regard) will be all too familiar with the process of trying to figure out what the problem is when the kernel says only "it failed."
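
To make the problem concrete, here is a minimal sketch of a V4L2 format-setting call; the device path and format values are arbitrary assumptions. If the ioctl() fails, the single errno value printed at the end is all the diagnostic information the application receives:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_format fmt;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width       = 640;
        fmt.fmt.pix.height      = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field       = V4L2_FIELD_INTERLACED;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
            /* One integer is all the kernel provides; which of the
             * fields above the driver objected to is anyone's guess. */
            fprintf(stderr, "VIDIOC_S_FMT: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }

Whether the driver disliked the resolution, the pixel format, or the field setting cannot be recovered from that one integer.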

Andi Kleen describes the problem this way:

I always describe that as a the "ed approach to error handling". Instead of giving a error message you just give ?. Just ? happens to be EINVAL in Linux.

My favourite example of this is the configuration of the networking queueing disciplines, which configure complicated data structures and algorithms and in many cases have tens of different error conditions based on the input parameters -- and they all just report EINVAL.

It would be nice to provide application developers with better information than this. A brief discussion covered some of the options:

  • Use printk() to put information into the system logfile. This approach is widely used, but it bloats the kernel with string data, risks flooding the logs, and the resulting information may not be easily accessible to an unprivileged programmer.

  • Extend specific system calls to enable them to provide richer status information. Just adding a new version of ioctl() would address many of the worst problems.

  • Create an errno-like mechanism by which any system call could return extended information. That information could be an error string, some sort of special code, or, as Alan Cox suggested, a pointer to the structure field which caused the problem.

One could certainly argue that the narrow errno mechanism is showing its age and could use an upgrade. Any enhancements, though, would be Linux-specific and non-POSIX, which always tends to limit their uptake. They would also have to be lived with forever, and, thus, would require careful design. So we're unlikely to see a solution in the mainline anytime soon, even if somebody does take up the challenge.

Comments (9 posted)

Kernel development news

Merging kdb and kgdb

By Jake Edge
February 17, 2010

It was something of a surprise when Linus Torvalds merged kgdb—a stub to talk to the gdb debugger—back in the 2.6.26 merge window, because of his well-known disdain for kernel debuggers. But there is another kernel debugging solution that has long been out of the mainline: kdb. Jason Wessel has proposed merging the two solutions by reworking kgdb to use the "kdb shell" underneath, which would lead to both solutions being available for kernel hackers.

The two debuggers serve different purposes, with kdb having much less functionality, but they both have uses. Kgdb allows source-level debugging using gdb over a serial line, but that requires a separate system. For systems where it is painful or impractical to set up a serial connection, kdb may provide enough capability to debug a problem. In addition, things like kernel modesetting (KMS) allow for additional features that kdb has lacked. Wessel described one possibility:

A 2010 example of where kdb can be useful over kgdb is where you have a small netbook, no serial ports etc... and you are running X and your file system driver crashes the kernel. With kdb plus kms you can get an opportunity to see the crash which would have otherwise been lost from /var/log/messages because the crash was in the file system driver.

While kgdb allows access to all of the standard debugging commands that gdb provides, kdb has a much more limited command set. One can examine and change memory locations or registers, set breakpoints, and get a backtrace of the stack, but those commands typically require using addresses, rather than symbolic names. Currently, the best reference for kdb commands comes from a developerWorks article, though Wessel plans to change that. There is some documentation that comes with the patches, but a command reference will depend on exactly which pieces, if any, actually land in the mainline.

It should be noted that one of the capabilities that was removed from kdb as part of the merger is the disassembler. It was x86-specific, and the new code is "99% platform independent", according to the FAQ about the merged code. Because kgdb is implemented for many architectures, rebuilding kdb atop the common core brought kdb support to many more of them. Instead of just the x86 family, kdb now supports arm, blackfin, mips, sh, powerpc, and sparc.

In addition, kgdb and kdb can work together. From a running kgdb session, one can use the gdb monitor command to access kdb commands. There are several that might be helpful like ps for a process list or dmesg to see log output.

The FAQ lists a number of other advantages that would come from the merge, beyond just getting kdb into the mainline so that its users no longer have to patch their kernels. The basic idea behind the advantages listed is to unite the users and developers of kgdb and kdb so that they are all pulling in the same direction, because "both kdb and kgdb have similar needs in terms of how they integrate into the kernel". There have been arguments in the past about which of the two solutions is best, but, since they serve different use cases, having both available would have another benefit: "No longer will people have to debate which is better, kdb or kgdb, why do we have only one... Just go use the best tool for the job."

Wessel notes that Ubuntu has enabled kgdb in recent kernels, which is something he would like to see done by other distributions. If kdb is available, that too could be enabled, which would make it easier for users to access the functionality:

My other hope is that the new kdb is much easier to use in the sense that the barrier of entry is much lower. For example, someone with a laptop running a kernel with a kdb enabled kernel can use it as easily as:
    echo kms,kbd > /sys/module/kgdboc/parameters/kgdboc
    echo g > /proc/sysrq-trigger
    dmesg
    bt
    go
And voila you just ran the kernel debugger.

In the example above, Wessel shows how to enable kdb (for keyboard (kbd) and KMS operation), then trap into it using sysrq-g (once enabled, kdb will also be invoked if there is a panic or oops). The following three commands are kdb commands for looking at log output, getting a stack backtrace, and continuing execution.

The patches themselves are broken up into three separate patchsets: the first and largest adds the kdb infrastructure into kernel/debug/ and moves kgdb.c into that directory, the second adds KMS support for kdb along with an experimental patch to do atomic modesetting for the i915 graphics driver, and the third allows kernel debugging via kdb or kgdb early in the boot process, starting from the point where earlyprintk() is available. Wessel is targeting 2.6.34 and, at least so far, the patches have been well received. The most recent posting is version 3 of the patchset, with a long list of changes made in response to earlier comments. Furthermore, an RFC about the idea last May gained a fair number of comments that clearly indicated there was interest in kdb and merging it with the kgdb code.

Sharp-eyed readers will note some similarities between this proposal and the recent utrace push. In both cases, an existing debugging facility was rewritten using a new core, but there are differences as well. Unlike utrace, the kdb/kgdb patches directly provide some lacking user-space functionality. Whether that is enough to overcome Torvalds's semi-hostile attitude towards kernel debuggers—though the inclusion of kgdb would seem to indicate some amount of softening—remains to be seen.

Comments (7 posted)

How old is our kernel?

By Jonathan Corbet
February 17, 2010
April 2005 was a bit of a tense time in the kernel development community. The BitKeeper tool which had done so much to improve the development process had suddenly become unavailable, and it wasn't clear what would replace it. Then Linus appeared with a new system called git; the current epoch of kernel development can arguably be dated from then. The opening event of that epoch was commit 1da177e4, the changelog of which reads:

Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!

The community did, indeed, let it rip; some 180,000 changesets have been added to the repository since then. Typically hundreds of thousands of lines of code are changed with each three-month development cycle. A while back, your editor began to wonder how much of the kernel had actually been changed, and how much of our 2.6.33-to-be kernel dates back to 2.6.12-rc2, which was tagged at the opening of the git era? Was there anything left of the kernel we were building in early 2005?

Answering this question is a simple matter of bashing out some ugly scripts and dedicating many hours of processing time. In essence, the "git blame" command can be used to generate an annotated version of a file which lists the last commit to change each line of code. Those commit IDs can be summed, then associated with major version releases. At the end of the process, one has a simple table showing the percentage of the current kernel code base which was created for each major release since 2.6.12. Here's what it looks like:

[Pretty bar chart]

In summary: just over 31% of the kernel tree dates back to 2.6.12, and has not been modified since then. Our kernel may be changing quickly, but parts of it have not changed at all for nearly five years. Since then, we see a steady stream of changes, with more recent kernels being more strongly represented than the older ones. That curve will partly be a result of the general increase in the rate of change over time; 2.6.13 had fewer than 4,000 commits, while 2.6.33 will have almost 11,000. Still, one has to wonder what happened with 2.6.20 (5,000 commits) to cause that release to represent less than 2% of the total code base.

Much of the really old material is interspersed with newer lines in many files; comments and copyright notices, in particular, can go unchanged for a very long time. The 2.6.12 top-level makefile set VERSION=2 and PATCHLEVEL=6, and those lines have not changed since; the next line (SUBLEVEL=33) was changed in December.

There are interesting conclusions to be found at the upper end of the graph as well. Using this yardstick, 2.6.33 is the smallest development cycle we have seen in the last year, even though this cycle will have replaced some code added during the previous cycles. 4.2% of the code in 2.6.33 was last touched in the 2.6.33 cycle, while each of the previous four kernels (2.6.29 through 2.6.32) still represents more than 5.5% of the code to be shipped in 2.6.33.

Another interesting exercise is to look for entire files which have not been touched in five years. Given the amount of general churn and API change which has happened over that time, files which have not changed at all have a good chance of being entirely unused. Here is a full list of files which are untouched since 2.6.12 - all 1062 of them. Some conclusions:

  • Every kernel tarball carries around drivers/char/ChangeLog, which is mostly dedicated to documenting the mid-90's TTY exploits of Ted Ts'o. There is only one change since 1998, and that was in 2001. Files like this may be interesting from a historical point of view, but they have little relevance to current kernels.

  • Unsurprisingly, the documentation directory contains a great deal of material which has not been updated in a long time. Much of it need not change; the means by which one configures an ISA Sound Blaster card is pretty much as it always was - assuming one can find such a card and an ISA bus to plug it into. Similarly, Klingon language support (Documentation/unicode.txt), Netwinder support, and such have not seen much development activity recently, so the documentation can be deemed to be current, if not particularly useful. All told, 41% of the documentation directory dates back to 2.6.12. There was a big surge of documentation work in 2.6.32; without that, a larger percentage of this subtree would look quite old.

  • Some old interfaces haven't changed in a long time, resulting in a lot of static files in include/. <linux/sort.h> declares sort(), which is used in a number of places. <linux/fcdevice.h> declares alloc_fcdev(), and includes a warning that "This file will get merged with others RSN." Much of the sunrpc interface has remained static for a long time as well.

  • Ancient code abounds in the driver tree, though, perhaps unsurprisingly, old header files are much more common than old C files. The ISDN driver tree has been quite static.

  • Much of sound/oss has not been touched for a long time and must be nicely filled with cobwebs by now; there hasn't been much of a reason to touch the OSS code for some time.

  • net/decnet/TODO contains a "quick list of things that need finishing off"; it, too, hasn't been changed in the git era. One wonders how the DECnet hackers are doing on that list...

So which subsystem is the oldest? This plot looks at the kernel subsystems (as defined by top-level directories) and gives the percentage of 2.6.12 code in each:

[Oldest subsystems]

The youngest subsystem, unsurprisingly, is tools/, which did not exist prior to 2.6.29. Among subsystems which did exist in 2.6.12, the core kernel, with about 13% of its code dating from that release, is the newest. At the other end, the sound subsystem is more than 45% 2.6.12 code - the highest in the kernel. For those who are curious about the age distribution in specific subsystems, this page contains a chart for each.

In summary: even in a code base which is evolving as rapidly as the kernel, there is a lot of code which has not been touched - even by coding style or white space fixes - in the last five years. Code stays around for a long time.

(For those who would like to play with this kind of data, the scripts used have been folded into the gitdm repository at git://git.lwn.net/gitdm.git).
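
For illustration, the per-file half of that counting can be sketched in a few dozen lines of C. This is a rough reimplementation for demonstration purposes, not the actual gitdm code; it assumes git is in the path, and it omits walking the whole tree and mapping commits to releases:

    /* tally.c: run "git blame" on one file and count surviving
     * lines per commit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_COMMITS 8192

    static struct { char id[42]; long lines; } commits[MAX_COMMITS];
    static int ncommits;

    int main(int argc, char **argv)
    {
        char cmd[512], line[1024];
        FILE *blame;
        int i;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file-in-git-tree>\n", argv[0]);
            return 1;
        }
        /* -l gives the full commit ID at the start of each line;
         * boundary commits show up with a leading "^" */
        snprintf(cmd, sizeof(cmd), "git blame -l -- '%s'", argv[1]);
        blame = popen(cmd, "r");
        if (blame == NULL) {
            perror("popen");
            return 1;
        }
        while (fgets(line, sizeof(line), blame)) {
            char id[42];

            if (sscanf(line, "%41s", id) != 1)
                continue;
            for (i = 0; i < ncommits; i++)
                if (strcmp(commits[i].id, id) == 0)
                    break;
            if (i == ncommits) {
                if (ncommits >= MAX_COMMITS)
                    continue;      /* table full; skip this line */
                strcpy(commits[ncommits++].id, id);
            }
            commits[i].lines++;
        }
        pclose(blame);

        for (i = 0; i < ncommits; i++)
            printf("%8ld %s\n", commits[i].lines, commits[i].id);
        return 0;
    }

Running something like this over every file in the tree, then mapping each commit to the release containing it, produces the data behind the chart above.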

Note: this article has been edited to fix an error which overstated the amount of 2.6.12 code remaining in the full kernel.

Comments (55 posted)

Huge pages part 1 (Introduction)

February 16, 2010

This article was contributed by Mel Gorman

[Editor's note: this article is the first in a five-part series on the use of huge pages with Linux. We are most fortunate to have core VM hacker Mel Gorman as the author of these articles! The remaining installments will appear in future LWN Weekly Editions.]

One of the driving forces behind the development of Virtual Memory (VM) was to reduce the programming burden associated with fitting programs into limited memory. A fundamental property of VM is that the CPU references a virtual address that is translated via a combination of software and hardware to a physical address. This allows information to be paged into memory only on demand (demand paging), improving memory utilisation; it allows modules to be placed arbitrarily in memory for linking at run time; and it provides a mechanism for the protection and controlled sharing of data between processes. Use of virtual memory is so pervasive that it has been described as “one of the engineering triumphs of the computer age” [denning96], but this indirection is not without cost.

Typically, the number of translations required by a program during its lifetime is large enough that the page tables must be stored in main memory. Due to translation, a virtual memory reference necessitates multiple accesses to physical memory, multiplying the cost of an ordinary memory reference by a factor depending on the page table format. To cut the costs associated with translation, VM implementations take advantage of the principle of locality [denning71] by storing recent translations in a cache called the Translation Lookaside Buffer (TLB) [casep78,smith82,henessny90]. The amount of memory that can be translated by this cache is referred to as the "TLB reach" and depends on the size of the page and the number of TLB entries. Inevitably, a percentage of a program's execution time is spent accessing the TLB and servicing TLB misses.

The amount of time spent translating addresses depends on the workload as the access pattern determines if the TLB reach is sufficient to store all translations needed by the application. On a miss, the exact cost depends on whether the information necessary to translate the address is in the CPU cache or not. To work out the amount of time spent servicing the TLB misses, there are some simple formulas:

Cycles_tlbhit = TLBHitRate * TLBHitPenalty

Cycles_tlbmiss_cache = TLBMissRate_cache * TLBMissPenalty_cache

Cycles_tlbmiss_full = TLBMissRate_full * TLBMissPenalty_full

TLBMissCycles = Cycles_tlbmiss_cache + Cycles_tlbmiss_full

TLBMissTime = TLBMissCycles / ClockRate

If the TLB miss time is a large percentage of overall program execution, then time should be invested in reducing the miss rate to achieve better performance. One means of achieving this is to translate addresses in larger units than the base page size, as supported by many modern processors.
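
As a quick worked example of these formulas - the numbers below are invented for illustration; the tlbmiss_cost.sh utility described later in this article can measure real ones:

    /* Invented figures: 1% of references miss the TLB but find the
     * page table entry in cache (30 cycles); 0.2% require a full
     * walk to memory (300 cycles); the clock runs at 1GHz. */
    #include <stdio.h>

    int main(void)
    {
        double miss_rate_cache    = 0.01;
        double miss_penalty_cache = 30;     /* cycles */
        double miss_rate_full     = 0.002;
        double miss_penalty_full  = 300;    /* cycles */
        double clock_rate         = 1e9;    /* 1GHz */

        double miss_cycles = miss_rate_cache * miss_penalty_cache +
                             miss_rate_full * miss_penalty_full;

        /* prints 0.90 cycles and, at 1GHz, 0.90ns per reference */
        printf("cycles lost per reference: %.2f\n", miss_cycles);
        printf("time lost per reference: %.2f ns\n",
               miss_cycles / clock_rate * 1e9);
        return 0;
    }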

Using more than one page size was identified in the 1990s as one means of reducing the time spent servicing TLB misses by increasing TLB reach. The benefits of huge pages are twofold. The obvious performance gain is from fewer translations requiring fewer cycles. A less obvious benefit is that address translation information is typically stored in the L2 cache. With huge pages, more cache space is available for application data, which means that fewer cycles are spent accessing main memory. Broadly speaking, database workloads will gain about 2-7% performance using huge pages whereas scientific workloads can range between 1% and 45%.

Huge pages are not a universal gain, so transparent support for huge pages is limited in mainstream operating systems. On some TLB implementations, there may be different numbers of entries for small and huge pages. If the CPU supports a smaller number of TLB entries for huge pages, it is possible that huge pages will be slower if the workload's reference pattern is very sparse, making only a small number of references per huge page. There may also be architectural limitations on where in the virtual address space huge pages can be used.

Many modern operating systems, including Linux, support huge pages in a more explicit fashion, although this does not necessarily mandate application change. Linux has had support for huge pages since around 2003, when they were mainly used for large shared memory segments in database servers such as Oracle and DB2. Early support required application modification, which was considered by some to be a major problem. To compound the difficulties, tuning a Linux system to use huge pages was perceived to be a difficult task. There have been significant improvements made over the years to huge page support in Linux and, as this article will show, using huge pages today can be a relatively painless exercise that involves no source modification.

This first article begins by installing some huge-page-related utilities and support libraries that make tuning and using huge pages a relatively painless exercise. It then covers the basics of how huge pages behave under Linux and some details of concern on NUMA. The second article covers the different interfaces to huge pages that exist in Linux. In the third article, the different considerations to make when tuning the system are examined as well as how to monitor huge-page-related activities in the system. The fourth article shows how easily benchmarks for different types of application can use huge pages without source modification. For the very curious, some in-depth details on TLBs and measuring the cost within an application are discussed before concluding.

1 Huge Page Utilities and Support Libraries

There are a number of support utilities and a library packaged collectively as libhugetlbfs. Distributions may have packages, but this article assumes that libhugetlbfs 2.7 is installed. The latest version can always be cloned from git using the following instructions:

  $ git clone git://libhugetlbfs.git.sourceforge.net/gitroot/libhugetlbfs/libhugetlbfs
  $ cd libhugetlbfs
  $ git checkout -b next origin/next
  $ make PREFIX=/usr/local

There is an install target that installs the library and all support utilities but there are install-bin, install-stat and install-man targets available in the event the existing library should be preserved during installation.

The library provides support for automatically backing text, data, heap and shared memory segments with huge pages. In addition, this package also provides a programming API and manual pages. The behaviour of the library is controlled by environment variables (as described in the libhugetlbfs.7 manual page) with a launcher utility hugectl that knows how to configure almost all of the variables. hugeadm, hugeedit and pagesize provide information about the system and provide support to system administration. tlbmiss_cost.sh automatically calculates the average cost of a TLB miss. cpupcstat and oprofile_start.sh provide help with monitoring the current behaviour of the system. Manual pages are available describing in further detail each utility.

2 Huge Page Fault Behaviour

In the following articles, there will be discussions on how different types of memory regions can be created and backed with huge pages. One important point common to them all is how huge pages are faulted and when they are allocated. Further, there are important differences between shared and private mappings, depending on the exact kernel version used.

In the initial support for huge pages on Linux, huge pages were faulted at the same time as mmap() was called. This guaranteed that all references would succeed for shared mappings once mmap() returned successfully. Private mappings were safe until fork() was called; once it was, it was important that the child call exec() as soon as possible, or that the huge page mappings be marked MADV_DONTFORK with madvise() in advance. Otherwise, a Copy-On-Write (COW) fault could result in application failure in either the parent or the child in the event of allocation failure.

Pre-faulting pages drastically increases the cost of mmap() and can perform sub-optimally on NUMA. Since 2.6.18, huge pages have been faulted in like normal mappings, when a page is first referenced. To guarantee that faults would succeed, huge pages were reserved at the time a shared mapping was created, but private mappings made no reservations. This is unfortunate, as it means an application can fail without fork() being called. libhugetlbfs handles the private mapping problem on old kernels by using readv() to make sure the mapping is safe to access, but this approach is less than ideal.

Since 2.6.29, reservations are made for both shared and private mappings. Shared mappings are guaranteed to successfully fault regardless of what process accesses the mapping.

For private mappings, the number of child processes is indeterminable, so only the process that creates the mapping with mmap() is guaranteed to fault successfully. When that process fork()s, two processes are now accessing the same pages. If the child performs COW, an attempt will be made to allocate a new page. If the allocation succeeds, the fault successfully completes; if it fails, the child gets terminated with a message logged to the kernel log noting that there were insufficient huge pages. If it is the parent process that performs COW, an attempt will also be made to allocate a huge page. In the event that allocation fails, the child's pages are unmapped and the event recorded. The parent successfully completes the fault, but if the child accesses the unmapped page, it will be terminated.
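
The precautions described in this section can be sketched in a few lines of C. The hugetlbfs mount point below is an assumption, error handling is minimal, and LENGTH must be a multiple of the system's huge page size:

    #define _GNU_SOURCE            /* for MADV_DONTFORK on some libcs */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define LENGTH (4UL * 1024 * 1024)

    int main(void)
    {
        void *p;
        int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* keep a later fork() from setting up a COW situation that
         * might fail to find a huge page */
        if (madvise(p, LENGTH, MADV_DONTFORK) != 0)
            perror("madvise");

        /* on 2.6.18 and later kernels, this first touch is what
         * actually allocates the huge page */
        *(char *)p = 1;

        munmap(p, LENGTH);
        close(fd);
        return 0;
    }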

3 Huge Pages and Swap

There is no support for the paging of huge pages to backing storage.

4 Huge Pages and NUMA

On NUMA, memory can be local or remote to the CPU, with significant penalty incurred for remote access. By default, Linux uses a node-local policy for the allocation of memory at page fault time. This policy applies to both base pages and huge pages. This leads to an important consideration while implementing a parallel workload.

The thread processing some data should be the same thread that caused the original page fault for that data. A general anti-pattern on NUMA is when a parent thread sets up and initialises all the workload's memory areas and then creates threads to process the data. On a NUMA system this can result in some of the worker threads being on CPUs remote with respect to the memory they will access. While this applies to all NUMA systems regardless of page size, the effect can be pronounced on systems where the split between worker threads is in the middle of a huge page, incurring more remote accesses than might otherwise have occurred.

This scenario may occur, for example, when using huge pages with OpenMP, because OpenMP does not necessarily divide its data on page boundaries. This could lead to problems with base pages too, but the problem is more likely with huge pages because a single huge page covers more data than a base page, making it more likely that any given huge page covers data to be processed by different threads. Consider the following scenario: the first thread to touch a page will fault the full page's data into memory local to the CPU on which that thread is running. When the data is not split on huge-page-aligned boundaries, such a thread will fault in its own data and perhaps also some data that is to be processed by another thread, because the two threads' data fall within the range of the same huge page. The second thread will fault the rest of its data into local memory, but some of its accesses will still be remote. This problem manifests as large standard deviations in performance when doing multiple runs of the same workload with the same input data. Profiling in such a case may show more cross-node accesses with huge pages than with base pages. In extreme circumstances, the performance with huge pages may even be worse than with base pages. For this reason it is important to consider on what boundary data is split when using huge pages on NUMA systems.

One work around for this instance of the general problem is to use MPI in combination with OpenMP. The use of MPI allows division of the workload with one MPI process per NUMA node. Each MPI process is bound to the list of CPUs local to a node. Parallelisation within the node is achieved using OpenMP, thus alleviating the issue of remote access.
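
Within a single process, the usual way to honor the first-touch behaviour is to initialise data with the same thread layout that will later process it. Here is a minimal OpenMP sketch of that pattern (assuming a compiler flag along the lines of gcc's -fopenmp):

    #include <stdlib.h>

    #define N (16L * 1024 * 1024)

    int main(void)
    {
        double *data = malloc(N * sizeof(double));
        long i;

        if (data == NULL)
            return 1;

        /* The anti-pattern would be a plain serial loop here.  Using
         * the same static schedule for initialisation as for the real
         * work means each page is first touched - and thus allocated
         * node-locally - by the thread that will use it, huge-page
         * boundary effects aside. */
        #pragma omp parallel for schedule(static)
        for (i = 0; i < N; i++)
            data[i] = 0.0;

        #pragma omp parallel for schedule(static)
        for (i = 0; i < N; i++)
            data[i] = data[i] * 2.0 + 1.0;  /* stand-in for real work */

        free(data);
        return 0;
    }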

5 Summary

In this article, the background to huge pages was introduced, along with the performance benefits they can bring and some basics of how huge pages behave on Linux. The next article (to appear in the near future) discusses the interfaces used to access huge pages.


Details of publications referenced in these articles can be found in the bibliography at the end of Part 5.

This material is based upon work supported by the Defense Advanced Research Projects Agency under its Agreement No. HR0011-07-9-0002. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Defense Advanced Research Projects Agency.

Comments (18 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 2.6.33-rc8 ?

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Virtualization and containers

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

News and Editorials

"Easy" is in the eye of the beholder

By Jake Edge
February 17, 2010

There is often a stark difference between how developers and users see problems and their solutions. A recent thread on the opensuse-project mailing list highlights exactly that difference in a discussion on providing a way for users to easily create a USB stick installation image. What developers think of as easy may not match their users' expectations.

Clayton (aka smaug42) posted a request to the list: "When openSUSE 11.3 is released is there any possibility that we can provide a 'easy' way to create a full install from USB sticks?" The request was met with a number of solutions along the lines of Rupert Horstkötter's:

doing a full installation from USB already is damn easy with oS 11.2. Just dump the hybrid iso (LiveCD) onto USB media and boot from that.
    dd if=image.iso of=/dev/sdX bs=32k

For a developer, or advanced user, solutions of that kind are fairly reasonable, but, as Clayton pointed out, it doesn't solve the problem for other users: "I don't think that dd if=openSUSE-11.2-KDE4-LiveCD-i686.iso of=/dev/sdX bs=4M;sync is 'easy' (I can do it, but my mother for example cannot deal with that on her machine)" In addition, there are some concerns associated with that method as Carlos E. R. notes:

dd is a dangerous tool. You get the device name wrong and you might destroy your entire hard disk content. And it is easy to get the name wrong: in my computer, device names change from boot to boot, the moment I plug an external usb disk.

There are cross-platform GUI tools like UNetbootin that will assist users in creating a bootable USB stick, but they aren't necessarily well-known—or installed by default. What Clayton and others are looking for is something that is integrated into the distribution:

Ubuntu for example has a "create a bootable USB installer" thing built into the system menu now.... we have nothing but.. "use the command line, it's easy" response :-( which is meaningless to the average user.

That led to a suggestion that was much more in line with what is easy for a regular user. Cornelius Schumacher mentioned the imagewriter tool, which is part of the KIWI project. Further investigation found that it solved most of the problem, though there were a few issues that needed to be dealt with—starting with installing it by default for desktops.

So it seems like openSUSE 11.3 (or a subsequent version) will add an easy mechanism for users to create their USB sticks. That is obviously a good thing for users. It reflects a view toward making desktop Linux more accessible to those who are not "computer geeks", which is something that Ubuntu has been pioneering for some time. Other distributions are getting on board with that as well, which requires developers and other power users to rethink how various things work.

Solving problems is what engineers do, but solving the right problem, in the right way, is something that requires a different mindset. As these kinds of discussions show, though, that mindset is starting to sink in. That bodes well for the existence of a "year of the Linux desktop"—some day.

Comments (11 posted)

New Releases

Announcing NetBSD 5.0.2

NetBSD 5.0.2 has been released. Click below for the full announcement or see the release notes for additional information.

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Bits from the Stable Release Team

Click below for a few bits from the Debian Stable release team. Topics include Updating your package in stable, proposed-updates, oldstable, Appointment of a new Stable Release Manager, and New blood wanted.

Full Story (comments: none)

Debian press team updates

Steve McIntyre has announced that Alexander and Meike Reichle-Schmehl have been officially added to the Debian press team.

Full Story (comments: none)

Fedora

Fedora 13 and rawhide diverge

The much-anticipated split between the Fedora "Rawhide" development repository and the stabilizing Fedora 13 repository has happened at last. That means that people continuing to follow Rawhide should fasten their seat belts and update their backups in anticipation of a flood of packages intended for Fedora 14. All Rawhide users should check their yum configurations to be sure they are on the path they intend to follow.

Full Story (comments: none)

NVIDIA Has Gallium3D Support In Fedora 13 (Phoronix)

Phoronix reports that Fedora 13 will come with 3D support for the free Nouveau NVIDIA driver. "With Fedora 13, Red Hat is again shipping with the latest free software NVIDIA bits, which now includes 3D support. Thanks to an update to the mesa-dri-drivers-experimental package, there is 3D / OpenGL support enabled for NVIDIA hardware. This 3D support is coming from Nouveau's Gallium3D driver for most of the NVIDIA graphics hardware while there is also a classic Mesa driver for old NV hardware that recently came about."

Comments (27 posted)

New paths for Fedora development

Jesse Keating covers some repository changes as part of the No Frozen Rawhide initiative. "As part of the No Frozen Rawhide initiative, a couple new paths are showing up on our public mirrors. Previously rawhide was published to pub/fedora/linux/development/. In the very near future that path will change to pub/fedora/linux/development/rawhide/. At the same time, a new path will appear, pub/fedora/linux/development/13/. This path will be where the Fedora 13 stabilization happens as we work toward releasing Fedora 13. Rawhide will move on and start seeing changes more appropriate for Fedora 14 and beyond."

Full Story (comments: none)

Fedora Board Recap 2010-02-11

Click below for a recap of the February 11, 2010 meeting of the Fedora Advisory Board. Topics include Improved metrics, TLA for Fedora Turkiye, Strategic working group, Importance of strategy, Different default offering, and No Frozen Rawhide.

Full Story (comments: none)

Board Strategic Working Group Meeting Recap 2010-02-15

Click below for a recap of the February 15, 2010 meeting of the Fedora Strategic Working Group. Topics include Spins and What is Fedora the Distribution?.

Full Story (comments: none)

Gentoo Linux

Gentoo Foundation Trustees 2010 election

The Gentoo Foundation has opened an election to fill the 3 seats in the Trustees that have reached the end of their 2 year term. Nominations will be open until March 6, 2010.

Full Story (comments: none)

Summary of the Gentoo council meeting of February 8th, 2010

Click below for a summary of the February 8, 2010 meeting of the Gentoo Council. Topics include GLEPs 58 to 61 and VDB discussion.

Full Story (comments: none)

Mandriva Linux

Noteworthy Mandriva Cooker changes 1 February - 14 February 2010

Frederic Himpe covers some noteworthy changes in Mandriva Cooker (development branch). "KDE has been updated to final version 4.4.0. New features since KDE 4.3 include integrated desktop search in Dolphin, a new Plasma desktop interface optimized for netbooks, Palapelli (a jigsaw puzzle game), Cantor (a scientific maths application) and many others."

Comments (none posted)

Ubuntu family

Ubuntu Global Jam

There will be an Ubuntu Global Jam on March 26-28, 2010. See the announcement for details. "What is Ubuntu Global Jam? The Ubuntu Global Jam is an online and in person event that takes place all across the world. People get together with the interest of making Ubuntu better, while having a good time socializing with other people near you who have the same interest and passion about Ubuntu as you do."

Comments (none posted)

New Distributions

Introducing the Live Hacking CD

The Live Hacking CD is a new Linux distribution packed with tools and utilities for ethical hacking, penetration testing and countermeasure verification. "Based on Ubuntu this 'Live CD' runs directly from the CD and doesn't require installation on your hard-drive. Once booted you can use the included tools to test, check and ethically hack your own network to make sure that it is secure from outside intruders."

Full Story (comments: none)

Element

Element is an Ubuntu-based operating system for Home Theater or Media Center Personal Computers designed to be connected to your HDTV. Element comes with the software you need to manage your music, videos, photos, and internet media. Also included are a variety of applications that provide many of the same functions as your desktop PC, from web browsing to instant messaging and playing games. Element 1.0 is available for download now.

Comments (none posted)

Distribution Newsletters

Arch Linux Magazine, February 2010

The February 2010 edition of Arch Linux Magazine is out. Inside you'll find news from Devland, the schwag store, community contributions, plus the feature articles: On Persistent Devices, Gimp Grunge, Motorcycle With A Twist, and more.

Comments (none posted)

DistroWatch Weekly, Issue 341

The DistroWatch Weekly for February 15, 2010 is out. "It's been a fun and exciting week in the Linux world with things like Jeremy Garcia's Linuxquestions.org Members Choice Awards and the announcement-opps-not-announcement of RMS GNU/Linux-libre distribution hitting the Webwaves. Mandriva won an impressive major deployment contract and Debian Squeeze is running late. Linux Mint released their community distributions for KDE64 and Fluxbox. I updated my stable and yummy Mandriva 2010 with the newly released KDE 4.4 and give one of my favorite Linux tips. Happy reading!"

Comments (none posted)

The Mint Newsletter - issue 100

This issue of the Mint Newsletter covers the releases of Mint 8 Fluxbox, KDE64 and KDE, and several other topics.

Comments (none posted)

openSUSE Weekly News/110

This issue of the openSUSE Weekly News covers: openSUSE News: Call for Volunteers in the German Wiki; Duncan Mac-Vicar Prett: You don't need Kopete Facebook plugin anymore; KDE SC 4.4 in the openSUSE Build Service; How to submit a Story to the openSUSE Weekly News?; h-online/Thorsten Leemhuis: Kernel Log: Coming in 2.6.33 (Part 4) - Architecture and virtualisation; and more.

Comments (none posted)

Ubuntu Weekly Newsletter #180

The Ubuntu Weekly Newsletter for February 13, 2010 is out. "In this issue we cover: Ubuntu Opportunistic Developer Week: Call For Participation, Interview With Jono by Joe Barker, Interview with Dustin Kirkland, Ubuntu Core Developer about encryption in Ubuntu, Upcoming Ubuntu Global Jam and your Loco Team, Ubuntu Honduras Loco Team at the T3 conference, Call for feedback on preferred desktop fonts, and much, much more!"

Full Story (comments: none)

Page editor: Rebecca Sobol

Development

Karma targets easier creation of educational software

February 17, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

In January 2009, Bryan Berry proposed an "opinionated" activity framework that would make it easier to create interactive educational materials. Berry, the technology director for Open Learning Exchange (OLE) Nepal, found the One Laptop Per Child (OLPC) PyGTK activity framework lacking and saw a need to accelerate development by lowering the barrier of entry to creating learning activities. The result is Karma, an open source JavaScript library and framework that builds on standard Web technologies.

[Karma Introduction Page]

Rather than emphasizing integration with Sugar, Berry suggested embracing the development tools most widely used in developing countries - namely Web development tools - instead of platform-specific tools like PyGTK or other toolkits popular with educational projects. Berry also decided to avoid proprietary technologies like Flash that are widely used in creating educational content.

As a result, the Karma framework is based on JavaScript, HTML5, and SVG. This means that Karma only requires an HTML5-capable browser like Firefox or Google Chrome for users. Developers only need a text editor and Web browser to get started, and free tools like Inkscape to create graphics. Having Git also helps, as the framework is hosted on Gitorious. Eventually, the project should also support storing a student's work via Sugar's Journal and provide collaboration through the Telepathy real-time messaging framework.

The other consideration is resource constraints. Since Karma is being targeted at OLPC machines, the activities have to run on low-power machines. OLPC XO-1 laptops come with AMD Geode 433MHz (yes, MHz) CPUs with only 256MB of RAM. The lessons should run within a screen resolution of 1024x768, minus browser UI, and can feature images in JPG, PNG, or SVG. Developers can also include sound in Ogg Vorbis, but video is not yet supported (though it is on the roadmap).

[Karma Occupations Chart]

Karma and lessons are being worked on by SugarLabs and OLE Nepal. OLE Nepal is currently in the second phase of an OLPC pilot test, which has rolled out OLPC machines to 26 schools over six districts. As part of the pilot, OLE Nepal had written 60 lessons in Squeak (an open source version of Smalltalk) which are now being converted to Karma. The lessons are aimed at kids in elementary school, and include math, vocabulary, and geography lessons. According to the most recent meeting notes the group has managed to convert all but 12 of the 60 lessons from Squeak to Karma.

Code from the Karma project is under the MIT license. Lessons and content are licensed under the Creative Commons Attribution Share Alike license, which allows sharing and remixing of content so long as attribution requirements are met and requires distributors to pass along remixes under the same license.

[Karma Plants Chart]

The most recent release came out in early January. The release debuted the Karma API that covers working with audio, images, and the canvas, and a bundle for the XO. For those interested in seeing the fruits of Karma without a code checkout and without an XO at their disposal, the project has several demos on the front page of the Karma site that run just fine in a standard browser.

The Karma web site is a bit disorganized; it spreads information across the main page, sub-pages hosted on the Karma Education blog, SugarLabs on Gitorious, and discussions on Google Groups. Once the project is farther along, some work on making it easier to get the tools and create a setup without scouring so many different pages will be a real help to developers.

The best way to get started currently is to do a quick Git checkout of the mainline Karma repository and follow Berry's tutorials. For developers familiar with JavaScript and Web development, it shouldn't be difficult at all to start developing lessons. For educators who aren't familiar with Web development, it will be a bit more difficult, though probably easier than PyGTK or other development frameworks.

The roadmap has 0.3 being released around March 31st. This release is slated to have full i18n support, several lessons that have been translated into three languages each, the Chakra browsing layout — which is a template for designing lessons — and a Narwhal build script that will create a bundle with all of the lessons under the Chakra layout.

Want to help move the project forward? The Karma team is actively seeking developers to help out. Discussions are hosted on the Karma.js Google Group, and the developers hang out on Freenode in #sugar and #olenepal, though #olenepal was empty when we checked earlier this week.

The Karma project seems to be on a good track for providing an interactive educational framework. A standard bundle for educational materials that runs in any W3C-compliant browser should be useful not only for students in developing countries using OLPCs, but also suitable for educational use around the world.

Comments (none posted)

System Applications

Audio Projects

JACK 1.9.5 released

Version 1.9.5 of the JACK Audio Connection Kit has been announced. "Continuing the JACK2 series. - Dynamic choice of maximum port number. - More robust sample rate change handling code in JackCoreAudioDriver. - Devin Anderson patch for Jack FFADO driver issues with lost MIDI bytes between periods (and more). - Fix port_rename callback : now both old name and new name are given as parameters. - Special code in JackCoreAudio driver to handle completely buggy Digidesign CoreAudio user-land driver. - Ensure that client-side message buffer thread calls thread_init callback if/when it is set by the client (backport of JACK1 rev 3838). - Check dynamic port-max value. Fix JackCoreMidiDriver::ReadProcAux when ring buffer is full (thanks Devin Anderson)."

Comments (none posted)

Device Drivers

libshcodecs 1.0.0 released

Version 1.0.0 of libshcodecs has been announced; it includes code cleanup and documentation work. "libshcodecs is a library for controlling SH-Mobile hardware codecs. The [SH-Mobile] processor series includes a hardware video processing unit that supports MPEG-4 and H.264 encoding and decoding. libshcodecs is available under the terms of the GNU LGPL."

Full Story (comments: none)

Telecom

Harald Welte: In six weeks from bare hardware to receiving BCCHs

Harald Welte writes about progress in creating an open GSM mobile telephone protocol implementation on his blog. "So, just to be clear on this: Neither OpenEZX, nor gnufiish nor Openmoko were ever about writing Free Software for the GSM baseband processor, i.e. the beast that exchanges messages with the actual GSM operator network. But this is what we're working on right now. [...] It's about time, don't you agree? after 19 years of only proprietary software on the baseband chips in billions of phones, it is more than time for bringing the shining light of Freedom into this area of computing."

Comments (8 posted)

Web Site Development

moin 1.8.7 released - important security and bug fixes

Version 1.8.7 of the moin wiki engine has been announced. "MoinMoin 1.8.7 is a security bug fix release. Please update as soon as possible. See http://moinmo.in/MoinMoinDownload for the release archive and the change log."

Full Story (comments: none)

Nagare web framework 0.3.0 released

Version 0.3.0 of the Nagare web framework has been announced; it includes a number of new capabilities. "Nagare is a components based framework: a Nagare application is a composition of interacting components each one with its own state and workflow kept on the server. Each component can have one or several views that are composed to generate the final web page. This enables the developers to reuse or write highly reusable components easily and quickly."

Full Story (comments: none)

Desktop Applications

Animation Software

The Morevna Project: Anime with Synfig and Blender (Free Software Magazine)

Free Software Magazine takes a look at the Morevna Project, which is creating an animation based on a Russian folktale using free software tools. "From a more selfish perspective: this is a great opportunity to learn more about animation. The workflow this group has already created is already making huge bounds in defining paths for free-software-based animation production. Not only is this approach free as in freedom, it’s also free as in beer: anyone who’s worked with proprietary animation tools should realize this is worth a lot in itself. [...] From an advocacy perspective: engaging, fun, high profile free culture projects are amazingly good marketing, not just for free culture (the film will be released under the Creative Commons Attribution 3.0 license), but also for free software (everything used to make it is free software: Synfig, Blender, Inkscape, Gimp, Pencil, and so on)." (Thanks to Paul Wise).

Comments (3 posted)

Desktop Environments

GNOME 2.29.90 development release is out

Development release 2.29.90 of the GNOME desktop environment has been announced. "This is the first beta release towards 2.30, which will happen in March; time flies. Please try this release and report your findings, so that we can make 2.30 smooth and polished. Please note that this milestone marks the beginning of the UI freeze."

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at gnomefiles.org.

Comments (none posted)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at kde-apps.org.

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Electronics

Kicad 2010-02-09-RC1 released

Version 2010-02-09-RC1 of Kicad, a circuit board CAD application, has been announced. Changes include: "All: Lot of changes, mainly in Pcbnew. Doc updated (French and English). Pcbnew: Lot of enhancements. Support of Netclasses (Please (re)read the on line documentation). Better DRC."

Comments (none posted)

Mail Clients

Sylpheed 3.0beta8 released

Development version 3.0beta8 of the Sylpheed mail client is available. Changes include: "* The new filter match type 'is in addressbook' was added. This can be used from filtering, query search and quick search. * The new account setup dialog was implemented. It also supports easy Gmail setup. * The address completion was modified. * The spell-checking and PGP settings are preserved for draft messages now. * The crash problem when trying to check PGP signatures while GnuPG was not available was fixed."

Comments (none posted)

Music Applications

MMA 1.5c now up and testing

Version 1.5c of MMA has been announced. "A development snapshot, version 1.5c, of MMA--Musical MIDI Accompaniment is available for downloading. Included in this release: - A new track, the Plectrum. This can generate a realistic MIDI guitar. Getting a realistic guitar sound using MIDI has been notoriously difficult as calculating the notes in each chord and strumming patterns can be very tricky. Now the MMA PLECTRUM pattern takes care of most of this for you so all you have to do is to enter the chords names and how when you want each string to be strummed or plucked."

Full Story (comments: none)

Rosegarden 10.02 released

Version 10.02 of Rosegarden, an audio and MIDI sequencer and musical notation editor, has been announced. "With this release, we finally bring an end to the long and difficult job of transforming Rosegarden from an obsolete KDE 3 application into a modern Qt 4 application. There was no precedent for an application following this upgrade path, and so we had to begin this process by writing our own custom porting tools. From there, we spent an entire year chipping away at an immense mountain of compiler errors before we could even get a glimpse to see if our new code was going to work. From that first peek until now swallowed the biggest part of a second year, digging into every dusty corner, and putting everything back in order."

Full Story (comments: none)

Office Applications

Gnumeric 1.10 is out

Version 1.10 of the Gnumeric spreadsheet has been released after nearly two years of development. New features include better ODF support, sheets that can be larger than 256 columns and 65,536 rows, improved graphs, better Excel compatibility, faster evaluation, a new ssgrep tool, and much more. Click below for the full release announcement.

Full Story (comments: 15)

Office Suites

OpenOffice.org 3.2 is available

Version 3.2 of the OpenOffice.org office suite has been announced. "At the start of its tenth anniversary year, and with over three hundred million downloads recorded in total, the OpenOffice.org Community today announced the release of the latest version of its personal productivity suite. OpenOffice.org 3.2 gets to work faster, offers new and improved functions, offers better compatibility with other office software, and fixes bugs and potential security vulnerabilities. In just over a year from launch, OpenOffice.org 3 had recorded over one hundred million downloads from the central download site alone, and the number continues to rise."

Full Story (comments: 2)

OpenOffice.org Newsletter

The February, 2010 edition of the OpenOffice.org Newsletter is out with the latest OO.o office suite articles and events.

Full Story (comments: none)

Video Applications

Announcing the dvd_menu_animator DVD authoring tool

Lawrence D'Oliveiro has launched the dvd_menu_animator project. "The idea is that you use Inkscape to do the main design of your menu, with all the design tools that that makes available. You also add information to the Inkscape drawing indicating the placement and names of the menu buttons. You then bring the drawing into DVD Menu Animator, assign additional colours for the highlighted and selected states of the buttons, then save the results in a form that can be fed to the spumux tool in the dvdauthor suite."

Full Story (comments: none)

Gnash 0.8.7 released

Version 0.8.7 of Gnash, a free Flash player, has been announced. "Improvements since the 0.8.6 release are: * Automatic and spontaneous screenshots support in all GUIs * Significant memory savings in parsing large XML trees and in some function calls * Enhancements in video streaming * Non-blocking load of bitmaps, movies, data * Refactoring to eliminate most static data and get closer to a re-entrant VM * Cygnal now supports multiple network connections, handling multiple video streams * Cygnal now supports plugins for server-side scripting in C/C++ * Improved packaging support for deb and rpm".

Full Story (comments: none)

Languages and Tools

Caml

Caml Weekly News

The February 16, 2010 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)

Perl

Parrot 2.1.0 released

Version 2.1.0 of Parrot, a virtual machine aimed at running all dynamic languages, has been announced. "- Core changes: + GC performance and encapsulation were greatly improved. + PMC freeze refactored. + More Makefile and build improvements. - API Changes: + The Array PMC was removed. + Several deprecated vtables were removed. + The OrderedHash PMC was substantially improved. - Platforms: + Packaging improvements on some operating systems. - Tools: + Some cases in pbc_merge are now handled. + Improvements were made to the dependency checker. + New tool nativecall.pir added."

Full Story (comments: none)

Python

Celery 1.0 released

Version 1.0 of Celery has been announced. "Celery is a task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on one or more worker servers. Tasks can execute asynchronously (in the background) or synchronously (wait until ready). Celery is already used in production to process millions of tasks a day. Celery was originally created for use with Django, but is now usable from any Python project."
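
The task model described above is easy to picture in code. Here is a minimal sketch, assuming the decorator-based API of the Celery 1.0 era; the add() task is illustrative and not taken from the announcement, and a configured message broker is presumed to be running:

    # Minimal sketch, assuming Celery 1.0's decorator API.
    from celery.decorators import task

    @task
    def add(x, y):
        # Executed on a worker server, not in the calling process.
        return x + y

    result = add.delay(2, 2)   # asynchronous: returns a handle immediately
    print(result.get())        # synchronous: block until the worker finishes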

Full Story (comments: none)

CodeInvestigator 0.22.0 released

Version 0.22.0 of CodeInvestigator has been announced. "CodeInvestigator 0.22.0 was released on Feb 13. I have changed the recording process to make it run faster. CodeInvestigator is a tracing tool for Python programs."

Full Story (comments: none)

unicode 0.9.4 released

Version 0.9.4 of unicode has been announced. "unicode is a simple python command line utility that displays properties for a given unicode character, or searches the unicode database for a given name. It was written with Linux in mind, but should work almost everywhere (including MS Windows and MacOSX); a UTF-8 console is recommended."
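
The sort of lookup the tool performs can be approximated with nothing but Python's standard unicodedata module; this sketch is illustrative only and is not the utility's own code:

    # Rough approximation of the utility's lookups, standard library only.
    import unicodedata

    ch = u"\u20ac"                           # the euro sign
    print(unicodedata.name(ch))              # character -> name: 'EURO SIGN'
    print(unicodedata.category(ch))          # general category: 'Sc'
    print("U+%04X" % ord(ch))                # code point: U+20AC
    print(unicodedata.lookup("EURO SIGN"))   # reverse lookup: name -> character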

Full Story (comments: none)

Tcl/Tk

Tcl-URL! - weekly Tcl news and links

The February 12, 2010 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.

Full Story (comments: none)

XML

pyxser 1.4.2r released

Version 1.4.2r of pyxser has been announced. "I'm pleased to announce pyxser-1.4.2r, a python extension which contains functions to serialize and deserialize Python Objects into XML. It is a model based serializer."
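
For a sense of how a model-based serializer of this kind gets used, here is a hypothetical sketch; the Person class is made up, and the keyword arguments to serialize() and unserialize() are assumptions rather than details confirmed by the announcement:

    # Hypothetical usage sketch; argument names are assumed, not verified.
    import pyxser

    class Person(object):
        def __init__(self, name):
            self.name = name

    xml = pyxser.serialize(obj=Person("Ada"), enc="utf-8")   # object -> XML
    back = pyxser.unserialize(obj=xml, enc="utf-8")          # XML -> object
    print(back.name)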

Full Story (comments: none)

Editors

Leo 4.7 rc1 released

Version 4.7 rc1 of Leo has been announced. "Leo 4.7 rc 1 fixes all known serious bugs in Leo; minor nits remain. Leo is a text editor, data organizer, project manager and much more."

Full Story (comments: none)

Test Suites

Linux Desktop Testing Project 1.7.2 released

Version 1.7.2 of the Linux Desktop Testing Project has been announced. "This release features a number of important breakthroughs in LDTP as well as in the field of Test Automation. This release note covers a brief introduction on LDTP followed by the list of new features and major bug fixes which make this new version of LDTP the best of the breed."

Full Story (comments: none)

Linux Desktop Testing Project 2.0.3 released

Version 2.0.3 of the Linux Desktop Testing Project has been announced. "Changes in this release: gettextvalue now always returns a unicode string, required to fix an automated test script in VMware Workstation; ooldtp compatibility with LDTPv1 was fixed, as reported by the Mago [1] team; a patch by James Tatum makes getallstates compatible with the hasstate function; bug b.g.o#608413 was fixed; a Firefox preference-accessing bug, reported by Aaron Yuan, was fixed".

Full Story (comments: none)

Version Control

Git 1.6.6.2 released

Version 1.6.6.2 of the Git distributed version control system has been announced; it includes numerous bug fixes and documentation work. "The latest maintenance release Git 1.6.6.2 is available at the usual places".

Full Story (comments: none)

Git 1.7.0 released

Version 1.7.0 of the Git distributed version control system has been announced. "The latest feature release Git 1.7.0 is available at the usual places".

Full Story (comments: 2)

Page editor: Forrest Cook

Announcements

Commercial announcements

Aava Mobile's "fully open" handset

Aava Mobile has announced the upcoming availability of what it claims to be "the world's first fully open mobile device." "Functioning Aava Mobile devices measure 64mm by 125mm and are only 11.7 millimeters thin, making them the world's thinnest x86-based smartphone devices. The reference design provides support for Linux-based Moblin 2.1 and Android OSs today, with plans to support MeeGo in the future." Some pictures of this new toy have been posted as well.

Comments (7 posted)

Ettus Research acquired by National Instruments

Ettus Research has announced its acquisition by National Instruments. "What does this mean for GNU Radio? Ettus Research will continue to support and contribute to GNU Radio, and the combination of GNU Radio software and USRP hardware will remain our core focus. The additional resources that a large company like NI can provide will allow us to focus even more energy on improving the overall capabilities of the system. Two of the core GNU Radio developers, Matt Ettus and Josh Blum, are employed by Ettus Research. In the future we will also likely be providing GNU Radio drivers for additional hardware from National Instruments. What does this mean for LabVIEW? The Universal Hardware Driver will allow us to produce high-quality, officially supported LabVIEW drivers for all of our hardware."

Full Story (comments: 1)

The Linux Box to market Ubuntu to US enterprise users

The Linux Box and Canonical Ltd. have announced a partnership. "As an official Canonical Silver Solution Provider Partner, The Linux Box will sell, install and support customized Ubuntu-based solutions to organizations running Linux systems. It will also provide businesses with large-scale migration deployment support and training services for cloud computing infrastructures and enterprise desktop alternatives."

Full Story (comments: none)

Moblin and Maemo to merge

Intel and Nokia have announced that the Moblin and Maemo projects will be merging into a single mobile platform called MeeGo, which, like Moblin, will be hosted at the Linux Foundation. "MeeGo blends the best of Maemo with the best of Moblin to create an open platform for multiple processor architectures. MeeGo builds on the capabilities of the Moblin core OS and its support for a wide range of device types and reference user experiences, combined with the momentum of Maemo in the mobile industry and the broadly adopted Qt application and UI framework for software developers."

See also: Quim Gil's post on the merger.

Comments (93 posted)

Articles of interest

Who Is Developing KVM Linux Virtualization? (ServerWatch)

Sean Michael Kerner covers KVM development. "Five years ago, the open source Xen hypervisor was the primary technology that big vendors like IBM and Red Hat were adopting and pushing. In 2010, that's no longer the case as the rival KVM effort is now getting the attention of both IBM and Red Hat, as well as many others in the Linux ecosystem."

Comments (21 posted)

SGI spins up Cyclone HPC cloud (The Register)

The Register reports on SGI's new Linux-based Cyclone HPC cloud offering. "Try it, then rent it or buy it. That's the new mantra from supercomputer maker Silicon Graphics this morning as it launches its own supercomputing on demand offering, dubbed Cyclone. If cloud computing means virtualized server instances, then technically speaking - as if SGI could speak any other way - the Cyclone service is not a cloud. But if cloud means buying server and storage capacity on demand to run preloaded applications or homegrown ones and only paying for what you use - what some of us still call utility computing - then Cyclone is a cloud."

Comments (none posted)

Staples Launches Business IT Services (InformationWeek)

InformationWeek reports that the office supply giant Staples is getting into the Linux support business. "With its Staples Technology Solutions offering, Staples envisions providing "one stop for IT solutions" in delivering products and services that act as an extension of in-house IT departments or even manage entire IT operations. Managed services range from on-site and remote service and desktop support for Apple, Microsoft, and Linux platforms, to supplying engineers with certifications from leaders like Cisco, Citrix, and Linux. Data center offerings include sub-floor cleaning and 24/7 data center emergency supply service."

Comments (1 posted)

Resources

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World (CACM)

The developers of the Coverity checker have published a lengthy article in the Communications of the ACM detailing the lessons they have learned. "No bug is too foolish to check for. Given enough code, developers will write almost anything you can think of. Further, completely foolish errors can be some of the most serious; it's difficult to be extravagantly nonsensical in a harmless way."
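
For a flavor of what a "foolish" error looks like in practice, consider this contrived Python example (not taken from the article) of the kind of internal inconsistency such checkers are good at flagging:

    # Contrived illustration, not from the article: the first line
    # dereferences conn, while the later None check implies the author
    # believed conn could be None; one of the two lines must be wrong.
    def close_connection(conn):
        conn.flush()         # use before check
        if conn is None:     # a checker flags this contradiction
            return
        conn.close()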

Comments (61 posted)

Blog Postings

Open source: dangerous to computing education? (opensource.com)

Over at opensource.com, Greg DeKoenigsberg looks at a blog posting from Mark Guzdial, chairman of the ACM education board. Guzdial argues that commercial software development is somehow better for students, with some rather poor arguments that DeKoenigsberg deconstructs: "First, let's talk about breadth of opportunity. Mark seems to assume that every student developer has the opportunity to engage in commercial development. This is demonstrably untrue. It may be true that an elite school like Georgia Tech provides these kinds of opportunities to most of their [computing] students — but what about everywhere else? For that matter, what about the kids at Georgia Tech who, for whatever reason, don't make the cut?"

Comments (75 posted)

Education and Certification

LinuxCertified Announces Linux System and Network Administration BootCamp

LinuxCertified has announced a new Linux System and Network Administration BootCamp. "LinuxCertified, Inc., a leading provider of Linux training, will offer a weekend Linux system administration bootcamp on February 27-28, 2010 in the South Bay (CA). This workshop is aimed at busy information technology professionals and is designed to cover the most important Linux administration areas."

Comments (none posted)

LPI at CeBIT 2010 in Hanover, Germany

The Linux Professional Institute will hold training at CeBIT 2010. "LPI Central Europe will host a full program of activities at CeBIT 2010 in Hanover, Germany. Open Source will be a top theme at this year's edition of one of the world's leading trade fairs for the ICT industry".

Full Story (comments: none)

Upcoming Events

Save the date: SciPy 2010 June 28 - July 3

SciPy 2010 will be held in Austin, TX on June 28 - July 3. "The annual US Scientific Computing with Python Conference, SciPy, has been held at Caltech since it began in 2001. While we always love an excuse to go to California, it's also important to make sure that we allow everyone an opportunity to attend the conference. So, as Jarrod Millman announced last fall, we'll begin rotating the conference location and hold the 2010 conference in Austin, Texas."

Full Story (comments: none)

Spanish DebConf 9-11 April, Coruña and Debian Work Session

There will be a Spanish DebConf April 9-11, 2010 in Coruña, Spain. "The event is primarily in Spanish and oriented to the Spanish community, but it is not limited to them."

Full Story (comments: none)

Events: February 25, 2010 to April 26, 2010

The following event listing is taken from the LWN.net Calendar.

February 17-25: PyCon 2010 (Atlanta, GA, USA)
February 27-28: The Debian/GNOME bug weekend (Online, Internet)
March 1-5: Global Ignite week (Online)
March 2-4: djangoski (Whistler, Canada)
March 2-5: FOSSGIS 2010 (Osnabrück, Germany)
March 2-6: CeBIT Open Source (Hannover, Germany)
March 5-6: Open Source Days 2010 (Copenhagen, Denmark)
March 7-10: Bossa Conference 2010 (Recife, Brazil)
March 13-19: DebCamp in Thailand (Khon Kaen, Thailand)
March 15-18: Cloud Connect 2010 (Santa Clara, CA, USA)
March 16-18: Salon Linux 2010 (Paris, France)
March 17-18: Commons, Users, Service Providers (Hannover, Germany)
March 19-21: Panama MiniDebConf 2010 (Panama City, Panama)
March 19-21: Libre Planet 2010 (Cambridge, MA, USA)
March 19-20: Flourish 2010 Open Source Conference (Chicago, IL, USA)
March 22-26: CanSecWest Vancouver 2010 (Vancouver, BC, Canada)
March 22: OpenClinica Global Conference 2010 (Bethesda, MD, USA)
March 23-25: UKUUG Spring 2010 Conference (Manchester, UK)
March 25-28: PostgreSQL Conference East 2010 (Philadelphia, PA, USA)
March 26-28: Ubuntu Global Jam (Online, World)
March 30-April 1: Where 2.0 Conference (San Jose, CA, USA)
April 9-11: Spanish DebConf (Coruña, Spain)
April 10: Texas Linux Fest (Austin, TX, USA)
April 12-15: MySQL Conference & Expo 2010 (Santa Clara, CA, USA)
April 12-14: Embedded Linux Conference (San Francisco, CA, USA)
April 14-16: Linux Foundation Collaboration Summit (San Francisco, USA)
April 14-16: Lustre User Group 2010 (Aptos, CA, USA)
April 16-17: R/Finance 2010 Conference - 2nd Annual (Chicago, IL, USA)
April 16: Drizzle Developer Day (Santa Clara, CA, USA)
April 23-25: FOSS Nigeria 2010 (Kano, Nigeria)
April 23-25: QuahogCon 2010 (Providence, RI, USA)
April 24-25: OSDC.TW 2010 (Taipei, Taiwan)
April 24-25: BarCamb 3 (Cambridge, UK)
April 24: Festival Latinoamericano de Instalación de Software Libre (many locations)
April 24-25: Fosscomm 2010 (Thessaloniki, Greece)
April 24-25: LinuxFest Northwest (Bellingham, WA, USA)
April 24: Open Knowledge Conference 2010 (London, UK)
April 24-26: First International Workshop on Free/Open Source Software Technologies (Riyadh, Saudi Arabia)
April 25-29: Interop Las Vegas (Las Vegas, NV, USA)

If your event does not appear here, please tell us about it.

Page editor: Forrest Cook


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds