
LWN.net Weekly Edition for February 18, 2010

Python and PostgreSQL

By Jonathan Corbet
February 17, 2010
As some LWN readers will know, this site is implemented with a combination of free technologies, including the Python language and the PostgreSQL relational database management system. Anybody who has tried to combine those two tools will have encountered the variety of modules out there intended to serve as the glue between them. It's the sort of variety that nobody wants, though: lots of options, none of which has the full support of the community or works as well as one might like. The good news is that this situation may not last a whole lot longer.

The conversation started when Bruce Momjian noted that the state of Python/PostgreSQL support was not as good as it could be. The PostgreSQL Python page and the Python PostgreSQL page agree on one thing: there are several adapters available, none of which is truly dominant, but many of which are seemingly unmaintained. How, he asked, is a developer to choose between them? Your editor, who has tried a number of them, could only nod in sympathy. Bruce requested:

What is really needed is for someone to take charge of one of these projects and make a best-of-breed Python driver that can gain general acceptance as our preferred driver. I feel Python is too important a language to be left in this state.

The purpose of a database adapter module is to make the capabilities of the database available to Python applications. To that end, it accepts SQL queries from the application, passes them to the database, and hands the results (if any) back to the application. Application writers like the idea of database independence; it holds the promise of being able to move easily to a different database should the need arise. To enable this independence, language development communities define standards for how database adapters should operate. The Python version of this standard is the Python Database API Specification, often called DB-API.
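For readers who have never used it, a minimal sketch of what DB-API code looks like appears below; it uses psycopg2 as the adapter, and the database name and "articles" table are hypothetical examples rather than anything from a real application.

    # A minimal DB-API 2.0 sketch; psycopg2 is one adapter implementing the spec.
    # The database name and the "articles" table are hypothetical examples.
    import psycopg2

    conn = psycopg2.connect("dbname=lwn user=web")
    cur = conn.cursor()
    # Parameters are passed separately from the SQL text so the adapter can
    # handle quoting; psycopg2 expects the "%s" placeholder style.
    cur.execute("SELECT title, posted FROM articles WHERE author = %s",
                ("corbet",))
    for title, posted in cur.fetchall():
        print(title, posted)
    cur.close()
    conn.close()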

One of the problems, as identified by Greg Smith, is that the DB-API fails to cover much of the needed functionality, meaning that each adapter ends up making its own (incompatible, naturally) extensions. One of your editor's favorite quirks is the specification of five different "styles" by which parameters can be passed into queries; the application is expected to support all five and use whichever one the adapter is prepared to accept that day. The end result of all this is that adapters tend to diverge from each other, portability between them is problematic, and none becomes the standard.
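To make that quirk concrete, here is one hypothetical query written under each of the five styles; a portable application ends up keying off the adapter's module-level paramstyle attribute to pick the right variant. The standard library's sqlite3 module stands in for a PostgreSQL adapter here, purely for illustration.

    # The same logical query under each of DB-API's five parameter styles.
    # An adapter advertises the style it accepts in its "paramstyle" attribute.
    QUERIES = {
        "qmark":    ("SELECT * FROM articles WHERE author = ?",          ("corbet",)),
        "numeric":  ("SELECT * FROM articles WHERE author = :1",         ("corbet",)),
        "named":    ("SELECT * FROM articles WHERE author = :author",    {"author": "corbet"}),
        "format":   ("SELECT * FROM articles WHERE author = %s",         ("corbet",)),
        "pyformat": ("SELECT * FROM articles WHERE author = %(author)s", {"author": "corbet"}),
    }

    import sqlite3  # a stand-in adapter; it happens to declare "qmark"
    sql, params = QUERIES[sqlite3.paramstyle]
    # cursor.execute(sql, params) would then use the form this adapter expects.

Carrying variations like these for every adapter is exactly the sort of busywork that a single dominant, well-specified adapter would eliminate.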

That said, there currently seem to be two serious competitors in this area:

  • Psycopg almost certainly has the widest support among Python applications. It is reasonably solid and performs well, but some potential users may have been daunted by the fact that its web page took the form of an anti-Trac rant for some time (it's still in the Google cache as of this writing).

  • PyGreSQL has been around for a long time; it predates the DB-API and still does not implement it completely. Development on the code has been slow for some time, and its performance is not as good as Psycopg's.

One might think that Psycopg would be the clear leader of the two, and it would be, except for one little problem: Psycopg is GPL-licensed with a bunch of exceptions. The PostgreSQL community feels fairly strongly about permissive licenses, to the point that a GPL-licensed adapter is seen as a deal breaker. So Greg lamented:

If everybody who decided they were going to write their own from scratch had decided to work on carefully and painfully refactoring and improving PyGreSQL instead, in an incremental way that grew its existing community along the way, we might have a BSD driver with enough features and users to be a valid competitor to psycopg2. But writing something shiny new from scratch is fun, while worrying about backward compatibility and implementing all the messy parts you need to really be complete on a project like this isn't, so instead we have a stack of not quite right drivers without any broad application support.

As a way toward a solution, Greg put together a Python PostgreSQL driver TODO page describing the issues with both Psycopg and PyGreSQL. For Psycopg, wishlist items included some testing, refactoring, and documentation work. The list for PyGreSQL is longer and more daunting. The conclusion found on that page is:

A relicensed Psycopg seems like the closest match to the overall goals of the PostgreSQL project, in terms of coding work needed both in the driver and on the application side (because so many apps already use this driver).

Authors of GPL-licensed code often tend not to react well to requests for relicensing. That can be true even in cases like a database adapter, which is normally a relatively small component in a much larger application. In this case, though, Psycopg hacker Federico Di Gregorio acknowledged that, perhaps, GPL wasn't the best license for this module. So, he has announced, the next Psycopg release will carry the LGPLv3 license (plus the obligatory exceptions involved in using libssl) instead. That is probably enough to tip the scales in that direction and, finally, lead to a situation where there is an obvious default choice for developers.

There will be, beyond doubt, no end of lessons from this episode on how to run a successful free software project. There is one which stands out, though: until well into this discussion, there had been little or no communication between the PostgreSQL development community and the people working on Python adapters. Given how tightly coupled the two efforts are, a lack of communication for years can only make the creation of top-quality adapters harder. Once the relevant developers started talking to each other, it only took a few days to find a path toward a satisfactory conclusion.


FOSDEM'10: distributions and downstream-upstream collaboration

February 17, 2010

This article was contributed by Koen Vervloesem

For the first time in its ten-year history, FOSDEM didn't organize individual developer rooms for each distribution, instead opting for a joint 'mini-conference' held in two cross-distribution developer rooms, with talks specifically targeting interoperability between distributions, governance, common issues that distributions face, and working with upstream projects. A couple of those talks piqued your author's interest.

Debian and Ubuntu

As a Debian and Ubuntu developer since 2006, Lucas Nussbaum knows the relationship between these two distributions inside and out. He has attended DebConf and Ubuntu Developer Summit, has friends in both communities, and is involved in improving collaboration between both projects. In his talk, he discussed the current state of affairs from his point of view and what could be done to improve matters.

Ubuntu has a lot of upstream projects, like Linux, X.org, GNOME, KDE, and so on, but it has one special upstream: Debian. Integrating a new project or a new release into Ubuntu regularly requires changes, such as toolchain changes and bug fixes. It is often not possible to do this work in Debian first. Lucas gave some statistics from Ubuntu Karmic Koala (9.10): 74% of the packages come directly and unmodified from Debian, 15% come from Debian but are modified with Ubuntu patches, and 7% come directly from upstream projects. (For those who are puzzled about why these numbers don't add up: the missing 4% covers cases where Ubuntu packages a newer upstream release than the one Debian has; such a package can be based on the Debian version or fully repackaged.)

Managing this divergence is not trivial. Keeping local changes in Ubuntu requires a lot of manpower, and the changes need to be merged again when Debian updates the package. This is already a strong incentive to push changes to Debian. Bug reports are the main vehicle for pushing changes, but this is where problems can start. Lucas summarized it neatly:

Ubuntu users who want to file a bug have a choice between three options. They can file a bug upstream, where they might get flamed; they can file a bug in Debian, where they are very likely to get flamed; or they can file a bug in Ubuntu's Launchpad, where they are very likely to get ignored.

There is already some collaboration on bugs today. For example, some bugs get filed in Debian by Ubuntu developers: about 250 to 400 bugs per Ubuntu release cycle, mostly upstreaming of Ubuntu patches. There is also a link to Ubuntu patches and bugs in Launchpad on the Debian Package Tracking System (PTS), although Lucas admitted that at the moment the data is imported using a fragile hack.

The second part of Lucas's talk was about his view of the current state of the relationship between Debian and Ubuntu. Historically, many Debian developers have been unhappy about Ubuntu, both because of the feeling that the distribution was "stolen" and because of some problems with Canonical employees that tend to reflect on Ubuntu as a whole. However, according to Lucas, things have improved considerably and many Debian developers see some good points in Ubuntu: Ubuntu brings a lot of new users to Linux (and to Debian derivatives), it also brings new developers to Debian, and it serves as a technological playground. Examples of the latter are dash as the default /bin/sh, boot improvements, GCC hardening flags, and so on. Having these things tested first in Ubuntu makes it much easier to import them into Debian later.

On the Ubuntu side, Lucas sees a culture where contributing to Debian is considered the right thing to do, and as a result many Ubuntu developers contribute to Debian. However, there is often not much to contribute back at the package level: many bug fixes are just workarounds. Also, Canonical is a company that contributes back when doing so benefits it, so Debian shouldn't expect many free gifts.

However, Ubuntu's rise also causes some problems for Debian, and Lucas called the most important one "the loss of relevance of Debian". Not only has the Debian user base (or at least market share) decreased, but, for many new users, Linux equals Ubuntu. Recent innovations have usually happened in Ubuntu, so even though Debian is now the basis of a major distribution, it becomes less relevant. Still, Lucas thinks collaborating with Ubuntu is the right thing to do for free software world domination, because Debian fights for important values and takes positions on technical and political issues such as the Firefox trademark dispute.

Lucas's proposal to make Debian relevant again, while still helping Ubuntu, is to behave like a good upstream and to communicate why Debian is better. The former means that collaboration with Ubuntu should be improved. Not only should there be more cross-distribution packaging teams; Lucas also suggested that Debian could help Ubuntu maintain Debian's packages, e.g. by notifying Ubuntu of important transitions and by triaging and fixing bugs directly in Launchpad when time permits. He also stressed that Debian should acknowledge high-quality work that is done in Ubuntu, which could then be imported into Debian: "Importing packages doesn't have to be one-way only."

According to Lucas, Debian fails at communicating that it is better; it sometimes even needs external people like Bradley Kuhn to do that communicating. Debian is a volunteer-based project where decisions are made in the open, and it has advocated the free software philosophy since 1993. In contrast, Ubuntu is a project controlled by Canonical, where decisions like Ubuntu One or the switch to Yahoo! as the default search engine are imposed. Moreover, Ubuntu advocates proprietary web services such as Ubuntu One, the installer recommends proprietary software, and there is the controversial copyright assignment required to contribute to Canonical projects.

Lucas went further and claimed that Debian is a better distribution because many package maintainers (e.g. of scientific software) are experts in their field, and the emphasis is on quality. In contrast, most of Ubuntu's packages (the 74% he mentioned before) are simply synchronized from Debian, often by maintainers with no real knowledge of the packages. The conclusion of his talk was that Ubuntu is a chance for Debian to get back to the center of the FLOSS ecosystem, but that the distribution should be more vocal about Ubuntu's issues.

How to be a good upstream

Petteri Räty, a member of the Gentoo Council, talked about how to be a good upstream. Or rather, he gave some "dos and don'ts" to bootstrap a discussion with the audience. The bottom line was: if a project is a good upstream, distributions need a minimal number of iterations to package the software. One ground rule that prevents a lot of problems is: "never change a once-released file" (without releasing a new version of the software, that is). If an upstream project violates this rule, bug reports no longer make sense, because upstream and downstream cannot know which version of the file the user is referring to. Another thing to watch for when releasing files is that a release should not build in debug mode by default. There should also be no -Werror in CFLAGS, as the project should respect the user's choice. Moreover, changes that are relevant for distributions should be documented in changelogs, Petteri stressed: "For example, say it explicitly when there's a security fix."

How the upstream project handles dependencies also has consequences for distributions. For example, it should link dependencies dynamically and use libtool in autoconf builds. It also should never bundle dependencies: if it were to include its own copy of zlib, for example, security problems in the library wouldn't get fixed when the distribution updates the system-wide zlib. And last but not least, the upstream project should allow downstreams to configure build support for optional dependencies.

Petteri also emphasized that the "release early, release often" software development philosophy is really important: that way, end users get the project's code faster, which means that the code is tested more quickly, by more people. Not all releases will end up in distributions, but the best ones (tested by distributions' developers and maintainers) will. It's also important to have consistent version numbers: going from 0.10 to 10.0 and then to 11.1 is not the way to go. The audience grinned knowingly when Petteri mentioned that a 4.0 version number should only be given to a stable release.

Working with GNOME upstream

As a GNOME and openSUSE contributor, Vincent Untz was the perfect speaker to lead a session about collaboration between GNOME upstream and downstream. This was a truly interactive session where the audience gave a lot of suggestions. For example, one downstream packager said that it would be nice to have the same predictable six-month schedule for GNOME applications, like Rhythmbox, as for the GNOME desktop environment. There is already a precedent: at the end of January, the Banshee music player developers announced that they will align their release schedule with GNOME's; Banshee 1.6 will be released together with GNOME 2.30. Another audience member found it inconvenient that new GNOME features sometimes depend on arbitrary upstream choices, such as X running on the first virtual terminal.

Big changes in GNOME upstream also have an impact on stability. Vincent pointed out the migration from GnomeVFS to GVFS in GNOME 2.22, which maybe happened too early. GDM 2.24 also had too many changes, to the point where many distributions still use GDM 2.20 now. The change from AT-SPI (Assistive Technology Service Provider Interface) built upon CORBA (Common Object Request Broker Architecture) to AT-SPI2 based on D-Bus is also a big change that will be a challenge for distributions. An example worth following, according to Vincent, is the move from HAL to DeviceKit in GNOME Power Manager: Richard Hughes maintained both branches, which was good for distributions that didn't want to migrate immediately. GNOME 3 will obviously also have a big impact. Vincent encouraged downstream distributions to tell GNOME which "old" libraries they would like to keep using for a while, so that upstream can keep maintaining them and give distributions some time.

With respect to patches, Vincent applauded Debian, which is working on a format where a patch carries information in its comments about where the patch has been sent upstream, whether it has been accepted but not yet released, and so on. Distributions that have an online patch tracker also help upstream maintainers; again it is Debian (with patch-tracker.debian.org) and Ubuntu (with patches.ubuntu.com) that are helpful here.

Packaging Perl modules

Gabor Szabo, who has been involved in the Perl community for 10 years, talked about packaging Perl and CPAN (Comprehensive Perl Archive Network) modules for distributions. The problem from the end-user's perspective is that many Perl applications are not packaged in the user's distribution, but are only available as a CPAN module that has to be built and installed in another way. The problem becomes even uglier when users want to install a Perl script that needs several CPAN modules that are not in the distribution's repository.

Gabor called this a major issue, and he gave some numbers to put it into perspective: CPAN has around 20,000 packages, while Ubuntu 9.10 packages about 1,900 of them, which is roughly 10%. The numbers are even worse for Ubuntu 8.04, a Long Term Support (LTS) release that is used on many web servers: it has about 1,200 CPAN packages, or roughly 6%. "Other distributions have roughly the same numbers, with FreeBSD having quite a bit more in their ports collection. Of course we don't need all CPAN modules packaged in distributions, but we definitely need a lot more than we have now."

So why do most distributions have such a low percentage of Perl packages? According to Gabor, the number one reason is that users just don't ask for more modules, maybe because many users are not used to talking to their distribution. Another obvious reason is that it's time consuming to package and maintain a Perl module. This issue could be solved by further automation and better integration of the packaging tools and the CPAN toolchain. "We should also catch license or dependency issues earlier. As far as I know, only Fedora and Debian have a dedicated Perl packaging mailing list: Fedora-perl-devel-list and debian-perl, respectively." But on the other hand, it's not the quantity that counts, and many modules are not worth packaging: "Therefore, the Perl community needs a better way to indicate the quality and the importance of a CPAN module, as a guideline for distributions. Importance can be measured in different ways: by popularity, by the number of external (non-CPAN) dependencies, by the number of packages depending on this package, and so on."

Apart from solving the usual occasional communication problems (upstream CPAN authors who don't respond or even disappear, patches and bugs reported downstream that don't always reach upstream, and so on), Gabor has some suggestions for both upstream and downstream. For example, distributions could supply more data directly to CPAN, such as the names and versions of the CPAN modules they include as packages, the list of bugs reported to the distribution, and the list of downstream patches. The Perl community could then gather this data and display it on one web site. CPAN itself could also be improved for better downstream integration. For example, there could be a "pass-through CPAN installation" where CPAN.pm would use the distribution's native package manager where possible. Using this installation type, CPAN could also report missing packages and gather statistics about these modules for distributions.

RPM packaging collaboration

Pavol Rusnak of the openSUSE Boosters Team shared his view on RPM packaging collaboration. The biggest issue here is that many RPM-based distributions use their own distribution-specific macros in RPM spec files. However, a couple of these have already been unified across distributions. For example, Fedora and openSUSE had initially defined completely different macros for Python packaging, but the Fedora macros have now been adopted in upstream RPM and openSUSE 11.2 is also using them. The make install differences have also been solved: while, at first, openSUSE, Mandriva, and Fedora used different macros, RPM upstream introduced the make_install macro, which is equivalent to make install DESTDIR=%{?buildroot}.

However, there are still a lot of differences that make porting RPM spec files a challenge. For example, handling desktop files happens differently in Fedora and Mandriva than in openSUSE: they have different BuildRequires dependencies, and the content of the install macro differs. Pavol suggested unifying this procedure and pushing things to RPM upstream if some macro is still needed. Ruby packaging also happens with different macros, and here too, Pavol suggests creating common macros upstream and using them consistently in distributions. "However, different distributions have a different mindset, so it's not always easy to find a solution that everyone likes." The same goes for parallel builds: Fedora starts them as make %{?_smp_mflags}, while openSUSE uses make %{?jobs:-j%jobs}. And so on.

Pavol ended his presentation with some ideas for future RPM development. For example, Panu Matilainen is working on file triggers, which will help packagers get rid of a large number of scriptlets in spec files, like calls to ldconfig, gtk-update-icon-cache, and so on. File triggers, which allow running scripts when a matching file has been added or removed, are already implemented in Mandriva. Pavol also suggests introducing two new scriptlets, %preup and %postup, which would be called when updating a package; the %preun, %postun, %pre, and %post scriptlets would not be run in that case. That way, package writers no longer have to write weird code such as if [ "$1" -eq "0" ] to detect whether an operation is an upgrade or an install. In the future, scriptlets will also get more information about the running transaction, which makes it possible to detect more precisely what is happening so that, for example, a package could convert a configuration file when upgrading from Apache 1.x to Apache 2.x.

Learning from each other

When it was announced that distributions wouldn't get their own developer rooms at FOSDEM 2010 but would meet together in the cross-distribution rooms, many people were skeptical. Some liked the idea of cross-distribution rooms but were disappointed to see the separate distribution-specific rooms go away. All in all, though, many visitors seemed to find the new concept a great idea, and there were good cross-distribution discussions at FOSDEM 2010, although in many of the talks there were not enough people from enough distributions to really get the ball rolling. Packaging and working with upstream are important activities where all distributions are confronted with the same issues and where they can learn from each other and share good ideas. Your author has heard rumors that a lot of developers didn't come to FOSDEM this year because their favorite distribution didn't get a dedicated room, which is a pity: being able to learn from other distributions would have made their own distribution even better.


MeeGo: the merger of Maemo and Moblin

February 16, 2010

This article was contributed by Nathan Willis

The mobile Linux world is about to get simpler, as Nokia's Maemo platform for handheld mobile devices and Intel's Moblin project for netbooks are merging. The combined "MeeGo" stack will still differentiate between device types at the "user experience" level, but will share the same system-level components and, hopefully, unite developer communities by offering a common base. The announcement was made on February 15th, with content and details continuing to roll out on the meego.com site.

Nokia started Maemo in 2005; it was first delivered on a series of WiFi-connected pocket tablets without cellular connectivity, eventually moving onto mobile phones with the 2009 launch of the N900. Moblin was launched in 2007 by Intel, targeting netbooks running on Intel Atom processors. In April of 2009, Intel signed governance of the project over to the non-profit Linux Foundation, but continued to guide its development. Moblin publicly released 2.0 "beta" code for netbooks in May, then previewed an update of the stack running on Atom-powered phones in September.

The mobile industry trade press, understandably, is reporting on the MeeGo announcement as a defensive move to counter the challenge posed by Google's Android (and, to a lesser extent, ChromeOS). Android has seen tremendous growth in the past year, with more than two dozen products now shipping from a variety of device manufacturers. But while Android may seem like a more direct challenge to Maemo, it is not limited to phones. Several products have been announced that edge further into the device space sought by Moblin, including netbooks, tablets, and e-book readers. Nor is Google the only mobile operating system vendor producing a Linux-based platform for portable devices; Palm, Samsung, and the multi-manufacturer consortium called the LiMo Foundation all produce competing offerings.

However, within the spectrum of mobile Linux operating systems, Moblin and Maemo were already the two projects with the greatest overlap in terms of technical design and governance structure. Android, ChromeOS, and Palm's webOS all use the Linux kernel and portions of the same software stack found on graphical desktops, but provide limited APIs for application deployment. While source code for upstream components is available for all Linux-based operating systems, the platforms still differ in terms of openness. Android and ChromeOS are freely available to outside platform developers, and accept patches and bug reports. The LiMo Platform includes components contributed by member handset makers, including some that are proprietary, and is governed by its member contributors. Palm's webOS and Samsung's newly-announced Bada are single-vendor products.

Maemo and Moblin both started with existing desktop Linux distributions: Maemo was originally based on Debian, Moblin on Fedora, although both incorporated technology from other distributions as well. Underneath, however, they ran strikingly similar middleware platforms. Both used X, GLib, D-Bus, Pango, Cairo, GStreamer, Evolution Data Server, PulseAudio, Mozilla's Gecko HTML rendering engine, Telepathy, ConnMan, and a host of other common utilities. GNOME's Dave Neary even commented on the similarity in response to concerns that there were "too many" mobile Linux platforms. In addition, both projects sought to make their platforms fully accessible to third-party developers, without the need to license an SDK or, in most cases, to code to different APIs. Both worked to develop active, open communities around the code base.

In retrospect, though, perhaps the clearest harbinger of the merger was oFono, the open telephony stack project launched jointly by Intel and Nokia in May of 2009. The closed-source telephony stack included in the N900 was a lightning rod for criticism from free software advocates, though Nokia justified its decision to write a new stack from scratch. Prior to Monday's announcement, however, Intel's involvement in oFono stood out as an oddity. Now it makes sense.

Merging, nuts and bolts

In spite of the similarities, Maemo and Moblin did have their differences, particularly in the choice of top-level toolkit. Moblin used GTK+ and Clutter as its preferred toolkits, while the latest Maemo release was in the process of switching over to Qt.

As posted on the MeeGo web site, the combined platform closely resembles the Moblin base, but with Qt as the interface toolkit. The architecture diagram provided is vague, listing components only as "Internet Services," "Media Services," "Data Management," and so forth, but according to additional information provided by MeeGo, most of the stack remains unchanged from Moblin 2.0.

Both Qt and GTK+/Clutter blocks are shown in the MeeGo architecture diagram, but the Qt block is three times larger, and the "getting started" documents in the developers' section of the site only address Qt. Still, the FAQ explicitly says that both toolkits will be supported, so the simplest interpretation of the diagram may just be that Nokia, as corporate owner of Qt, is expected to contribute a larger share of developer-time via its toolkit.

More interesting is the fact that the combined MeeGo platform will support at least two processor architectures, ARM and Atom, in addition to any others championed by the community. The hardware enabling process outlines what the MeeGo project has in mind, with platform maintainers for each architecture taking responsibility for the kernel, X, the bootloader, and other hardware-specific adaptations.

Different devices appear to diverge at the top of the stack, however, which MeeGo describes as the user experience (UX) level. The two UXes discussed on the site are the Handheld UX and the Netbook UX, which correspond neatly to the previous Maemo and Moblin product spaces. What is contained in each UX block is not as clear; each definitely includes the basic user interface and application suite, but according to the architecture diagram it also incorporates a UI framework. Intriguingly, several places on the MeeGo site mention other UXes, in-vehicle computers and connected televisions in particular; one possibility may be involvement from Genivi, a non-profit industry group working on "In-Vehicle Infotainment" systems.

In the past, the UX layer formed a minor point of contention in the Maemo community, because Nokia kept the source code for several of its top-level applications closed. While most accepted the situation, there were always some who objected to the presence of any closed components in the distribution. Nokia responded by providing a detailed list of the closed components, the reasons why each was not opened, and a process through which developers could request the opening of a particular component.

To see precisely what MeeGo devices will include as UX components, one will have to wait for products to ship. The first release of the MeeGo platform itself is slated for the second quarter of 2010, with products following later in the year. Nokia's Quim Gil says that the Maemo 6 release previously scheduled for 2010 will proceed as planned; Maemo will be rebranded as MeeGo, but will not incorporate any changes to the software.

Communities

Arguably a bigger challenge than producing a new mobile Linux distribution will be merging the existing Moblin and Maemo user and developer communities. Maemo, being the older of the two, has the more established community, with an active forum, a community council, and extensive documentation and processes for independent application developers to get their software tested and packaged for public release.

Moblin's community is smaller, centered around a pair of mailing lists, and it has a smaller garage of third-party applications. On the other hand, the majority of third-party Moblin applications are Moblin builds of existing desktop Linux applications; in contrast many Maemo projects are either standalone works or heavily-modified applications tailored for the handheld user experience.

Discussions on how to merge the communities have already started within both Moblin and Maemo. The MeeGo site has IRC and mailing list options in place already, but has not yet launched developer documentation or bug tracking. It does, however, outline the participation process for working on MeeGo itself, through the licensing policy and contribution guidelines.

Governance will be provided by a Technical Steering Group, headed by Imad Sousou from Intel and Valtteri Halla from Nokia, which will meet publicly every two weeks. There is no admission or membership process to become a MeeGo contributor, and contributing code will require no copyright assignment:

MeeGo project will neither require nor accept copyright assignment for code contributions. The principle behind this is on the one hand to avoid extra bureaucracy or other obstacles discouraging contributions. On the other hand the idea is to emphasize that contributors themselves carry the rights and responsibilities associated with their code. MeeGo is a common concern of its project community and all participants should represent themselves and continuously influence the result through their own contribution.

Code contributions are encouraged to take place rather in the upstream component projects than in MeeGo. Project focus is in integrating existing upstream components into a platform release rather than in new code development. Therefore the objective is to minimize MeeGo patch against upstream projects and to avoid accumulating patches which serve other purpose than release integration and stabilization.

Finally, MeeGo raises one other community challenge: as the only Linux distribution hosted by the vendor-neutral Linux Foundation itself, it could be perceived as an endorsed threat against the mobile Linux offerings of other Foundation members, such as Google.

Executive Director Jim Zemlin says the Foundation has a long history of industry collaboration on technical, legal, and community efforts, and he cites other direct competitors, such as Red Hat, Ubuntu, and Novell, that willingly participate in projects together and benefit from the work. The Linux Foundation's job is to protect the ecosystem and the community, he said, and by hosting the MeeGo project it will provide a neutral forum for progress in development, specifications, and governance. This makes MeeGo similar to other Linux Foundation projects, he added, such as Linuxprinting.org, Carrier Grade Linux, accessibility, the Linux Standard Base, and others.

There is no denying that mobile is a top-priority target for the Linux community. Android, webOS, and ChromeOS fans may perceive MeeGo as a threat, but when one looks beyond the brand names, the playing field is no different from the one on which desktop and server Linux distributions compete: all have access to the same kernel, the same graphics subsystem, and the same utilities and toolkits. No competitor is at a technical disadvantage.

What does make MeeGo unique in the mobile line-up is that it follows the desktop Linux model so closely. Individually, the Moblin and Maemo projects used that to their advantage, rapidly building robust communities and products. The odds are that their combined effort will play much the same way; if other, less open Linux-based mobile stacks see their sales threatened as a result, the best response will be for them to change their game.


Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: Trust, but verify; New vulnerabilities in fetchmail, kernel, mod_security, openoffice,...
  • Kernel: Merging kdb and kgdb; How old is our kernel?; A new series on huge pages.
  • Distributions: "Easy" is in the eye of the beholder; NetBSD 5.0.2; Fedora 13 and rawhide diverge; Live Hacking CD; Element.
  • Development: Karma targets easier creation of educational software, Welte on GSM, Morvena animation, new versions of JACK, libshcodecs, moin, Nagare, GNOME, Rosegarden, Gnumeric, OO.o, dvd_menu_animator, Gnash, Parrot, LDTP, Git.
  • Announcements: Aava Mobile handset, NI acquires Ettus, Linux Box markets Ubuntu, Moblin and Maemo to merge, Static Analysis finds bugs, KVM development, SGI's Cyclone HPC cloud, open source dangerous to education, SciPy 2010, Spanish DebConf.

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds