
LWN.net Weekly Edition for November 13, 2008

Fedora release cycles: longer or shorter?

By Jonathan Corbet
November 12, 2008
The Fedora 10 release is currently planned for November 25 - somewhat later than had been originally intended. Delays in Fedora releases are certainly not unheard-of, even when the project isn't coping with a major compromise of its fundamental infrastructure (the full story of which, it should be noted, still has not been told). Fedora 10 looks like it will be worth the wait, but the project is not waiting for the release to start thinking about its upcoming release cycles. A couple of discussions related to this topic provide some interesting insights into the pressures being felt by Fedora's leadership.

A recent video review of Fedora 10 was seen by the project as being something other than entirely favorable. But the biggest complaint expressed by the project is on a different subject: credit for work which is done by Fedora developers. Quoting Fedora leader Paul Frields:

Another point that had me scratching my head was the same host indicating that Fedora had a lot of features that were in Ubuntu 8.10. This is certainly true, but the differentiator is that many of these features were *built* by Fedora contributors, inside and outside Red Hat. It's important for us to keep emphasizing this fact.

Subsequent discussion indicates that a number of Fedora developers feel that other distributions - Ubuntu in particular - are stealing Fedora's thunder by shipping Fedora-developed improvements first. This is not the first time this kind of concern has been raised; it has been asserted that Novell's behind-closed-doors XGL work was done that way to keep Ubuntu from shipping it first. Fedora does not appear to be considering pulling its development from public view - that would run counter to the project's open nature - but some other responses are being discussed.

More than anything else, the Fedora project would like to ensure that the world knows about the work its developers are doing. Initiatives like the feature list for each release help to get information out ahead of the actual software release. There is also talk of more aggressive blogging, outreach to news sites, etc. The project has even posted a proposed marketing schedule which would help to ensure that all the right marketing activities are happening at the right points in the release cycle.

Former Fedora leader Max Spevack had a different suggestion to offer:

If "features" and "first" are hurting because of where we are in the calendar compared to the Ubuntu release, allowing them the chance to release their new distro first and to receive a lot of credit for new features when reviewers and press don't understand where the upstream work is being done (in Fedora, for example), then Fedora Marketing should ask the Fedora Board to think about altering our "May Day" and "Halloween" release targets by a little bit, so that Fedora's cycle finishes before Ubuntu's.

This proposal brings to mind a vision of distributors racing to be the first to release, leading to ever-shorter cycles and a corresponding decrease in release quality. It is hard to imagine that the first mover has such an overwhelming marketing advantage; there must be a better way.

It does not look like Fedora will attempt a "first post" counterattack anytime soon. In fact, if the recently-posted Fedora 11 release schedule proposal is adopted, the exact opposite will happen. In the past, Fedora has responded to a much-delayed release by shortening the following release cycle in an attempt to get back on schedule. For Fedora 11, it would appear that this will not happen; there will be no attempt to go for a "May Day" release.

The reasoning against shortening the Fedora 11 cycle comes down to this:

Fedora 11 will be extremely important to Red Hat Enterprise Linux (otherwise known as RHEL). RHEL 6 planning has looked to use Fedora 10 and Fedora 11 as releases to work out new technologies and features that are desired in RHEL 6. This includes a lot of upstream work that is being done, and targeted to land in these two releases.

So a shortened Fedora 11 cycle would make it harder to get all of the changes planned for RHEL6 in. That's problematic for Red Hat, and, since Red Hat pays for much of Fedora's existence, Red Hat's problems become Fedora's problems. Beyond that, though, it seems that a number of core Red Hat engineers will be working on Fedora during the next cycle to help get RHEL6-targeted features into shape. If the next cycle is shorter, Fedora will get less attention from those developers. Fedora would like to avoid that situation and take advantage of the RHEL team's attention while it can.

So the proposal is to retain the six-month cycle for Fedora 11 and release around the beginning of June. The Fedora 12 cycle, though, would be shortened to get the project back to the original schedule. The hope is that the advance notice will make it easier to plan for a short release cycle; Jesse Keating also suggests that the project "could even focus more on polish issues in F12 than large sweeping features." The more cynically-minded among us might conclude that Fedora 11 will be stuffed full of bleeding-edge new stuff that the RHEL team wants to evaluate, and Fedora 12 will be the release where all of that work is actually stabilized. But your editor would never want to be cynical.

The initial response to the proposed schedule is almost entirely positive, so it seems likely that things will go that way. Some Fedora developers may feel that releasing behind Ubuntu gives the project a public relations disadvantage, but other concerns are seen as being more important. Since those "other concerns" can be seen as "take the time to focus a lot of work on pulling together new features for an upcoming stable release," this set of priorities seems hard to argue with.

Comments (31 posted)

NLUUG/ELCE: Embedded devices and free software

By Jake Edge
November 12, 2008

On successive days, Harald Welte and David Woodhouse gave different views of the relationship between embedded companies and the free software communities whose code those companies are increasingly using. Their outlooks were not contradictory but complementary; each came at the topic from a different direction. Welte looked mostly at what companies, particularly chip vendors, could do better, while Woodhouse looked at what the community could do to improve.

Welte and Woodhouse spoke at the co-located NLUUG autumn Mobility conference and Embedded Linux Conference Europe in Ede, the Netherlands, on November 6 and 7. The Congrescentrum De Reehorst facility was excellent and well-suited to an event of this type, which is not surprising, as NLUUG has been holding two events there each year for the last ten years or so. In addition, the conference was well-organized and well-run, clearly displaying the experience that comes from NLUUG's 26 years of existence.

[ The following covers Welte's presentation; Woodhouse's talk will be covered in a subsequent article. ]


Welte kicked things off on Thursday with a talk entitled "How chipmakers should (not) support free software". As the conference got a bit of a late start and was already 15 minutes behind at that point, Welte said that he would make the time up because "everyone can understand gzip compressed speech". More seriously, he outlined his experience as a member of the Linux community, as an embedded developer, as someone who has seen the chip manufacturer's side through his recent work with VIA, and as a customer of consumer-grade embedded devices through gpl-violations.org, all of which give him multiple relevant points of view.

Linux is being found in more and more devices today—some less than obvious. Welte listed fairly well-known things like mobile phones and in-flight entertainment systems, but then noted that there are DSL access multiplexers (DSLAMs), payphones, ATMs, as well as vending and exercise machines that also run Linux.

Vendors of those devices are using free and open source software (FOSS) because of its strengths, which he outlined. There is a great deal of innovative and creative development done in FOSS because the barriers to entry are fairly low: the codebase is easy to read—at least in comparison to closed source—and there are standard development tools that are freely available. Because development is done in the open, developers will be embarrassed if their software architecture or code is bad. This also results in better security because of the code review that takes place.


The outcome of using FOSS this way is that "we should have a perfect world" with tons of embedded products, all secure and maintainable, that allow for additional or alternate functionality via third parties. The first of those, many embedded products, has been achieved, but we are still waiting for the other two, Welte said.

He contrasted a user's experience with Linux on PCs today with the experience provided by most embedded devices. For PCs, you can download the kernel, build it and it will run, with most hardware supported. You can choose from multiple distributions, any of which will have a kernel close to that of a mainline kernel and provide regular security updates. These are "things we are used to for many years", but things are not that way in the embedded space.

In the embedded world, every CPU or system-on-a-chip (SoC) has its own kernel tree, typically based on some ancient version of the kernel, that never gets cleaned up or submitted for mainline inclusion. So those trees get no benefit from new features or security fixes in the kernel. There are no distributions to choose from, either for users or for board makers, and, even if updates are generated, there is generally no packaging system to use to update the code; re-flashing the entire device is required.

In Welte's words, "this sucks!" The embedded vendors get unstable and unmaintainable software with "security nightmares" and no innovation from elsewhere. The vendors have kernels that have diverged so far from the mainline that new features or fixes can't be backported, nor can their kernels get merged upstream. This is because the vendors tend to be very short-sighted, only focusing on getting one particular device out the door.

From Welte's perspective, embedded vendors do not understand the real potential of FOSS. They do not think in terms of creating platforms that others can build atop. In general, "they would rather sell a new [device] rather than improve the existing one". So, the vendors compete on the basis of the features their proprietary competitors implement rather than figuring out how to take advantage of the true strengths of FOSS. If, instead, they used FOSS to its fullest, they could outcompete the proprietary vendors in ways that could not be matched—except by using FOSS.

Turning to the chip vendors, Welte points out that there are two types of customers: Linux-aware and Linux-unaware. The Linux-aware customers—whose numbers are growing—will seek out vendors whose Linux support is better. It is already relatively late in the game: "if you don't have proper FOSS support, you will lose the 'openness competition'".

Chip manufacturers should be engaging in "sustainable development" by releasing kernels developed against the mainline in cooperation with the community. One large mistake these vendors make is to think their customers are only the tier-one companies that buy chips directly. There are many more downstream users of a chip once it has been integrated into other hardware; the buyers of those devices are also important as they will determine the success or failure of the product.

Unsurprisingly, Welte recommends that the development be done in the open, with a public development tree. Releases should not just be stable snapshots or big code drops; "post early, post often" should be the governing principle. FOSS is not just a technology, as chip vendors tend to think, it is a research and development philosophy that needs to be integrated into both the internal and external processes of the chip vendor.

On the external side, making documentation available, without a non-disclosure agreement (NDA)—or at worst a FOSS-friendly NDA—is essential. Internally, there is normally quite a bit of learning required to understand the FOSS philosophy. This will require training for engineers as well as product management folks. Having a clear FOSS support strategy, with clear goals, is important for making it work.

Product management needs to understand that supporting Linux is mostly a process of understanding the development model. The Linux APIs are not a particularly big hurdle, but understanding the community and how to work within it can be. Supporting Linux should mean supporting the mainline, not just N distributions, as N will grow over time, which leads to more problems. It is important to recognize that Linux-aware customers care as much about the quality of the code as they do about price and performance.

Engineering management needs to encourage engineers to communicate with the community, which requires real internet access. When faced with adding functionality to some FOSS code, they should be looking at ways to cooperate with others who have similar needs, rather than reinventing the wheel. Engineers need to figure out how and where to ask the right kinds of questions. They also need to learn that code is written to be read, not just executed; "this is something new to many people".

The community also has responsibilities to help the chip makers by providing "non-partisan" documentation because these manufacturers often have "no clue where to start or who to talk to" when they start considering supporting Linux. Commercial embedded distributors have a different perspective from the community so documentation from the community viewpoint is required. Welte says that various Linux Foundation sponsored efforts are helping in this area, but more needs to be done. A mentoring program of some sort might help by having FOSS developers willing to work with engineers to walk them through the process of getting their code upstream. The community must also work to keep from scaring chip vendor engineers away by being overly rude or terse; it is important that valid criticism be fully explained.

Welte sees a number of current or looming problems for chip vendors in supporting Linux, mostly involving patents or technology licensing issues. Various licensing regimes (like those for MPEG or Sony's memory stick) impose requirements that essentially preclude the development of free software drivers to talk to devices that implement those technologies. Everyone in the industry has these problems, though, so Welte suggests that they band together to present a case to the license holders; with enough smaller players working together, their voice can be heard.

On the whole, Welte is somewhat pessimistic about where embedded devices are headed. He certainly sees more FOSS being used in devices in the future, but expects to see them still be restricted so that they cannot leverage the full potential of FOSS. He does see "some very dim light at the end of a very far tunnel" with projects like Openmoko, but also efforts by some chip vendors, notably Intel, to fully support Linux.

It was not that many years ago that the desktop Linux situation looked as bleak as the embedded space does today, so there is hope. Presentations like Welte's can only help to bring that change about. The audience contained many embedded developers; with luck, they can help their companies' management see the benefits that Welte outlined so that his perfect world comes about sooner. If the desktop is any guide, it will come about eventually.

Comments (18 posted)

NLUUG/ELCE: Embedded Linux and the community

By Jake Edge
November 12, 2008

As one of two embedded maintainers for the Linux kernel, David Woodhouse is in an excellent position to see where the community is failing to keep up its end of the bargain. At the recent co-located NLUUG and Embedded Linux conferences, his keynote on the second day made it very clear what areas he sees that need improvement. We fairly regularly hear about things that companies should be doing—see the report on Harald Welte's first day keynote—but the community should certainly keep an eye on its behavior as well. In his presentation, Woodhouse notes multiple projects that are not upstreaming their changes; he also notes things that individuals could do to make Linux better.

He started by pointing out that "it's not entirely clear what 'embedded' means", as there are many kinds of devices that have embedded attributes. Things like headless operation, handheld form factors, low power, small size, limited RAM, or limited persistent storage tend to be a part of the description of embedded devices, but there is "no real definition that I'm aware of that makes any sense".

Woodhouse then went on to see if he could define what an "embedded maintainer" is and does. He doesn't see the role as chasing patches to get them included upstream; it is more of an advocacy role. Keeping an eye out for stupidity in the kernel using Bloatwatch and other tools, as well as encouraging people—in various companies as well as in different parts of the community—to work together on solutions to problems they have in common, are all part of the job.
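
As a concrete illustration of the kind of checking such tools do, here is a minimal sketch, in the style of the kernel tree's scripts/bloat-o-meter, of a symbol-size comparison between two builds. It is not any of the actual tools; the vmlinux file names are placeholders and GNU nm is assumed to be available.

    #!/usr/bin/env python3
    # Sketch of a bloat-o-meter-style comparison: parse "nm --size-sort"
    # output from two kernel builds and report which symbols grew.
    # "vmlinux.old" and "vmlinux.new" are placeholder file names.
    import subprocess

    def symbol_sizes(image):
        """Return a {symbol: size} map for an ELF image, using GNU nm."""
        sizes = {}
        out = subprocess.check_output(["nm", "--size-sort", image], text=True)
        for line in out.splitlines():
            parts = line.split()
            if len(parts) < 3:
                continue
            size, kind, name = parts[0], parts[1], parts[2]
            if kind.lower() in "tdbr":   # text, data, bss, read-only data
                sizes[name] = int(size, 16)
        return sizes

    old = symbol_sizes("vmlinux.old")
    new = symbol_sizes("vmlinux.new")

    # List the symbols that grew (or appeared) the most between the builds.
    deltas = sorted(((new.get(s, 0) - old.get(s, 0), s)
                     for s in set(old) | set(new)), reverse=True)
    for delta, name in deltas[:20]:
        if delta > 0:
            print("%s: +%d bytes" % (name, delta))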

From Woodhouse's perspective, companies are "getting a lot better" in terms of their Linux support. Less promising is the community: "We suck, really". He looked at a number of community embedded projects—like OpenWrt, Maemo, Moblin, and OLPC—to see how well they work with upstream; what he found was rather discouraging.

By looking at several concrete criteria, such as how many unsubmitted local kernel patches there were, how accessible their source is, and how old the kernel is that the project is using, Woodhouse is judging those projects the same way that companies are measured. Of the four projects that he looked at, only one, OLPC, was "mostly OK", the rest varied from "less good" to "FAIL".

Moblin, for example, only had 23 outstanding patches, but those were against kernel 2.6.24. OpenWrt had a more recent kernel version, 2.6.27, but had 160 outstanding patches, plus an extra 425 files weighing in at 125,000 lines of code, which prompted a "sorry!" from an OpenWrt developer in the audience. OLPC has just a few outstanding patches against 2.6.27.4, while Woodhouse couldn't even find the kernel source for Maemo.
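
Rough numbers like those are easy enough to reproduce for anyone with the trees at hand. The following is a sketch of one way to do it, assuming a local clone of a project's kernel tree that also has the upstream release tags; the repository path, base tag, and branch name used here are hypothetical placeholders, not anything taken from the talk.

    #!/usr/bin/env python3
    # Rough sketch: estimate how far a project kernel tree has diverged from
    # the upstream release it is based on.  The repository path, base tag,
    # and branch name are hypothetical placeholders.
    import subprocess

    REPO = "project-kernel"        # local clone with upstream tags fetched
    BASE = "v2.6.27"               # upstream release the project tracks
    BRANCH = "project/master"      # the project's own kernel branch

    def git(*args):
        return subprocess.check_output(["git", "-C", REPO] + list(args),
                                       text=True).strip()

    # Commits present in the project branch but not upstream: roughly the
    # number of outstanding local patches.
    patches = git("rev-list", "--count", BASE + ".." + BRANCH)

    # A one-line summary of how much code those patches touch.
    stat = git("diff", "--shortstat", BASE, BRANCH)

    print("outstanding patches vs %s: %s" % (BASE, patches))
    print("divergence: %s" % stat)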

Getting work upstream is extremely important. Running older kernels and backporting fixes and features may seem like it saves time, but "it never works in the long run, it's a false economy". Woodhouse listed the usual suspects as reasons to get things upstream: code review, compile testing, updates for kernel API changes, and automated bug checking. He also mentioned the Kernel Janitors, whose efforts are generally useful, even though they are "often a little misguided, sometimes they don't engage their brain before sending patches". All of these benefits only come from getting code into the mainline.

The theme of the talk is summed up in one statement: "Divergence is pain". Any time that your code is not current with the most recent kernels or your patches are not making their way upstream, it should be felt as pain because diverging from upstream will end up causing exactly that. The pain may not be felt until later, but Woodhouse wants developers to recognize the problems caused by divergence so that they are averse to it right from the start.

Looking at the reasons why code is hoarded is instructive, he says. One commonly heard reason, along with Woodhouse's opinion of it, is summed up in a bullet point on one of his slides: "too hard to write decent code get code accepted", with "write decent code" struck through. Another reason is that there is not enough time in the schedule for getting code merged. Many "see it as an extra part of the process after the driver is complete", which is the wrong way to look at it. Drivers and other features should be shared early on the appropriate mailing list so that any problems are dealt with near the beginning of development.

An issue related to code quality is that many times drivers are developed for ancient versions of the kernel, but that really shouldn't be a barrier as any "decent code will port relatively easily". Sometimes there is resistance to changes by the upstream developers. An example he noted was a feature that allowed multicast to be optionally removed from the IPv4 networking stack. It saved a fair amount of space for embedded devices that did not need that functionality, but David Miller and other networking developers were not very interested. This is where the embedded maintainer role can come into play as Woodhouse can step in to try to help convince the upstream developers.

Woodhouse had specific suggestions for making the situation better. "For a start, put everything in git trees", as that allows others to look at and test the code. Each feature should have its own topic tree that gets pulled into the main tree, and developers should regularly assess the outstanding code to determine if it is ready to be moved upstream. Working with the upstream developers, getting them involved, and getting them to care about the feature or driver is crucial. In cases where a logjam develops, call on Woodhouse or Andrew Morton; they "can't promise any miracles, but often it can help".

Something that Woodhouse would like to see more developers do is to adopt a driver. There are countless drivers in SourceForge and elsewhere that are not upstream, so he suggests that folks "pick one driver, just tidy it up and make it acceptable upstream". Incidentally, Woodhouse is no fan of SourceForge: "I don't think I wrote 'don't use SourceForge' on any of the slides, but pretend that it's there". He mentioned the -staging tree as a possible destination for adopted drivers, though he is skeptical of the tree, "but it exists, we should see if we can get something from it".

Woodhouse summed up his talk with a simple statement: "We need to work better as a community before we can point fingers at companies who don't play nicely". It is certainly true that the community needs to set a good example for companies to follow. By highlighting some of our failures, Woodhouse has done the community a great favor; we can, and with luck will, do better.

Comments (9 posted)

Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: Storm botnet used to study spam; New vulnerabilities in acroread, flash-plugin, gnutls, wordpress,...
  • Kernel: Tracking of testers and bug reporters - a status report; /dev/ksm: dynamic memory sharing; The sad story of the em28xx driver.
  • Distributions: The shape of Fedora to come; OpenSolaris 2008.11 RC1; Debian Pure Blends
  • Development: The Gumstix Overo - a miniature X Window System platform, new versions of oVirt, FlameRobin, Hibernate, BusyBox, NASPRO, SoX, GNOME, LyX, Wine, MediaInfo, guitarix, Tapeutape, PeaZip, Task Coach, EMC, TakeNote, LLVM, TCPDF, RPyC, XPL editor, dlib, bzr, GIT.
  • Press: Preserving Network Neutrality without Regulation, Creative releases Linux GPL X-Fi drivers, EC FOSS procurement guidelines, Bilski decision analyzed, Booting Debian in 14 seconds, Specialty Linuxes reviewed, Smolt review.
  • Announcements: Fixstars acquires Terra Soft, Movial releases Browser D-Bus Bridge, Novell's transition program, Cisco AXP dev contest, TPF Hague Grants, O'Reilly Java certificates, DOCHS cfp, FOSDEM cfp, SCALE cfp, UKUUG cfp, ERP5 World Forum, ETech 2009 program, LAC 2009 - Italy, Pure Data - France.

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds