The first Linux Foundation Collaboration Summit was held June 13
to 16 on Google's campus in Mountain View, California. This event
could be thought of as the coming-out party for the Linux Foundation, the
organization which resulted from the merger of the Open Source Development
Labs and the Free Standards Group. Your editor was able to join this
group, moderate a panel of kernel developers, and present his "kernel
report" talk to an interested subset of attendees. This event has been
well covered by many others, so your editor will focus on his
particular impressions. A number of other reports from the summit are also worth reading.
Your editor has been to a lot of Linux-oriented events over the years. The
collaboration summit was nearly unique, however, in the variety of people
who attended. It was certainly not a developer's conference, but quite a
few free software developers were to be found there. It is not a business
conference along the lines of OSBC, but plenty of executive-type business
people were in the room. Throw in a certain amount of media (on the first
day), a handful of lawyers, high-profile users from Fortune 500
companies, and some PR people and you get a cross-section of the Linux
ecosystem from developers of low-level code through to the people trying to
make that code work in serious business settings. It is rare that people
from the wider community get together and talk in this sort of setting.
The stated purpose of the event was to promote collaboration across this
wider community. The first step toward collaboration is understanding; the
summit was almost certainly successful in helping members of the community
understand each other better. For example, the kernel panel was a useful
exercise in communicating the developers' thoughts to their user
community. But a comment your editor heard more than once was that the most
interesting part of the panel was just seeing how those developers interact
with each other. Users, vendors, lawyers, and more were all able to
discuss the ups and downs of Linux from their point of view. The bottom
line is that things are going great, but they could be made to go quite a
bit better yet.
Ubuntu founder Mark Shuttleworth was the keynote speaker for the first day
of the summit. His talk covered a number of topics, but the core point,
perhaps, was this: while we have many tools which promote collaboration
within projects, we lack tools to help with collaboration
between projects. Wouldn't it be nice to have one distributed bug
tracking system and a comprehensive, distributed source management system?
Maybe the kernel developers and the enterprise distribution vendors could
get together and designate an occasional kernel development cycle as being
targeted toward enterprise release - and, thus, put together with a larger
emphasis on stability. In general, there is a great deal of friction
within the system; removing that friction will be an important part of our
work going forward.
Some themes were heard many times. There is a lot of interest in GPLv3 and
the impact it will have on the industry. The message from the summit was
that little will happen in a hurry, and that the best thing to do is to sit
and watch. Everybody wants better power management and better device
driver coverage. There is real tension between enterprise customers'
desire for stability and their appetite for security fixes and new features. Freedom matters:
it is fun to hear a manager from Motorola talk about how using Linux makes it
possible for the company to create interesting new products that couldn't
have been done on "somebody else's stack." And, some press headlines
notwithstanding, large proprietary software vendors were absent from the
room - both physically and from the discussions which were held. This was
not a meeting intended to design a "counterattack"; it was a way for the
larger free software community to promote cooperation and understanding.
Finally, the summit was clearly intended to help the Linux Foundation
figure out what role it should really be playing. This organization is
still relatively new; it has a short period of time to prove that it will
be worth the fees that its members pay into it. The Foundation is settling
into three basic roles: promoting the development of Linux, protecting
Linux from threats, and working to standardize the platform. There
appeared to be wide agreement that, by organizing events like this summit,
the Foundation is off to a good start.
The Firefox 3.0 (FF3) development team has been busy, releasing a steady
stream of alphas over the last six months, in preparation for a final
release late this year.
The latest release is Alpha 5,
which seems like a good time to check in on the project and see what
changes have been made. The project, codenamed Gran
Paradiso, maintains an extensive set of documents on its planning center wiki. These documents are
worth a look for anyone interested in what features are planned, but also
provide insight into the planning process itself.
The first thing to notice is that there is not much different from Firefox
2.0, at least in the main window. The familiar buttons and bars are present in
their usual locations, the menus remain essentially the same, though the
performance seems a bit snappier. The main window is likely to remain the
same through the final release, but much of the rest of the UI will be
tweaked. So far, the team has focused more on the underlying code, while
using wiki pages to mock up the UI.
Much of the new functionality is under the covers in the Gecko 1.9
rendering engine. A specific goal of the engine development team was to
pass the Acid2 browser test,
and they have succeeded in doing that. Switching the engine to use the Cairo 2D graphics library will provide
support for SVG, PostScript, PDF and other formats. Performance
enhancements and a more native look, especially for the Mac, are also on
tap for FF3.
The biggest new feature for users has not yet appeared in the browser.
Places is a feature
meant to unify bookmarks, history and RSS feeds, while providing a means to
tag them to help organize them. In order to do that,
FF3 is storing the Places information in an SQLite database. This database will also
be available to Firefox add-ons, which can then offer other ways to view and
organize that data.
Using SQLite for bookmarks has been enabled for Alpha 5, with numerous
warnings about making a backup of your bookmarks file before running it.
Tagging, history and RSS feeds are still awaiting a UI before their storage
in the SQLite database is enabled.
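The appeal of an SQLite-backed store is that add-ons can slice the data in ways the browser itself never anticipated. The sketch below uses a hypothetical two-table schema (bookmarks plus tags) to show the idea; FF3's actual Places schema is not documented here and will differ.

```python
import sqlite3

# Hypothetical schema for tag-organized bookmarks; the real
# Places database layout in FF3 may differ.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT, title TEXT);
    CREATE TABLE tags (bookmark_id INTEGER, tag TEXT);
""")
db.execute("INSERT INTO bookmarks VALUES (1, 'https://lwn.net', 'LWN')")
db.execute("INSERT INTO tags VALUES (1, 'news')")
db.execute("INSERT INTO tags VALUES (1, 'linux')")

# An add-on could then present bookmarks grouped by tag with a
# single query, instead of parsing a flat bookmarks file:
rows = db.execute("""
    SELECT b.title, b.url FROM bookmarks b
    JOIN tags t ON t.bookmark_id = b.id
    WHERE t.tag = 'linux'
""").fetchall()
print(rows)  # [('LWN', 'https://lwn.net')]
```

A single bookmark can carry any number of tags, which is exactly what the old flat bookmarks file made awkward.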
One UI element that has been updated is the page info popup, which received
an overhaul bringing its look more in line with other tabbed popups,
Preferences for example. More work of that sort can be expected as
consistency within the UI is definitely a goal
for FF3. The content handling interface is part of that work.
Earlier versions had different dialog boxes depending on how the
content was retrieved, which caused some confusion among users, so FF3 will unify
those dialogs into one consistent view.
Security is another area where the developers are putting in significant
effort. Providing users with feedback about the security of a site without
overwhelming them with warnings and popups is a difficult problem, but some
interesting ideas are emerging. With fairly simple UI changes, user
confusion can be reduced. Modifying the location bar to remove the
"favicon" (which some malicious sites set to the lock icon) and to
highlight just the domain portion of the URL can go a long way towards
helping users determine what sites they are visiting. Mozilla is also
working with Google to generate a list of sites delivering malicious
content and FF3 will block access to those sites.
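The domain-highlighting idea above can be sketched in a few lines. This is a simplified illustration, not Firefox's actual implementation: real browsers consult a public-suffix list rather than naively keeping the last two labels, and the function name here is invented for the example.

```python
from urllib.parse import urlparse

def highlight_domain(url):
    """Pick out the part of a URL to emphasize in the location bar.
    Naive sketch: keeps only the last two hostname labels, so a
    spoofed host like 'www.paypal.com.evil.example' is shown as
    'evil.example' rather than anything PayPal-like."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

print(highlight_domain("http://www.paypal.com.evil.example/login"))
# evil.example
print(highlight_domain("https://lwn.net/Articles/"))
# lwn.net
```

Even this crude version makes the deception in the spoofed URL visible at a glance, which is the point of the UI change.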
One worrisome development is the removal of the "same domain" restriction
on XMLHttpRequest (XHR) calls. XHR is the workhorse of the AJAX style of
browser interaction, and web designers have long chafed under that
restriction. There are proposals to lift it by using "access control" lists, and
the FF3 team plans to
implement them. The current restrictions have served us well, at least from a
security perspective; hopefully this change has been well thought out.
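The core of the access-control idea is that a site declares which foreign origins may make cross-site requests to it, and the browser enforces the declaration. The sketch below shows only that matching logic; the function name and list format are invented for illustration, and the real W3C proposal's processing model is considerably richer.

```python
def origin_allowed(origin, allow_list):
    """Check a requesting origin against a site's access-control
    list. A wildcard entry '*' opens the resource to any origin;
    otherwise the origin must match an entry exactly."""
    for entry in allow_list:
        if entry == "*" or entry == origin:
            return True
    return False

# A site could grant cross-site XHR only to trusted partners:
acl = ["https://partner.example", "https://stats.example"]
print(origin_allowed("https://partner.example", acl))  # True
print(origin_allowed("https://evil.example", acl))     # False
```

The security question raised above comes down to how carefully sites will maintain such lists once the browser no longer enforces the blanket same-domain rule for them.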
Another big addition, still in the "coming soon" category, is the addition
of more offline capabilities to the browser. Being able to run web
applications when not connected to the internet is one of the main goals.
In order to do that, the history of pages will have to include the state of
any web applications embedded in the page. With a big enough browser cache, this would allow
enough context to re-browse pages from weeks ago, even when offline.
Overall, FF3 looks like an exciting release with a wide variety of new
features. The current alpha does not really provide even an approximation
of the full feature set, but it is still worth a look. At roughly the
halfway point in FF3 development, great strides have been made, with more to
come.
One of the main selling points touted by many Linux-oriented vendors is
stability. Once a customer buys a subscription for an "enterprise" Linux
or embedded systems product, the vendor will fix bugs in the software but
otherwise keep it stable. The value for customers is that they can put
these supported distributions into important parts of their operations (or
products) secure in the knowledge that their supplier will provide updates
which keep the system bug-free and secure without breaking things. This
business model predates Linux by many years, but, as the success of certain
companies shows, there is still demand for this sort of service.
So it is interesting that, at the recently-concluded Linux Foundation
Collaboration Summit, numerous people were heard expressing concerns about
this model. Grumbles were voiced in the official panels and over beer in
the evening; they came from representatives of the relevant vendors, their
customers, and from not-so-innocent bystanders. The "freeze and support"
model has its merits, but there appears to be a growing group of people who
are wondering if it is the best way to support a fast-moving system like
Linux.
The problem is that there is a great deal of tension between the "completely
stable" ideal and the desire for new features and hardware support. That
leads to the distribution of some interesting kernels. Consider, for
example, Red Hat Enterprise Linux 4, which was released
in February, 2005, with a stabilized 2.6.9 kernel. RHEL4 systems are still
running a 2.6.9 kernel, but it has seen a few changes:
- Update 1 added a disk-based crash dump facility (requiring driver-level
  support), a completely new Megaraid driver, a number of block I/O
  subsystem and driver changes to support filesystems larger than 2TB,
  and new versions of a dozen or so device drivers.
- Update 2 threw in SystemTap, an updated ext3 filesystem, the in-kernel
  key management subsystem, a new OpenIPMI module, a new audit
  subsystem, and about a dozen updated device drivers.
- Update 3 added the InfiniBand subsystem, access control list
  support, the error detection and correction (EDAC) subsystem, and
  plenty of updated drivers.
- Update 4 added WiFi protected access (WPA) capability, ACL support in
  NFS, support for a number of processor models and low-level chipsets,
  and a large number of new and updated drivers.
The end result is that, while running uname -r on a RHEL4
system will yield
"2.6.9", what Red Hat is shipping is a far cry from the original
2.6.9 kernel, and, more to the point, it is far removed from the kernel
shipped with RHEL4 when it first became available. This enterprise kernel
is not quite as stable as one might have thought.
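The gap between what uname reports and what is actually running can be made concrete by splitting a distributor's release string into its upstream base and vendor suffix. A minimal sketch; the sample string is in the style of RHEL4's kernel naming, not a verified uname output.

```python
import re

def split_kernel_release(release):
    """Split a kernel release string into the upstream base version
    and the vendor-specific suffix that follows it."""
    m = re.match(r"(\d+\.\d+\.\d+)(.*)", release)
    return (m.group(1), m.group(2)) if m else (release, "")

base, vendor = split_kernel_release("2.6.9-55.EL")
print(base)    # 2.6.9  -- the upstream version the kernel claims
print(vendor)  # -55.EL -- four updates' worth of backports hide here
```

Everything interesting in an enterprise kernel lives in that vendor suffix, which is why the bare "2.6.9" is so misleading.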
Greg Kroah-Hartman recently posted an
article on this topic which makes it clear that Red Hat is not alone in
backporting features into its stable kernels:
An example of how this works can be seen in the latest Novell
SLES10 Service Pack 1 release. Originally the SLES10 kernel was
based on the 2.6.16 kernel release with a number of bugfixes added
to it. At the time of the Service Pack 1 release, it was still
based on the 2.6.16 kernel version, but the SCSI core, libata core,
and all SATA drivers were backported from the 2.6.20 kernel.org
kernel release to be included in this 2.6.16 based kernel
package. This changed a number of ABI issues for any external SCSI
or storage driver that they would need to be aware of when
producing an updated version of their driver for the Service Pack 1
release.
Similar things have been known to happen in
the embedded world. In every case, the distributors are responding to two
conflicting wishes expressed by their customers: those customers want
stability, but they also want useful new features and support for new
hardware. This conflict forces distributors to walk a fine line, carefully
backporting just enough new stuff to keep their customers happy without
destabilizing the product.
The word from the summit is that this balancing act does not always work.
There were stories of production systems falling over after updates were
applied - to the point that some high-end users are starting to reconsider
their use of Linux in some situations. It is hard to see how this problem
can be fixed: the backporting of code is an inherently risky operation. No
matter how well the backported code has been tested, it has not been
tested in the older environment into which it has been transplanted. This
code may depend on other, seemingly unrelated fixes which were merged at
other times; all of those fixes must be picked up to do the backport
properly. It is
also not the same code which is found in current kernels;
distributor-private changes will have to be made to get the backported code
to work with the older kernel. Backporting code can only serve to
destabilize it, often in obscure ways which do not come to light until some
important customer attempts to put it into production.
All of this argues against the backporting of code into the stabilized
kernels used in long-term-support distributions. But customer demand for
features, and (especially) hardware support will not go away. In fact, it
is likely to get worse. Quoting Greg again:
For machines that must work with new hardware all the time (laptops
and some desktops), the 12-18 month cycle before adding new device
support makes them pretty much impossible to use at
times. (i.e. people want you to support the latest toy they just
bought from the store.) This makes things like "enterprise" kernels
that are directed toward desktops quite uncomfortable to use after
even a single year has passed.
So, if one goes on the assumption that the Plan For World Domination
includes moving Linux out of the server room onto a wider variety of
systems, the pressure for additional hardware support in "stabilized"
kernels can only grow.
What is to be done? Greg offers three approaches, the first two of which
are business as usual and the elimination of backports. The disadvantages
of the first option should have been made clear by now; going to a "bug
fixes only" mode has its appeal, but the resulting kernels will look
old and obsolete in a very short time. Greg's third option is one which
your editor heard advocated by several people at the Collaboration summit:
the long-term-support distributions would simply move to a current kernel
every time they do a major update.
Such a change would have obvious advantages: all of the new features and
new drivers would come automatically, with no need for backporting.
Distributors could focus more on stabilizing the mainline, knowing that
those fixes would get to their customers quickly. Many more bug fixes
would get into kernel updates in general; no distributor can possibly hope
to backport even a significant percentage of the fixes which get into the
mainline. The attempt to graft onto Linux an old support model better suited to
proprietary systems would end, and long-term support Linux customers would
get something that looks more like Linux.
Of course, there may be some disadvantages as well. Dave Jones has expressed some
discomfort with this idea:
The big problem with this scenario is that it ignores the fact that
kernel.org kernels are on the whole significantly less stable these
days than they used to be. With the unified development/stable
model, we introduce a lot of half-baked untested code into the
trees, and this typically doesn't get stabilised until after a
distro rebases to that kernel for their next release, and uncovers
all the nasty problems with it whilst it's in beta. As well as
pulling 'all bugfixes and security updates', a rebase pulls in all
sorts of unknown new problems.
As Dave also notes, some mainline kernel releases are better than others;
the current 2.6.21 kernel would probably not be welcomed in many stable
environments. So any plan which involved upgrading to current kernels
would have to give some thought to the problem of ensuring that those
kernels are suitably stable.
Some of the key ideas to achieve that goal may already be in place. There
was talk at the summit of getting the long-term support vendors to
coordinate their release schedules to be able to take advantage of an
occasional extra-stable kernel release cycle. It has often been suggested
that the kernel could go to an even/odd cycle model, where even-numbered
releases are done with stability as the primary goal. Such a cycle could
work well for distributors; an odd release could be used in beta
distribution releases, with the idea of fixing the resulting bugs for the
following even release. The final distribution release (or update) would
then use the resulting stable kernel. There is opposition to the even/odd
idea, but that could change if the benefits become clear enough.
Both Greg and Dave consider the effects such a change would have on the
providers of binary-only modules. Greg thinks that staying closer to the
upstream would make life easier by reducing the number of kernel variants
that these vendors have to support. Dave, instead, thinks that binary-only
modules would break more often, and "This kind of breakage in an
update isn't acceptable for the people paying for those expensive support
contracts." If the latter position proves true, it can be seen as
an illustration of the costs imposed on the process by proprietary modules.
Dave concludes with the thought that the status quo will not change anytime
soon. Certainly distribution vendors would have to spend a lot of time
thinking and talking with their customers before making such a fundamental
change in how their products are maintained. But the pressures for change
would appear to be strong, and customers may well conclude that they would
be better off staying closer to the mainline. Linux and free software have
forced many fundamental changes in how the industry operates; we may yet
have a better solution to the long-term support problem as well.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Red Hat and IBM get certified; New vulnerabilities in evolution-data-server, gd, iscsi-initiator-utils, mplayer...
- Kernel: Scenes from a flame war; btrfs and NILFS; Getting the message from the kernel.
- Distributions: Access Control - What is it good for?; SUSE Linux Enterprise 10 SP 1, openSUSE 10.3 Alpha 5, Slackware 12.0 RC 1, YDL v5.0.2; RHEL certified at EAL4+; EOL for SUSE Linux 9.3 and Fedora Core 5
- Development: Google Summer of Code Series, OpenMRS, Python 3000 status update,
new versions of SQLite, Pixy, REMO, Qucs, SQL-Ledger, ChessX, Wine, CAPS,
Free Music Instrument Tuner, Muxi, GPSMan, Pootle, binutils, GIT.
- Press: Doc Searls on privacy and advertising, problems with the Peer to Patent
Project, Joomla and proprietary extension modules, Linux Foundation
Collaboration Summit coverage, QuickBooks for servers but not desktops,
neuroimaging with Linux, interviews with Lars Knoll, Fred Miller and
Jeff Mitchell, survey on free documentation, EncFS HOWTO,
Google Browser Sync extension, reviews of JackLab, Liferea, X-Wrt, RPM
and FUSE, the PlayOgg campaign.
- Announcements: US court rules email is private, OSS open-sourced, Linbox Secure Control,
Linspire does the Microsoft deal, Mandriva won't do Microsoft deal,
Linux Digital Photo Frame Kit, Wind River in space, O'Reilly to sell
book chapters online, FLOSS License Slide, FOSS licensing in medicine,
RailsConf report, Piksel07 cfp, WORM cfp, Enterprise Architecture
Practitioners conf, OO.o conf registration opens, VMworld 2007,
WebGUI users conf, HyperFORGE launched.