Web browsers tend to be fast-moving targets. They are frequently updated
both for security holes and to add new functionality. Two of the most
popular free software browsers, Firefox and Chromium, also have fairly
short release lifecycles, which often forces distributions to backport
security fixes into releases that upstream no longer supports. Both Mozilla
and Google are more focused on the Windows versions of their
browsers—where application updates with bundled libraries are the
norm—which makes life difficult for Linux distributions. So difficult, in
fact, that it appears Debian will be dropping Chromium from the upcoming
6.0 ("Squeeze") release.
On September 1, Giuseppe Iuculano posted to debian-release asking that the
release team allow him to replace Chromium version 5, which was in the
testing repository, with version 6. Version 5 will no longer receive
updates from Google, and Iuculano was concerned that it would be too
difficult to backport patches that fix some "security issues" with SVG in
WebKit because of major refactoring in that codebase. Roughly a week
later, he uploaded Chromium 6 and asked that the team either unblock it so
that it could migrate from unstable into testing, or remove Chromium 5
from testing. The team opted for the latter.
One of the big problems is that Chromium uses a WebKit version that is
bundled into the browser source, rather than using a particular released
version of the library. Each time Google updates the browser, a new WebKit
comes with it, and the old browser goes into an unsupported state. In
order to keep using the v5 browser, any security fixes from the new
browser and its bundled libraries would have to be ported back into the
older code—not a small task by any means.
In addition, Chromium versions come fast and furious: v5 was only supported
for roughly two months, so the release team was worried that the same would
be true of v6. Meanwhile, Squeeze has been frozen, which means that new
features are not being added. For a release that prides itself on
stability, there really was no choice but to drop Chromium.
In response to
Debian project leader Stefano Zacchiroli's request for information to better understand
the decision, release assistant
Julien Cristau put it this way:
We were given a choice between removing chromium-browser from testing,
or accepting a diff of
22161 files changed, 8494803 insertions(+), 1202268 deletions(-)
That didn't seem like much of a choice. I don't have any reason to
believe the new version won't have the same problem 2 months (or a year)
from now, and as far as I know neither the security team nor the stable
release managers usually accept that kind of changes in stable.
Zacchiroli noted that he is a Chromium
user and wants to have a clear story for why the browser is not in Squeeze,
both for users and for upstream.
While it won't be available in the Squeeze repository (testing right now,
but stable once it is released), Chromium will likely still be available in
the official backports repository. That led Michael Gilbert to suggest a slightly different interpretation of
what "supported in stable" might mean:
I think that this need is justification to declare backports "officially
supported by the debian project". Thus when asked this question, you
can point to the fact that chromium is indeed supported on stable, just
via a different model than folks are used to. That is of course
assuming someone is willing to support the backport. I may do that if
Giuseppe isn't interested.
Having chromium not present in stable proper helps the security team
While it may help the security team, there are still some things to be
worked out for users of the backports repository. Currently, packages that
come from backports do not get automatically updated when doing an
apt-get upgrade (or its GUI equivalent). That would mean
that users would have to remember to go grab the latest Chromium whenever a
security update came out. Since backports has become an official part of
Debian, there is thought that changing the behavior to pick up updates from
there would make sense.
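One way a user can get that behavior today is with apt pinning. What
follows is a sketch, not official Debian policy; it assumes the
squeeze-backports suite name and the chromium-browser package as an
example:

```
# /etc/apt/preferences.d/chromium-backports -- a sketch, not official policy.
# Backports archives are marked NotAutomatic, so apt gives packages from
# them a very low priority and "apt-get upgrade" will not pull in newer
# versions. Pinning a specific package to priority 500 makes upgrades
# track the backports version of that package.
Package: chromium-browser
Pin: release a=squeeze-backports
Pin-Priority: 500
```

Making something along these lines the default for installed backports
packages is essentially what the debian-release discussion is about.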
It's not just Chromium that is affected, however. Squeeze will ship with
Iceweasel—the Debian-branded Firefox—in its repository, but there
seems to be a belief that over time, as the shipping Iceweasel version
falls further and further behind, it too will come from backports. That
would give further reason to make backports updates automatic, and that
seems to be the consensus on the debian-release list.
This is a problem that we haven't heard the last of. For any distribution
with a long-lasting support window (like Ubuntu LTS, Debian, or any of the
enterprise distributions), it is going to be very difficult to keep
supporting older browsers. The alternative, which is the direction that
most are taking, is to update to the latest upstream version throughout
the distribution's support window.
New browser versions often require newer system libraries, though, which
conflict with other applications' requirements. Either the browser can be
backported to use the libraries that shipped with the distribution, or the
newer libraries can be bundled with the browser. Ubuntu has taken the
latter approach, choosing
to bundle some libraries for 10.04 LTS.
It's also possible that eventually more than just web browsers will adopt
a fast release schedule with fairly short support windows. If that
happens, it seems likely that distributions will need to
pool their resources to backport fixes into the older releases or just
follow the upstream releases. It will be especially prevalent for
cross-platform software, where the Windows or Mac OS X versions are the
most popular. Bundling libraries is the usual path taken by applications
on those platforms, so they don't suffer under the same constraints that
Linux systems do.
Keeping up with what seems to be an ever-increasing pace of development,
while maintaining stability for users, is a tricky problem to manage.
Distributions may find that it is one they can't manage alone—or at
all. If the latter, distributions will increasingly have to rely on
stability coming from the upstream projects, but there is tension there as
well. Fast-moving projects are likely to make changes to fundamental
components, changing both the program's behavior and its interface. While
that doesn't necessarily make the application unstable in the usual sense
of the term, it does change things in ways that stability-oriented
distributions try to avoid. It's a difficult balancing act, and we'll have
to see how it plays out.
As part of the discussion that is currently raging in the Fedora community
about its target audience, specifically with regard to its update policy,
Máirín Duffy came up with some user
archetypes to put faces to the different kinds of users Fedora might
target. But that much-commented-on blog posting also had a "bonus" section that
pointed out how confusing the update interface can be for users who aren't
technically savvy (i.e. "Caroline Casual-User"). She suggested one way
that Fedora could potentially reduce the confusion and, as it turns out,
Richard Hughes had already been working on a similar
idea called app-install for the past few months. App-install could
help alleviate much of the confusion that casual users have when faced with
updates—no matter how many or few there are.
The problems that Duffy identified mostly centered around what gets
presented to users in the PackageKit GUI. She has an annotated version of
the interface that points out all of the confusing information that is
displayed. Even folks who are relatively new to Linux, but have some amount
of technical background, will probably find her notations to be somewhat
silly ("A streamer? Are we having a party? With 'G's'?" for
GStreamer, for example). But that is part of her point—there are many
things that technical users quickly recognize and at least partially
understand that will completely go over the heads of casual users.
One could argue that targeting these casual users does not make sense for
Fedora—many do—but Duffy's vision is that Fedora should reduce the
complexity to a "platform update", at least for some users. That update would contain
all of the different pieces that might normally be distributed on a
package-by-package basis. That way, a casual user sees just one update,
presumably fairly infrequently, that encompasses all of the underlying
package thrash that seems to occur frequently with Fedora.
As she readily admits in the posting, it may not be quite that simple.
In fact, she identified three groups of packages ("base platform", "core
desktop", and "applications") that could have bundled updates. Each of
those bundles might have updates released on different schedules, and each
would be rolled into the monthly platform update.
There are some other considerations as well, though, particularly security
fixes. Security updates come rather frequently for Fedora and most
distributions that keep up with upstream vulnerabilities. Her plan for
those is a bit vague, but the central idea is to produce less churn, less
frequently, so that system breakage (i.e. base platform and to a lesser
extent core desktop) is rare. That would also help reduce one problem that
casual users suffer from: they are forced to make decisions about updates
based on long lists of strange package names with largely incomprehensible
reasons for the update.
While app-install is not a complete solution to what Duffy is proposing, it
is a step down that path. The basic idea is that casual users want
"applications" rather than packages. An application is defined as a
program that ships with a desktop file and includes an
icon—probably exactly what less-technical users expect. In addition,
the applications are listed by the name users are used to, rather than by a
package name that can be different for each distribution (e.g. Firefox
vs. mozilla-firefox)—and sometimes changes between releases of the same
distribution.
It is not a coincidence that app-install seeks to standardize these names
across distributions. It is specifically geared to be a cross-distribution
tool and Fedora's Hughes was joined by two other distribution hackers,
SUSE's Roderick Greening and
Sebastian Heinlein of Ubuntu, to create the specification and tools. At
its core, app-install is a data format that contains the information
needed to display applications in a friendlier, localized way. The data
gets stored in a SQLite database file that can be distributed along with a
tarball of the application icons.
Hughes has created two demonstration applications, an installer and an
updater, that use the app-install data. This allows applications to be
listed with their icons and descriptions extracted from the desktop files.
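As a rough illustration of the idea (this is not the actual app-install
implementation, and the table layout here is hypothetical), extracting
desktop-file metadata into a SQLite database might look something like
this:

```python
# A minimal sketch of extracting application metadata from .desktop files
# into a SQLite database. The table layout is hypothetical; the real
# app-install schema may differ.
import configparser
import sqlite3

def extract_desktop_entry(text):
    """Parse the [Desktop Entry] section of a .desktop file."""
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string(text)
    entry = parser["Desktop Entry"]
    return {
        "name": entry.get("Name"),
        "comment": entry.get("Comment", ""),
        "icon": entry.get("Icon", ""),
    }

def build_database(path, entries):
    """Store (package, desktop file text) pairs as rows in a SQLite file."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS applications
                  (package TEXT, name TEXT, comment TEXT, icon TEXT)""")
    for package, desktop_text in entries:
        app = extract_desktop_entry(desktop_text)
        # Only "applications" in the app-install sense: a name and an icon.
        if app["name"] and app["icon"]:
            db.execute("INSERT INTO applications VALUES (?, ?, ?, ?)",
                       (package, app["name"], app["comment"], app["icon"]))
    db.commit()
    return db

sample = """[Desktop Entry]
Name=Firefox
Comment=Browse the Web
Icon=firefox
"""
db = build_database(":memory:", [("mozilla-firefox", sample)])
rows = db.execute("SELECT package, name FROM applications").fetchall()
print(rows)  # [('mozilla-firefox', 'Firefox')]
```

An installer front end would then query this database by the friendly
name, never showing the user a package name at all.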
Version 2 of the app-install schema, which is currently under development, will
add the ability to display application ratings and screen shots to help
users have a better understanding of what the application is and does
before installing it. It is a vision that is clearly influenced by things
like the Android Market and Ubuntu Software Center.
The app-install data is extracted from the desktop files of the
applications which, as the README file notes,
have "nice translations gently massaged and QA'd by upstream".
By extracting the data into a SQLite file, and pulling out just the icons
needed, much less disk space is used while still providing icons and
descriptions for all of the applications that a distribution ships.
There are currently tools available to extract the needed metadata from
Ubuntu and Fedora packages, and it should be fairly easy to add the ability
to extract it from other distributions.
Hughes announced app-install in a fedora-devel posting on September 7 and
there were immediate questions about the benefits to Fedora from a
cross-distribution implementation. Hughes had a ready answer to that:
Fedora is *not* a
big enough ecosystem to drive fully localized and feature rich user
experiences. Working with other distros mean we can work as one big
team and share the burden of translation, bug-fixes and writing new
common code. I certainly don't want to write software for Fedora, but
rather write software for Linux, and then write the small amount of
Fedora interface code.
There were also questions about using Hughes's definition of an
application, but the clear focus of the app-install project is for casual
users: "If you know what an epoch is, it's probably not for
you", he said in the announcement. But by the same token,
PackageKit (and yum or apt-get) will continue to exist, "so panic
not". The idea behind app-install should be seen as a potential
broadening of the reach for distributions, so that they can attract more users.
One could certainly imagine integrating some of Duffy's ideas into this
framework, with an app-install-based installer possibly replacing the
applications bundle she described, while
adding functionality to support the core desktop and base
platform bundles. It is far from clear that Fedora will adopt an update
strategy along the lines that she described, but it is clear that
there are likely to be changes to Fedora's update policies. The ideas
described by Duffy, as well as Jon
Masters and others, along with the code from the app-install folks may
just find a place as part of those changes.
It would also seem likely that, unlike some other cross-distribution
initiatives, the app-install code will actually be used in multiple
places. Hughes notes that Heinlein is driving many of the changes in the Ubuntu
Software Center, so that as app-install matures it will find its way into
that code. One would guess that having Greening involved means
that openSUSE is looking at using it as well.
Because it doesn't really change anything for existing users—they'll
still have the tools they've always used—but adds a "simplicity
layer" for new users, there should be relatively little resistance to the
app-install idea. There is a burden for distributions to maintain the
app-install data, but it's something that could be automated pretty
easily. Since it arguably brings Linux into the world of "Apps", which is
certainly a popular paradigm at the moment, its adoption by at least some
consumer-oriented distributions is not terribly far-fetched.
Your editor had the good fortune to be able to attend the first LinuxCon
Brazil event, held in São Paulo. There were a number of interesting
talks to be seen, presented by speakers from Brazil and far beyond. This
article will cover three in particular which were interesting as a result
of the very different views they gave on how Linux users work with their
systems.
Jane Silber is the (relatively) new CEO at Canonical; she went to Brazil to
deliver a keynote on the "consumerization of IT" and, in particular, its
implications for open source. What she was really there to talk about, of
course, was the interesting stuff that is being done with the Ubuntu
distribution. Linux serves the needs of expert users very well, but,
according to Jane, the future of Linux is very much in the
hands of "consumers," so we need to shift our focus toward that user base.
There are a number of things being done in the Ubuntu context to make that happen.
At the top of the list is "fit and finish," which she described as "the
sprinkling of fresh parsley" that makes the whole meal seem complete.
There have been incredible advances in this area, she says, but our
software still has a lot of rough edges to it. Consumer-type users can
usually figure things out, but it does not build confidence in the
software. To make things better, Canonical is sponsoring a lot of
usability research and working with upstream projects to improve their
usability. Results are posted on design.canonical.com. Projects like
100 papercuts have also
helped to improve the usability of free software.
Another area of interest is the distribution pipeline; the key here is
"speed and freshness." At one point, that idea was typified by the
six-month release cycle which, according to Jane, was innovative when
Ubuntu started using it. But now a six-month cycle is far too slow; the
example we need to be emulating now is, instead, the Apple App Store.
Apple's distribution mechanism would be rather less popular if it only got
new software every six months. The Ubuntu Software Store has been set up
in an attempt to create a similar mechanism for Ubuntu users. It will
provide much quicker application updates and - of course - the ability for
users to purchase software.
The end result of all this work is intended to be a "fit, finished, fresh
pipeline" of software, creating "a wave of code that consumers can surf on
top of." This may not be quite the way that most LWN readers think about
their use of Linux, but it may yet prove to be an approach which brings
Linux to a whole new community of users.
Vinod Kutty represents a very different type of consumer: the Chicago
Mercantile Exchange, whose Linux-based trading platform moves unimaginable
amounts of money around every day. His talk covered the Exchange's move to
Linux, which began in 2003. There was a fair amount of discussion of
performance benefits and cost savings, but the interesting part of the talk
had to do with how Linux changed the way that the Exchange deals with its
software and its suppliers. According to Vinod, we have all become used to
buying things that we don't understand, but open source changes that.
He described an episode where a proprietary Unix system was showing
unwanted latencies under certain workloads. With the Unix system, the only
recourse was to file a support request with the vendor, then wait until it
got escalated to a level where somebody might just be able to fix it.
With an enterprise Linux distribution, the support request is still made,
but the waiting time is no longer idle. Instead, they can be doing their
own investigation of the problem, looking for reports of similar issues on
the net or going directly into the source. Chances are good that the
problem can be nailed down before the vendor gets back with an answer.
A related issue is that of quality assurance and finding bugs. According
to Vinod, we are all routinely performing QA work for our vendors. The
difference with Linux is that any time spent chasing down problems benefits
the community as a whole; it also benefits CME when the fix comes back in
a future release.
Linux, Vinod said, has become the "Wikipedia of operating systems"; it is a
store of knowledge on how systems should be built. Taking full advantage
of that knowledge requires building up a certain amount of in-house
expertise. But having that expertise greatly reduces the risk of
catastrophic problems; depending on outside vendors, instead, increases
that risk. The value of open source is that it allows us to move beyond
being consumers and know enough about our systems to take responsibility
for keeping them working.
Once upon a time, it seemed like it was simply not possible to run a
Linux-related conference without Jon 'Maddog' Hall in attendance.
Unfortunately, we don't see as much of Maddog as we used to; one reason for
that is that he has been busy working on schemes in Brazil. One of those
is Project Cauã. Maddog
cannot be faulted for lack of ambition: he hopes to use Linux in a plan to
create millions of system administration jobs, reduce energy use, and
increase self-sufficiency in Brazilian cities.
Maddog has long been critical of the One Laptop Per Child project which, he
says, is targeting children who are too young, too poor, and too far from
good network access. He sees a group which can benefit more from direct
help: children living in large cities in countries like Brazil. These kids
live in a dense environment where network access is possible. They also
live in an environment where many people and businesses have computers, but
they generally lack the expertise to keep those computers working
smoothly. The result is a lot of frustration and lost time.
The idea behind Project Cauã is to put those kids to work building a
better computing infrastructure and supporting its ongoing operation. In
essence, Maddog would like to create an array of small, independent
Internet service provider businesses, each serving one high-rise building.
The first step is training the people - older children - who will build and
run those businesses. They are to be trained in Linux system
administration skills, of course, but also in business management skills:
finding customers, borrowing money, etc. The training will mostly be
delivered electronically, over the net or, if necessary, via DVD.
These new businesspeople will then go out to deliver computing services to
their areas. There is a whole network architecture being designed to
support these services, starting with thin client systems to put into homes
or businesses. The idea behind these systems is that they have no moving
parts, so they are quiet and reliable. They can be left on all the time,
making them far more useful. Maddog is working with a São Paulo
university to design these thin clients, with the plan of releasing the
designs so that any of a number of local businesses can manufacture them.
Despite equipping these systems with some nonstandard features - a digital
TV tuner and a femtocell cellular modem, for example - Maddog thinks they
can be built for less than $100 each.
The project envisions that each building would have an incoming network
link from a wholesale ISP and at least one local server system. Three
sizes of servers are planned, with the smallest one being made of two thin
clients tied together. These servers will run a local wireless network and
will be tied into a city-wide mesh network as well. Applications will
generally run in virtual machines - on either the client or the server -
and will all use encrypted communications and storage.
Project Cauã trainees will be able to buy the equipment using loans
underwritten by the project itself; they then should be able to sell
network access and support services to the tenants of the buildings they
cover. If all goes according to plan, this business should generate enough
money to pay off the loans and provide a nice income. Unlike the OLPC
program which, according to Maddog, has a ten-year payoff time at best,
Project Cauã will be able to turn a Brazilian city kid into a
successful businessperson within two years.
A project like this requires some funding to get off the ground. It seems
that Brazil has a law requiring technical companies to direct 4% of their
revenue toward research and development projects. Project Cauã qualifies,
and, evidently, a number of (unnamed) companies have agreed to send at
least some of their donations in that direction. With funds obtained in
this way, Maddog is able to say that the project is being launched with no
government money at all.
This project is still in an early state; the computer designs and training
materials do not yet exist. Some people have expressed doubts as to
whether the whole thing is really feasible. But one cannot deny that it is
an interesting vision of the use of free software to make life better for
Brazilian city dwellers while creating many thousands of small
businesspeople who understand the technology and are able to manage it
locally. This is not "cloud computing," where resources are pulled back to
distant data centers. It is local computing, with the people who
understand and run it in the same building.
Linux is being pushed in a lot of different directions for a wide variety
of users. Beyond doubt, there will be people out there who want to deal
with Linux-based systems in a pure "consumer" mode. For them, Linux
differs little from any other operating system. Others, though, want to
dig more deeply into the system and understand it better so that they can
fix their own problems or run it in the way that suits them best - or
empower others to do the same. Linux, it seems, is well placed to serve
all of these different classes of users.
Page editor: Jonathan Corbet