The conversation started innocuously enough - or maybe it didn't. The
question raised by Rahul was this: given recent
decisions in the U.S. Supreme Court, might Fedora actually be able to point
at repositories containing codecs which are said to infringe upon
U.S. software patents? And, more to the point, regardless of what Red
Hat's legal department says, does Fedora want
to do such a thing?
Fedora leader Max Spevack responded that, to answer this question, "the
Fedora Board needs to reaffirm its larger strategy about
Multimedia." There was some digression on how
firmware does (or does not) differ from proprietary codecs. Then Mike
McGrath broadened the scope further with a more fundamental question:
What is our target market supposed to be?
The following is a quote from Bill Nottingham's response, but his message is worth reading in its entirety:
We don't have one! Seriously, I have yet to see anything that shows
that we have a coherent market, a plan for attack, or *anything*
along those lines.
So, we muddle along. Since no one has a plan or a target market, we
implement whatever features the developers happen to think of, or
random features vaguely relating to future enterprise
development. Or we just incorporate the latest upstream....
Right now we don't have any overriding set of goals. So we never
really say 'no, that isn't what we want Fedora to do' to anything
that fits our simple 'uses open source, isn't completely targeted
to obsolete things' mantra, and we attempt to do all of these
things... which means we'll probably fail at all of them.
This message clearly resonated among the Fedora developers, none of whom
stood up to say that he or she had a clear idea of who the target market
is. Fedora hackers are looking over at Ubuntu, which has adopted a
focused view of what it is trying to do and which has had significant
success as a result. The Fedora project is seen as lacking that focus;
it's not sure of what it's trying to do. As the distribution matures, its
community is starting to ask itself some hard questions about where it is
trying to go. It's a sort of free software project mid-life crisis.
Initially, Fedora's mission was seen - at least by outsiders - as serving
as a proving ground for software destined to go into Red Hat Enterprise
Linux and as a way to keep the venerable Red Hat Linux product around. So
the target market would have been Red Hat itself, along with the Red Hat
Linux users that Red Hat believed - almost certainly correctly - were an
important part of making its enterprise offerings successful. There was no
painful introspection in those days; Fedora mostly did what Red Hat wanted
done - integrating Xen, for example - with the result that users began to
despair of it ever being a truly community-oriented distribution.
The situation has since changed considerably. Red Hat still holds
considerable sway over what Fedora does by virtue of paying a large number
of engineers to work on it. But the distribution has become much more open
and more driven by what its community wants it to be - should the community
decide what that is.
There is a certain interest in turning Fedora into a polished desktop
distribution. Doing so would require making some hard decisions: focusing
on a single desktop, for example. It would require some sort of solution
to the patent-encumbered codec problem. The support period - recently
lengthened to just over one year - would probably have to be made longer
yet. Much work would have to be done to make the various components of the
distribution work together better; the tug-of-war between the two ways of
configuring network interfaces (system-config-network and NetworkManager)
was mentioned a few times.
Maybe, instead, Fedora wants to be a solid base upon which others can
create finished distributions, much like the role Debian plays for Ubuntu.
There is a certain amount of pride in the project's Revisor tool, which makes it easy
to create derivative versions of Fedora. If this tool worked well with
external repositories, others could take on the work (and legal risk, if
any) of creating and distributing versions of Fedora with complete codec
support, binary-only drivers, or any of the other things which are not
consistent with Fedora's philosophy. Aside from the fact that Fedora is
still seen (by its developers) as needing more "polish" to serve in this
role, there is an interesting set of trademark issues which comes into play
once a derivative distribution contains something other than Fedora
packages. Fedora's trademark policy is already seen as an
impediment by people making derived distributions (such as Dell's
firmware updates live CD). It will be even harder for people trying to
take Fedora into entirely new territory. The issues can be resolved by
simply removing all references to the Fedora name, but there are advantages
on both sides if derived distributions can claim to be based on Fedora.
There has been some talk about how the
policies could be changed, but anything concrete is still some time away,
if it comes at all.
Alternatively, Fedora could be a distribution for developers who want
something close to the leading edge and who are less concerned with
"polish." It's a legitimate audience, but it is also limited in size.
A number of other scenarios have been presented, but what is really
required is for people to make the decisions and to get the work done to
implement those decisions. It seems that Fedora is currently short of
decision makers. Jesse Keating expressed
it this way:
We seem to have a lot of sous chefs which are busy doing what they
know, but no executive chefs with a grand vision of what will be on the menu.
Anybody who aspires to be an executive chef can, if they actually try to
make significant changes, expect a fair amount of resistance from elsewhere
in the community. But perhaps the time has come for somebody who looks
forward to that sort of challenge. The Fedora project has a solid base to
build on and an increasingly open community process to help it get to where
it wants to be. With the right focus on an interesting set of goals,
Fedora could surprise the world. This distribution should have no trouble
proving that it's not over the hill yet.
Comments (26 posted)
An "online desktop" is not exactly a new idea, as X-based thin clients
have been around for twenty years or more, but combining the desktop and
the web is an idea that is gaining some momentum, at least in the GNOME
community. The online-desktop
project is an attempt to define a mashup of Linux, GNOME and web
applications into something completely new. It is an ambitious goal, which
will be met with a fair amount of skepticism, likely by all of the
communities being mashed.
In a keynote at the recent GUADEC 2007
conference, Bryan Clark and Havoc Pennington laid out a vision
(slides in PDF format) of the online-desktop (OD) with the following goal statement:
The perfect window to the
Internet: integrated with all your favorite online apps, secure and
virus-free, simple to set up and zero-maintenance thereafter.
Many people are or will be using online applications almost exclusively,
with the operating system just providing a platform to run the browser, at
least according to Pennington and Clark.
The OD would seamlessly connect the browser-based applications with any
native programs that remain, storing data locally and remotely. This would
allow users to access their data, including settings and preferences, from
any internet connected device. A user would be able to jump between multiple
computers and mobile devices, finding their entire desktop environment and
data available on each. A new disk and fresh install would no longer require
a tedious reconfiguration of preferences and restoration of backups;
a user would simply log in to the 'service' and pick them all up.
This network-centric view of computer usage is not particularly new
either - Sun's "the network is the computer" initiative is a famous
(or infamous) example. The keynote points to plans for the next version of
Windows, which will be more closely integrated with Microsoft's internet
services, as an indicator that the OD direction is the right one. In order
for Microsoft to play its usual lock-in game, it would need to provide most
or all of the kinds of web applications that people already use. OD
proposes to integrate with the existing applications, presenting a single
view that incorporates them and facilitates sharing between them,
without the lock-in.
The requisite demo during the keynote was of
Big Board, a GNOME application that prototypes portions of the OD,
using the Mugshot project. A high-level
implementation plan was also presented:
SEARCH AND DESTROY everything that leaves my data stranded
on a single computer.
INTEGRATE the best web applications with the desktop.
RETHINK the user experience to take advantage of live
connections to friends on the net.
CHANGE THE DEFAULTS so naïve users taking no special action
will create collaborative, backed-up, online data rather than local data.
By its very nature, OD has a very distributed architecture. It is meant to
talk to various servers to store data using the services (Flickr, Picasa,
Gmail, etc.) that the user is already using. But there will also be data
that needs to be stored, for instance preferences and configuration
information, for which a service will need to be created. This service
is envisioned to be decentralized, with at least some of the servers run
by the community. Like many parts of the project, it is still in the planning stages.
The project is young, with thoughts and discussion starting to pop up on the
GNOME desktop-devel mailing list in April. Since the conference,
things have started to heat up: the website has moved from within Mugshot to
its own site, some mailing lists have been created, and there has been a bit
of a discussion about an acronym. An obvious choice, using the first letters
of GNOME Online Desktop, leaves something to be desired, so the current
candidates seem to be GOLD (GNOME OnLine Desktop) or GOOD (GNOME Open Online
Desktop). Others would rather see it
referred to as GNOME Online without an acronym; we stayed out of
it and used OD.
Another piece that is in the planning stages is an API that would let
desktop applications share HTTP state and cache information. With
multiple programs talking to some of the same websites, cookies, at least,
will need to be shared between them. Sharing data cached from websites
between the browser and the other programs that use it would also help to
reduce traffic.
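No such interface has been defined yet, so the following is only an
illustrative sketch: the bus name, object path, and method names are
hypothetical, but they show the kind of small D-Bus session service (written
here in Python with dbus-python) that a shared cookie store might amount to:

    # Hypothetical sketch only: neither the bus name nor the interface below
    # is part of any real GNOME or online-desktop API.
    import dbus
    import dbus.service
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    BUS_NAME = 'org.example.HttpStateStore'      # hypothetical well-known name
    OBJECT_PATH = '/org/example/HttpStateStore'
    IFACE = 'org.example.HttpStateStore'

    class CookieStore(dbus.service.Object):
        """Toy in-memory cookie store shared by desktop applications."""

        def __init__(self, bus):
            super().__init__(bus, OBJECT_PATH)
            self._cookies = {}  # domain -> {cookie name: value}

        @dbus.service.method(IFACE, in_signature='sss')
        def SetCookie(self, domain, name, value):
            """Record a cookie so that other applications can reuse it."""
            self._cookies.setdefault(domain, {})[name] = value

        @dbus.service.method(IFACE, in_signature='s', out_signature='a{ss}')
        def GetCookies(self, domain):
            """Return the cookies known for a domain as name -> value pairs."""
            return self._cookies.get(domain, {})

    if __name__ == '__main__':
        DBusGMainLoop(set_as_default=True)      # let dbus-python use the GLib loop
        bus = dbus.SessionBus()
        name = dbus.service.BusName(BUS_NAME, bus)   # claim the well-known name
        store = CookieStore(bus)
        GLib.MainLoop().run()

In such a scheme the browser or a photo-upload tool would call SetCookie
after authenticating to a site and GetCookies before making its own
requests; the real design work lies in deciding which applications may read
which domains' state, and in sharing cached response bodies alongside the
cookies.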
Mixing and matching different web application APIs and storing lots of
personal data on remote servers will require careful thought about
security. There is some mention of "strong cryptography" being used, but
the concerns raised so far are mostly about handling (and
losing) private keys. Overall, the security issue seems to be a low
priority. A post to
Pennington's blog seems to miss the point, comparing the OD security issues
to those of online banking. Banks hold only a narrow slice of a person's
information, not the sum total of all the data one might have on their
computer. In order to
fulfill the "secure and virus-free" portion of the goal statement, a lot
more thinking and effort will have to be focused there.
Folks today typically carry more powerful computers, with more storage, in
their pockets than were even available to home users twenty years
ago. That trend seems to be continuing, at least for now, so there should
be ways to carry our own data with us. Desktops that were set up to handle
external, plugged-in storage devices and easily switch to an environment
stored there would remove the need to store that data on an internet
server, except, perhaps, for backups. This might be a simpler alternative
that removes some of the concern about loss of data control.
There are lots of opportunities to share and collaborate using web
applications, for pictures, text, video, music, etc. But there is also lots
of data folks may not want to share. Financial information, email,
contracts and work-related documents are just a few of the things that
people very well might want to keep private, naïve or no. It will be
very difficult to set up an environment that turns all data, by default,
into "collaborative, backed-up, online data", without sometimes exposing
sensitive data. Using the word processing tool to type a blog entry and a
love letter should not automatically expose both to the world.
An interesting, related development is an attempt to define what a "free" or
"open" web service is. If a user's personal data is to be stored somewhere
other than the local disk, potentially multiple places, it must be clear
what rights the user has to that data. The responsibilities of the service
must be clearly defined as well. Luis Villa has some thoughts
about the framework in which an Open Service Definition might come about.
The framework consists of sets of goals, preconditions and rights, each of
which can be thought of as a "sliding scale". He goes into some detail
enumerating each of the sets and discussing various settings that could be
made on the scales, and the impact those settings would have on freedom and openness, for
both users and providers. By using OD as a test case while discussing various
settings with interested parties, Villa hopes to come out with a set of
definitions and licenses that, in many ways, parallel the Free Software and
Open Source definitions.
It is an issue that is much larger than the OD project and one that bears watching.
The biggest question, perhaps, is whether this is the "right" direction for
GNOME and for desktops in general. Is personal computing finally headed toward a
network-centric model, and are web applications, AJAX, and the like up to the
task? One is reminded of the wisdom of the Magic 8-Ball:
answer hazy, ask again later. One advantage that free software has over
some of its competitors is its diversity; we are certain to see other
implementations of an online desktop
(Pyro for example) as well as
desktops that resist the close integration to the web. Free software will
truly give users the ability to choose the one that works for them;
users of proprietary systems may not be so lucky.
Comments (14 posted)
When Greg Kroah-Hartman talked about the provenance of Linux kernel code at
the Ottawa Linux Symposium, one member of the audience asked whether
contributions from universities were tracked. The answer was that
universities are handled like any other source and tracked accordingly.
If code is contributed by somebody who works for the university (a faculty
member, in other words), the university is credited as having supported the
work. Contributions from students tend to be treated as "hobbyist" work,
but there are few significant contributors who fall into this category.
There is, in fact, very little code coming from the university environment
in general. Your editor was able to find exactly five files in the
2.6.23-rc1 kernel tree which contain a 2007 copyright credited to a university.
It was not always that way; universities used to be heavily involved in the
creation and distribution of free software (though it did not originally
carry that name). The BSD Unix distribution - the first to support virtual
memory and drive VAXen worldwide - came from the University of California at Berkeley.
Linux became the master's thesis for one Linus Torvalds. The X Consortium
grew out of a project at MIT - it was part of Project Athena, which was the
source of much interesting work. The GNU project has its roots at MIT as
well. Alan Cox did much of his crucial early Linux work while at Swansea
University. Ted Ts'o, another important early contributor, was based at MIT.
Looking further back,
graybeards among us will remember the influential WATFOR Fortran compiler
from the University of Waterloo. Much interesting work (and code) came
from the Andrew project at Carnegie Mellon University.
Two of your editors got their start at the University of Colorado
working with a project called Toolpack, creating Fortran developer tools;
their names can be found in this
old report [PDF]. The list goes on at some length. Over the years, we
have all been the beneficiaries of a great deal of creativity (and code) to
come out of the university environment.
While there are still interesting projects happening at universities, the
flow of code has nearly stopped.
This seems strange; one need not dig too far into the
curriculum at most computer science departments to find operating systems
classes using Linux as a teaching tool, but these same computer science
departments are, as a whole, not contributing back changes to that tool.
This is a large and rather unremarked-upon change in how free software
works; it would be interesting to understand what force is driving this change.
Your editor has spent a few weeks querying contacts in the academic world,
but the amount of useful information coming back is surprisingly small. An
"I don't know" answer from a computer science department chair was not
expected. So, rather than provide definitive answers, your editor will
have to engage in some definitive handwaving.
One obvious change is that the amount of code coming from the
corporate environment has grown from nearly zero to something huge.
As the proprietary software idea took over the industry, the idea that a
company would give away its code came to look similar to the notion of
opening up its bank account to all comers. At the same time, individuals
rarely had the resources to develop and contribute code themselves, and the
supporting community was not there. So universities were about the only
real source for freely-circulated software. Thanks to the culture of
openness in academia, passing that code around (and improving it) seemed
like a natural thing to do.
Unfortunately, that culture of openness has suffered somewhat in more recent
times. In many parts of the world, universities are able to privatize and
commercialize interesting work, even if that work was funded by public
money. University researchers have strong incentives to put their energy
(and their code) into startup companies instead of contributing that code
back to the community. Look, for example, at the story of the Stanford
Checker, which was initially built on gcc. Rather than contribute that
code, the developers created a private company (Coverity) to commercialize
it. The community has certainly benefited from Coverity's work, but we
still do not have a static analysis tool with anything near the power of
the erstwhile "Stanford Checker."
The same commercial forces almost certainly have the effect of drawing
effective developers out of the university environment. Talented students
who might once have gone on for advanced degrees or continued to work
within the university are likely to have plenty of more lucrative options
elsewhere. This will be especially true for those who have demonstrated
that they can create useful, production-quality code. So, perhaps, it is
not surprising that many of the most productive free software developers
are no longer found at universities.
Another disincentive for university contributors is that few free software
projects are interested in prototypical or overly experimental code. A
potential kernel contribution must be rock-solid, well-benchmarked, with
well-defined needs and users. A university project may explore an
interesting idea far enough to generate the required publications, but the
resulting code is likely to be far from ready for mainline inclusion. It
may well be that, for many university researchers, there is no real reason
to make the effort to get their code merged, even if the work would be
useful in a more practical environment. Funding agencies and tenure
committees do not normally consider community contributions when making their decisions.
Code contributed to the community also requires ongoing maintenance,
something which many university environments are not well prepared to
support. Graduate students move on to other challenges, and faculty go on
to the next project. It is hard to write a successful grant application
for maintenance work. So interesting code has a real chance of simply
being dropped once the research objectives have been achieved - or the
funding has run out.
So there are a number of reasons for the reduction in university
participation in the development process. That participation has certainly
not fallen to zero. We can thank the University of Michigan for much of
our NFSv4 code. A lot of USB work has come out of the Rowland Institute at
Harvard. Much of the early eCryptfs work happened at Stony Brook
University. The University of Waikato has contributed to the DCCP protocol
implementation. The Helsinki University of Technology has worked with the IPv6
code, as have the University of Tokyo and Keio University. These are just
a few recent contributions to the kernel; clearly, the scope of university
contributions to the community goes far beyond that. But these
contributions are buried by the code coming from other sources. For better
or for worse, the period when universities were the source of a large
portion of our free software code base would appear to have passed. But
that period left us with a strong foundation on which to build the systems
we have today.
Comments (96 posted)
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Cache poisoning vulnerability found in BIND; New vulnerabilities in bind, clamav, kernel, tcpdump...
- Kernel: 2.6.23 stragglers; SDIO; fault(); Still waiting for swap prefetch.
- Distributions: Skolelinux/Debian-Edu and LinEx; openSUSE 10.3 Alpha6, Gutsy Gibbon Tribe 3, Launchpad 1.1.7, easyfedora
- Development: Store data on paper with Twibright Optar,
new versions of Linux-HA, SQLite, Apache SpamAssassin, Tramp, RPM, PyKota,
RSBAC, Midgard, QjackCtl, openPlaG, KDE 4 for amd64, Ezstream, Mozilla
Thunderbird and SeaMonkey, Diet Tracker, GCC, FXT.
- Press: The Seeing Yellow campaign, the failure of ODF, BBC considers open-source
support, Xandros buys Scalix, UK Greens push for free software,
What Linspire Agreed To, Con Kolivas interview, reviews of Eclipse GMF,
Ubuntu's Landscape, Navicore on the N800, OLPC XO laptop, Mozilla Sunbird,
- Announcements: Akaza gets NIH grant, Marcus Rex becomes Linux Foundation CTO, new Linux Fund
Visa, Ingres joins Eclipse, new GIMP book, Best of SugarCRM, Study:
evaluate on Windows, deploy on Linux, classmate PC at aKademy, PgDay Portland,
Red Hat at GITEX, Software Freedom Day, Firefox Support knowledge base,