Documentation for free software is generally a problem area, both for users
and developers. But developers at least have the code to consult, whereas
most users are left poking around through menu items and consulting multiple
web pages. The FLOSS Manuals
project is using techniques similar to those used in free software
development to produce manuals for users.
The project seeks to create the kind of manuals that users may be accustomed
to from proprietary software packages. The project's About page describes the
manuals being produced:
FLOSS Manuals make free software more accessible by providing clear
documentation that accurately explains their purpose and use. Each manual
explains what the software does and what it doesn't do, what the interface
looks like, how to install it, how to set the most basic configuration
necessary, and how to use its main functions. To ensure the information
remains useful and up to date, the manuals are regularly developed to add
more advanced uses, and to document changes and new versions of the
software.
There is a wide variety of
manuals in progress, covering graphics and audio tools, OpenOffice,
Firefox, WordPress for blogging, and more. The most recent addition is a
set of eight manuals for the One Laptop Per Child XO. These were created
as part of an XO/Sugar book sprint held in August in Austin, Texas. The
manuals cover the XO hardware and Sugar interface as well as six different
activities that are included as part of Sugar.
The use of a "sprint" is just part of the adoption of free software
development strategies. The project is set up to allow for collaborative
development by a community. FLOSS Manuals describes it this way:
The manuals on FLOSS Manuals are written by a community of people, who do a
variety of things to keep the manuals as up to date and accurate as
possible. Anyone can contribute to a manual – to fix a spelling mistake,
to add a more detailed explanation, to write a new chapter, or to start a
whole new manual. The way in which FLOSS Manuals are written mirrors the
way in which FLOSS (Free, libre open source) software itself is written: by
a community who contribute to and maintain the content.
The manuals themselves are available in a variety of formats: HTML, PDF, as
well as dead tree. One of the more interesting features is the remix capability. Using an
AJAX interface, one can pick and choose from the
chapters of existing manuals to create a custom manual that includes only
the pieces required for some group of users. Remixers can choose their own
cover and title, then export it all as a PDF file. Alternatively, one can
cut and paste code to embed the remix into a web page. In this way, the
custom manual will always be up-to-date with the latest changes made to
the chapters.
FLOSS Manuals clearly fills a needed niche in the free software
world. The manuals have a clean, consistent
look that will immediately stand out to users. There is a lot of work
to be done, but it would appear that the project has made an excellent
start. As one might guess, it is always looking for more interested folks
to write, edit, and proofread manuals.
(Thanks to LWN reader David Farning for suggesting we look at this project.)
At the Linux Plumbers Conference Thursday,
Arjan van de
Ven, Linux developer at Intel and author of
PowerTOP, and Auke Kok, another Linux developer at
Intel's Open Source Technology Center, demonstrated a Linux
system booting in five seconds. The hardware was
an Asus EEE PC, which has solid-state storage,
and the two developers beat the five second
mark with two software loads: one modified Fedora and one modified Moblin.
They had to hold up the EEE PC for the audience,
since the time required to finish booting was less
than the time needed for the projector to sync.
How did they do it? Arjan said it starts with
the right attitude. "It's not about booting faster,
it's about booting in 5 seconds." Instead of saving
a second here and there, set a time budget for the
whole system, and make each step of the boot finish
in its allotted time. And no cheating. "Done booting
means CPU and disk idle," Arjan said. No fair putting
up the desktop while still starting services behind
the scenes. (An audience member pointed out that
Microsoft does this.) The "done booting" time did
not include bringing up the network, but did include
starting NetworkManager. A system with a conventional
hard disk will have to take longer to start up: Arjan
said he has run the same load on a ThinkPad and achieved
a 10-second boot time.
Out of the box, Fedora takes
45 seconds from power on to GDM
login screen. A tool called Bootchart,
by Ziga Mahkovec, offers some details. In a
Bootchart graph of the Fedora boot (fig. 1), the
system does some apparently time-wasting things.
It spends a full second starting the loopback
device—checking to see if all the network
interfaces on the system are loopback. Then there's
two seconds to start "sendmail." "Everybody pays
because someone else wants to run a mail server,"
Arjan said, and suggested that for the common
laptop use case—an SMTP server used only
for outgoing mail—the user can simply run ssmtp.
Another time-consuming process
on Fedora was "setroubleshootd," a useful
tool for finding problems with Security Enhanced
Linux (SELinux) configuration. It took five seconds.
Fedora was not to blame for everything. Some upstream
projects had puzzling delays as well. The X Window
System runs the C preprocessor and compiler on
startup, in order to build its keyboard mappings.
Ubuntu's boot time is about the same: two
seconds shorter (fig. 2). It spends 12 seconds running
modprobe running a shell running modprobe, which
ends up loading a single module. The tool for adding
license-restricted drivers takes 2.5 seconds—on
a system with no restricted drivers needed.
"Everybody else pays for the binary driver," Arjan
said. And Ubuntu's GDM takes another 2.5 seconds of
pure CPU time, to display the background image.
Both distributions use splash screens. Arjan and
Auke agreed, "We hate splash screens. By the time
you see it, we want to be done." The development
time that distributions spend on splash screens is
much more than the Intel team spent on booting fast
enough not to need one.
How they did it: the kernel
Step one was to make the budget. The kernel
gets one second to start, including all modules.
"Early boot" including init scripts and background
tasks, gets another second. X gets another second,
and the desktop environment gets two.
The kernel has to be built without initrd, which
takes half a second with nothing in it. So all
modules required for boot must be built into the
kernel. "With a handful of modules you cover 95% of
laptops out there," Arjan said. He suggested building
an initrd-based image to cover the remaining 5%.
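As a concrete (and hypothetical) illustration of the point, an initrd-free configuration would compile the boot-critical drivers into the kernel image; the option names below are from the mainline kernel of that era, but the exact set depends on the hardware:

```
# Boot-critical drivers compiled in (=y), not as modules (=m),
# so the kernel can mount the root filesystem without an initrd.
CONFIG_ATA=y
CONFIG_SATA_AHCI=y      # AHCI storage controller
CONFIG_EXT3_FS=y        # root filesystem driver
CONFIG_USB_UHCI_HCD=y   # USB host controller, initialized in parallel
```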
Some kernel work made it possible to do
asynchronous initialization of some subsystems.
For example, the modified kernel starts the Advanced
Host Controller Interface (AHCI) initialization,
to handle storage, at the same time as the Universal
Host Controller Interface (UHCI), in order to handle
USB (fig.3). "We can boot the kernel probably in
half a second but we got it down to a second and we
stopped," Arjan said. The kernel should be down to
half a second by 2.6.28, thanks to a brand-new fix
in the AHCI support, he added.
One more kernel change was a small patch to support
readahead. The kernel now keeps track of which blocks
it has to read at boot, then makes that information
available to userspace when booting is complete.
That enables readahead, which is part of the early boot process.
How they did it: readahead and init
Fedora uses Upstart
as a replacement for the historic "init" that
traditionally is the first userspace program to run.
But the Intel team went back to the original init.
The order of tasks that init handles is modified
to do three things at the same time: first, an
"sReadahead" process, to read blocks from
disk so that they're cached in memory, second,
the critical path: filesystem check, then the D-Bus
inter-process communication system,
then X, then the desktop. And the
third set of programs to start is the Hardware
Abstraction Layer (HAL), then the udev
manager for hot-plugged devices, then networking.
udev is used only to support devices that might
be added later—the system has a persistent,
old-school /dev directory so that boot doesn't depend on udev.
The arrangement of tasks helps get efficient use
out of the CPU. For example, X delays for about
half a second probing for video modes, and that's
when HAL does its CPU-intensive startup (fig. 4).
In a graph of disk and CPU use, both are
at maximum for most of the boot time, thanks
to sReadahead. When X starts, it never has to
wait to read from disk, since everything it needs
is already in cache. sReadahead is based on Fedora Readahead,
but is modified to take advantage of
the kernel's new list of blocks read.
sReadahead is to be released next week on moblin.org,
and the kernel patch is intended for mainline as
soon as Arjan can go over it with ext3 filesystem
maintainer Ted Ts'o. (Ted, in the audience, offered
some suggestions for reordering blocks on disk to
speed boot even further.)
There's a hard limit of 75MB of reads in order
to boot, set by the maximum transfer speed of the
Flash storage: 3 seconds of I/O at 25MB/s. So,
"We don't read the whole file. We read only the
pieces of the file we actually use," Arjan said.
sReadahead uses the "idle" I/O scheduler, so that if
anything else needs the disk it gets it.
With readahead turned off, the system boots in seven
seconds, but with readahead, it meets the target of five.
X is still problematic. "We had to do a lot
of damage to X," Arjan said. Some of the work
involved eliminating the C compiler run by re-using
keyboard mappings, but other work was more temporary.
The current line of X development, though, puts more
of the hardware detection and configuration into the
kernel, which should cut the total startup time.
Since part of the kernel's time budget is already
spent waiting for hardware to initialize, and it
can initialize more than one thing at a time, it's
a more efficient use of time to have the kernel
initialize the video hardware at the same time it
does USB and ATA. X developer Keith Packard, in the
audience and also an Intel employee, offered help.
Setting the video mode in the kernel would let
the kernel initialize it at the same time as
the rest of the hardware, as shown in figure 3.
The fast-booting system does not use GDM but boots
straight to a user session, running the XFCE desktop
environment. Instead of GDM, Arjan said later,
a distribution could boot to the desktop session of
the last user, but start the screensaver right away.
If a different user wanted to log in, he or she could
use the screensaver's "switch user" button.
In conclusion, Arjan said, "Don't settle for 'make
boot faster.' It's the wrong question. The question
is 'make boot fast'." And don't make all users wait
because a few people run a filesystem that requires
a module or sendmail on their laptops. "Make it
so you only pay the price if you use the feature."
Distributions shouldn't have to maintain separate
initrd-based and initrd-free kernel packages, he said
later. The kernel could try to boot initrd-free,
then fall back if for whatever reason it couldn't
see /sbin/init, as might happen if it's missing the
module needed to mount the root filesystem.
PowerTOP spawned a flurry of power-saving hacks
from all areas of the Linux software scene. The
combination of Bootchart, readahead, and a five-second
target looks likely to set off a friendly boot time
contest among Linux people as well. At the conference
roundup Friday, speaker Kyle McMartin announced that
both Fedora and Ubuntu have fixed some delays in
their boot process, and there was much applause.
FIGURE CREDIT: Arjan van de Ven and Auke Kok, Intel
Back in the early days of Linux, a developer wishing to meet his or her
peers at a conference had a relatively small number of alternatives. Two
of those - Linux Expo and the Atlanta Linux Showcase - were held in the
United States. But it has been a long time since the US has hosted a
serious developer-oriented conference - especially for developers who are
working on the lower layers of the system. The US-based conferences died
out as a result of a combination of a number of factors, including poor
management, competition from the
Ottawa Linux Symposium and (yes, really) LinuxWorld, and a feeling among
certain developers that becoming the next Dmitry Sklyarov would not be a
fun way to spend the rest of the year.
There is a certain appeal to overseas events, but that appeal fades more
quickly than one might expect. The need for long-haul travel also excludes
US-based developers who are unable to arrange funding. So, for some years,
the development community in the US has been wishing for a local
conference. More recently, a dedicated group of Portland-based developers
led by Kristen Carlson Accardi,
with some help from the Linux Foundation, decided to do something about
it. The result was the first edition of the Linux Plumbers Conference,
held September 17 to 19. Staging this conference in a world
which does not lack for conferences was a bit of a risk, and the organizers
added a few risks of their own to the mix. Looking back, your editor can
say that those risks were well repaid; the first Linux Plumbers Conference
was a great success.
The "plumbing" focus of this event was well chosen. While it is still
possible to run a system with a bare kernel and a shell as the
init process, Linux systems used for real work increasingly have a
layer of user-space software tightly wrapped around the kernel. Quite a
bit of kernel-based functionality only works properly in the presence of a
tightly-coupled user-space component; examples include system
initialization, 3D graphics, and much more. The kernel, along with its
collection of user-space software, makes up the "plumbing" layer which
makes everything else work. Kernel developers have had ample opportunities
to get together in recent years, but there has been no concerted effort to
bring together the developers for the full plumbing layer until now.
The other significant change made by the LPC organizers was to do away with
the "everybody delivers a paper" format used by most conferences. Instead,
the conference was planned as a series of 2.5-hour "microconferences," each
with a specific focus. Each microconference, which had its own "runner,"
was able to select its own mode of operation. They generally included a
certain number of presentations on relevant topics; in this sense, the
microconferences resemble the topic-specific tracks found at many academic
conferences.
Where things differ, though, is that most of the microconferences were explicitly
oriented toward discussion and problem solving. The best speakers did not
(just) talk about their own project; they raised challenges for the group
as a whole to address. It worked spectacularly well. Throughout the
event, your editor saw rooms full of people who were fully engaged in the
work at hand. The discussions had wide participation, most of the necessary
people were generally in the room, and there were relatively few bored
people checking email. And, most importantly, a lot of real work got
done. Developers came out of the sessions with a clear idea of what needs
to be done, agreement with others on how it was to be done, and, sometimes,
the beginnings of that work.
So, what did all of these developers talk about?
- Developers interested in storage talked about the iogrind tool and a
number of outstanding problems; some
notes from the session have been posted.
- The Audio microconference covered a wide range of issues; see this LWN article for a summary.
- A session on tracing saw presentations by developers of a number of
competing technologies, followed by a focused effort to design a
unified low-level shared relay buffer.
- The video input session, for all practical purposes, continued on and
off through the entire conference; that group of developers, which had
never met before, set in motion some major redesign efforts for the video input subsystem.
- The bootstrap and initialization session was dominated by Arjan van de
Ven's five-second boot
demonstration; having been given that challenge, developers from
multiple distributions set about the
task of getting their systems to boot quickly.
- A session on server management looked for solutions to a number of
challenges facing Linux administrators.
- Kernel/user-space APIs were the topic of another lively session which,
while perhaps concluding little, raised a lot of issues on how those
APIs should be designed.
- The power management session concluded that the suspend/resume problem
is solved ("if you disagree, you bought the wrong hardware") and made
progress on a number of other problems; now, they say, all that is
left is the coding.
- The "future displays" session pounded out the path toward kernel-based
graphics mode setting and quite a bit more.
- And the desktop integration session, while reaching "not a lot of
conclusions," examined a number of relevant issues; the discussion on
Upstart from that session will be covered here separately.
Beyond that, LPC attendees could choose from a handful of more traditional
presentations, a provocative
keynote from Greg Kroah-Hartman, a rather less provocative kernel
update from your editor, a git tutorial taught by some guy named Linus, and
no shortage of evening celebrations. All told, the Linux Plumbers
Conference was one of the most productive, interesting, and generally
worthwhile events your editor has been to in quite some time - and your
editor has been to rather more than the usual number of events. There will
be a lot of interesting developments kicked off by this gathering, once the
exhausted attendees get some rest. This conference is off to a good start.
And it is just a start; the organizers are already working on the 2009
edition. It will, once again, be held in Portland. The general format
will likely remain the same, but there will be no kernel summit before the
2009 event (the summit will be in October 2009 in Tokyo). Instead, there
is a reasonable chance that a more traditional, presentation-oriented
conference will be planned to coincide with the 2009 Plumbers Conference.
With this new event, the active local community, and the success of this
year's conference, LPC2009 looks promising already.
After 2009, the Plumbers team hopes to take a page from the linux.conf.au
playbook and pass the event onto a new set of volunteer organizers
somewhere else in North America. This form of organization has helped to
keep linux.conf.au vital and interesting for many years; it makes sense to
do something similar with the Linux Plumbers Conference. Now might be a
good time for any North American community which would like to host this
event in 2010 to start thinking about how it could be done.
Radio talk show and podcast host Leo
Laporte doesn't think operating systems or network infrastructures should
ever be proprietary. He's the host of The Tech
Guy radio show, which airs every weekend on stations around the United
States, and of FLOSS Weekly, a regular
podcast in which Laporte discusses different aspects of the Free, Libre, and
Open Source software community. On The Tech Guy show, Laporte answers
questions from computer users who call in to get advice and find ways to make
their computers run better. Most of his callers are Windows users, but Laporte
finds a way to mention Linux and other open source software during the course
of his show.
Laporte says he has been writing software for decades, and that he has always
shared the source code, even before he had a notion of open
source. "It was
public domain then. But even then, I understood that if you're programming,
the most interesting part is to see other people's code and be able to modify
it. That's just a natural way to work." His first shot at
installing Linux was
back in 1994 when he got his hands on a copy of Slackware. "It was
[rough], but it opened my eyes to the growing open source world."
At the time, Laporte was the host of a cable television show called Tech TV.
"We were the first television show to install Linux live."
On that show,
Laporte hosted some of the biggest names in FLOSS, including Linus Torvalds
and Richard Stallman, during Tech TV's run. "The longer I worked as a computer
journalist, the more obvious it became to me that proprietary software is a
bad idea. It's not natural to be secretive and it doesn't make sense." Laporte
says that especially in the enterprise, the technological infrastructure
should be open. "That should never be proprietary. Protocols, standards, and
code need to be open."
When it comes to applications, Laporte is a bit more flexible. "If you want to
write an app that is closed source, I can see there are reasons why one might
want to do that and that's fine with me. But closing the operating system
makes no sense, and it is bad for everybody."
Laporte, a Twitter user with over
fifty-five thousand followers, recently announced he would no longer use
Twitter, but would instead now throw his support behind
Laconica, the open source micro-blogging
platform on which Identi.ca is built. Laporte
spoke extensively about Laconica on FLOSS Weekly last month when he chatted
with Evan Prodromou, the original
author of Laconica and the person who maintains identi.ca.
"Laconica is identical to Twitter, but it's open, which is huge,
and more than open just in terms of it being open source." Laporte
says open standards are just as important in this case, and that the protocols
for micro-blogging should become commoditized so that others can build
on top of the infrastructure instead of having to start from
scratch. Laconica also offers users the option to release all their
micro-posts under a Creative Commons attribution license, making the service
about as "open as you could hope for," writes Dan Brickley, co-founder of the
Friend of a Friend project (FOAF).
With Laconica, different micro-blogging services can communicate with each
other since the platform is open, unlike Twitter's service. This makes it
possible for different communities to form their own branded services in which
users can still search for and follow users in other communities, tying them
together in what has become known as a "federation." Right now, Laconica is
a federation of disparate servers, whose users can all subscribe to each others'
updates. Laconica is built using the OpenMicroBlogging
specification, which is completely open, free, and independent of any one
central maintenance authority, unlike Twitter's proprietary protocol.
Laporte believes that this kind of federation, which could be called
distributed micro-blogging, is the key to overcoming scalability issues that
have plagued Twitter, resulting in frequent outages for the popular service.
"If you can't scale, that's another reason to have a more
distributed system. Maybe we shouldn't have two million people on one
Twitter. Maybe we
should have five thousand people on four hundred 'twitters.' I have three
thousand people on my system, and that's just about right."
Laporte's system is called the TWiT
Army, [Note that the web site is currently down]
named after another of his podcasts known as This
Week in Tech, or TWiT. "The conversation [there] has been very
cohesive. The conversation is with people you know. With Twitter, it
turns into a broadcast medium instead of a conversation. Now, it is a very
useful way to get a message out to all those people. But I would love to have
all those people all in their own communities, able to search across the
federation by keyword, and if I post something of interest they'll find out
about it."
Laporte says he is not trying to go "head to head" against Twitter. But he is
convinced that Laconica is a better way to do micro-blogging. "One of my
problems with Twitter is that I contribute a lot of content and they shut down
access to it. I want to be part of an open platform — that's where the
innovation is going to occur."
Laporte says that features Twitter previously offered but has shut down,
including instant messaging and Track, were
two of the most valuable features that Twitter offered. "Comcast realized a
huge value from Track," he says. Comcast customer service agents were tracking
Twitter posts to monitor complaints or issues posted by users, and then
following up directly with those people. "Twitter was saying, 'well it's too
demanding,' but the conspiracy theory is that they realize this is where the
real value of Twitter is and they want to try to monetize it." With Laconica,
Laporte says, these types of features can remain open and accessible, not
subject to the whims of proprietary ownership.
Laporte, Prodromou, and others including RSS pioneer
Dave Winer, are talking about a
collaborative effort to standardize and open the protocols for micro-blogging.
The group is planning a gathering
for all who are interested in the concept of open micro-blogging, called
BearhugCamp. Laporte says, "we would very much like to
encourage Twitter to become a part. The idea is to get all the
players to the table and encourage them to support the
Extensible Messaging and Presence Protocol
(XMPP) (developed by Jabber). We're creating
a new messaging medium with emerging open standards, in new and exciting ways.
It's not really about Twitter at all – Twitter gave us this idea of
micro-blogging, and now we're onto the next thing: let's make it open."
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Mobile phone or penetration tool?; New vulnerabilities in ed, firefox, mantis, phpmyadmin,...
- Kernel: e1000e and the joy of development kernels; LPC: The future of Linux graphics; New kernels and old SELinux policies.
- Distributions: LPC: Upstart 1.0 plans: manifesto for a new init; Information on the e1000e corruption bug; Foresight Kids Edition; openSUSE 11.1 Beta 1; Intrepid Alpha 6; Fedora 10 likely to slip again
- Development: LPC: Linux audio: it's a mess, Django Debug Toolbar intro, CapPython launched, new versions of zc.async, LiquiBase, PostgreSQL, SQLite, ZMySQLDA, WSO2, Rockbox, GNOME, X.org, gEDA/gaf, OpenSkyNet, wxWidgets, Wine, Elisa, Firefox, Parrot, PyKerberos, libpng, libSpiff, Harbour.
- Press: Google launches Android-based G1 phone, looking forward to LCA 2009, IBM's I.T. Standards Policy, Mozilla backtracks on Firefox EULA, Nokia Maemo to support 3G, Symbian on mobile Linux, VMware's new VirtualCenter, Linux Foundation seeks individual members.
- Announcements: CME Group joins Linux Foundation, EFF on Secret IP Enforcement, SGI further opens OpenGL license, Stanford professor on competing with free software, PyCon 2009 cfp, Demonstrating Open-Source Healthcare cfp, Web 2.0 SF cfp, Maker Faire RoboGames, PyCon Chicago, Medsphere becomes a forge, KDE UserBase wiki.