Bug reports from users are an important way for projects to find and
eventually fix problems in their code. For most Linux users, though, the
code from those projects makes its way to them via
distributions, so that's where they report any bugs they find. What
happens next is something of an open question, as a thread on the
debian-devel mailing list highlights. Should it be up to the bug reporter
to push the bug upstream if it is found to not be specific to the
distribution? Or should it be up to the package maintainer?
Brian M. Carlson started the thread by
noting that he has been frequently asked recently to forward bugs reported
to the Debian bug tracking system
(BTS) to upstream projects. For a number of reasons, he doesn't think
that's the right way to deal with these bugs:
I understand that maintainers' time is limited and that forwarding bugs
is not an enjoyable task. But I also understand that having a BTS
account for the upstream BTS of each of the 2405 packages I have
installed on my laptop (not to mention my other machines) is simply not
practical. I also don't have the benefit of the rapport that a
maintainer has with upstream and knowledge of upstream practices.
Essentially, Carlson is arguing that package maintainers already have the
requisite relationships and accounts to properly move the bug upstream,
whereas a regular user, even one who is fairly technically sophisticated,
will not. But, of course, it is the user who has the intimate knowledge of
the bug and can hopefully reproduce it or at least give the upstream
developers a better idea of how to. Inserting the package maintainer into
the middle of the bug triaging/fixing process may just slow things down.
Unfortunately, most bug tracking systems don't provide a way for
users without accounts to be copied on bug report traffic, which leads back
to the problem that Carlson identified: users aren't interested in creating
an account for each project's bug tracker; instead, they rely on their
distribution to push bugs upstream.
In addition, it may well be that the user doesn't really have the knowledge
necessary to interact with upstream, especially for distributions that
target less-technical users. Many of the bug reports that come into
any bug tracker (Debian's, other distributions', or the projects') are
inadequate for various reasons, so forwarding them or asking the user to
report them upstream is pointless—potentially counterproductive
because it annoys the upstream developers.
It comes down to a question of what the responsibilities of a package
maintainer are. Many seem to take a proactive approach by trying to
reproduce the problem and, if they can, reporting it upstream. Others,
though, are completely swamped by the sheer number of bugs that get
reported, finding it difficult to do more than try to determine if the bug
is Debian-specific and asking for it to be reported upstream if it isn't.
There is a large disparity in the size of the packages that are maintained
by Debian (or any other distribution), of course. Some smaller or
less-popular packages may allow their maintainers to keep up with the bugs and
shepherd those that deserve it upstream, while other, larger packages, like
the kernel, X, GNOME, KDE, LibreOffice, and so on, just don't have enough
people or time to do that. Thus it is very unlikely that there can be a
"one size fits all" solution; each package and maintainer may need their
own bug management style.
John Goerzen explained how the process
works for the Debian package of the Bacula backup program, noting that he
sits between the bug
reporter and the upstream project, but that it is often wasted effort:
I'm adding zero value here. Zero. It is a huge and frustrating waste of
my time. It is also frustrating for upstream, who would rather just talk
with the user directly and involve me if they think there's a
Debian-specific question. I don't understand why some users want it to go
this way, but many clearly do despite the fact that they get worse service.
Being a "copy and paste
monkey between emails and web forms" is not what he wants to be
doing. He points out that various unnamed major packages tend to just
ignore many of their bug reports for years at a time. Overall, he doesn't
think that being a maintainer should necessitate being a bug intermediary:
But really, I think it is
nonsensical for an end user to expect me to do this because the user
doesn't want to spend 30 seconds creating an account on an upstream
BTS. That's not what Free Software is all about. And it has Debian
maintainers wasting time.
I think that promising that Debian maintainers will always shepherd bugs
upstream is promising something we don't actually deliver on very well, and
probably never have. Perhaps we should stop promising it.
Ben Finney (and others) see it as, at least partially, a technical problem
in the bug tracking software. While requiring bug reporters and those
copied on the
bug comments to be registered in the system may make sense—for
reducing spam problems if nothing else—it definitely puts up a
barrier for users:
Quite the opposite, from my position. A great feature of the Debian BTS
is that any user can interact with it through a standard interface
without maintaining multiple separate identities; heck, without having
to create a single new identity at all.
Having to create and maintain (or fail to maintain) identities with
balkanised upstream BTSen is bad enough as a package maintainer. As a
mere user of all the other packages on my computers, I consider it too
high a barrier.
Finney looks at the problem as an opportunity to try to find better
technical solutions. There may be ways to enhance the Debian BTS to file
bugs upstream and/or CC the Debian bug reporter on upstream questions, as
Don Armstrong suggests.
Ubuntu's Launchpad has made some efforts to link downstream and upstream
bugs to address some parts of the problem, but it is more than just a
technical problem as various posters pointed out.
Many upstream projects are largely uninterested in bugs reported against
older versions of their code. Unless the bug can be reproduced in the
latest release—or sometimes the latest commit to the project's source
code repository—it may not be investigated at all, even if the bug is
reported upstream. It is not just package maintainers that have far more
work, and bugs, than they can possibly deal with.
But distributions have something of a contract with their users to support
the package versions that make up the distribution release. It can be
very difficult to have a user test with updated versions of the code in
question to try to reproduce the problem. Bugs that can be easily
reproduced are obviously much easier to handle, but they are also,
seemingly, much less common.
Fedora has struggled with similar issues, including discussions last July
on the fedora-devel
mailing list; undoubtedly other distributions have as well. Also, it is
not only the distribution and user side that suffers, as there have been
cases where bugs and fixes known to downstream
distributions haven't made their way upstream. It would seem that there is
an opportunity for projects and distributions to work more closely
together, but that won't be easy given the workloads on both ends of the
relationship.
Bug reports are a difficult problem in general, as good reports are
sometimes hard to find. There may also be a few layers between the
reporter and someone who might be able to actually fix the problem, which
can cause all manner of frustrations for anyone involved. On the bright
side, though, the situation is far better than the proprietary software
world, where reporting a bug may require paying for a "trouble ticket" and
waiting for a release that actually addresses it. At least free software
allows an interested hacker to poke around in the code and fix it for
themselves (or their customers).
The Cr-48 is, according to Google, the "first of its kind - a
notebook built and optimized for the web." It is the next step in the
promotion of Chrome OS, Google's other Linux-based distribution. As a way
of showing off what it has accomplished and building interest in the
system, Google has distributed Cr-48 machines widely. Your editor was a
lucky, if late, recipient of one of these devices; what follows are his
impressions after some time playing with it. The Cr-48 and Chrome OS are
an interesting vision of where computing should go, even if that vision is
not one that everybody will share.
The hardware itself is quite nice at a first glance. This machine is not a
netbook; it is a small notebook device which clearly has taken some
inspiration from Apple's hardware. Except, of course, that Apple's
machines are not jet black, with no logos or markings of any type. It
exudes a sort of Clarke-ian "2001 monolith" feel. There's an Intel Atom
dual-core processor, 2GB of memory, and a 16GB solid-state drive. The
silence of the device is quite pleasing; also pleasing is the built-in 3G
modem, which comes with 100MB/month of free traffic by way of Verizon (which, unsurprisingly, is
more than prepared to sell you more bandwidth once that runs out). Other
connectivity includes WiFi and Bluetooth (though there appears to be no way
to use the latter); there is no wired Ethernet port. There's
a single USB port, an audio port, a monitor port, and what appears to be an SD
card reader. Battery life is said to be about eight hours. Despite
the small disk, it's a slick piece of hardware.
Using Chrome OS
The operating system and the hardware work nicely together. A cold boot
takes a little over ten seconds; suspend and resume are almost
instantaneous. In normal use, one simply lifts the lid and the system is
ready to go; by default, the system does not even request a password at
resume time if
somebody is logged in - a setting that security-conscious users may want to
change. There is
a large trackpad with some simple multitouch
capability. Interestingly, there is no "caps lock" key; Google, in its
wisdom, replaced it with a "search" key. Happily, Google was also wise
enough to allow the key to be remapped by the user; it can be restored to
caps lock or, instead, as $DEITY intended, set to be a control key. Where
one would expect to find the function keys are more web-centric buttons:
Google has dedicated keys to operations like "back," "forward," and
"reload." Of course, they're really just function keys underneath as far
as the X server is concerned.
The system software is Linux-based, of course, but there's no way for a
casual user to notice that. The core idea behind Chrome OS is that
anything of interest can be had by way of a web browser, so that's all you
get. Like an Android phone, the system starts by asking for the user's
Google account; everything after that is tied to that account. Email is to
be done via GMail (there appears to be no way to read mail directly from an
IMAP server), document editing with Google Docs, conferencing with
Google Talk, and so on. Like an Android phone, a Chrome OS device is
meant to be a portable front-end to Google-based services.
That is why the Cr-48 comes with such a small SSD; very little is stored
there beyond the operating system image itself, and that image is small.
Most of the space, in fact, is set aside for a local cache, but it's
entirely disposable; everything of interest lives in the Google "cloud."
So if, as the startup tutorial says, the device succumbs to an
"unexpected steamroller attack," nothing is lost except the
hardware. The user can sign onto a new device and everything will be there.
The appeal of this arrangement is clear: no backups, no lost data, no
hassles upgrading to a new machine. Just browse the web and let Google
worry about all the details. Of course, there are some costs; the Cr-48
can do almost nothing which cannot be done via the web. There is no way to
get a shell (though see below) and no way to install Linux applications.
Even updates are out of the user's hands: they happen when the Chrome OS
Gods determine that the time is right.
There is a "web store" where browser-based applications can be had. At
this time there is a surprising variety of them, almost all of which are
free of charge. The application selection still falls far short of what is
available with a standard Linux distribution or on Android, though. It's
also not at all clear how many (if any) of these applications are actually
free software. The "no local installations" philosophy means that Chrome
browser plugins (which hook into the browser at a lower level than
"applications" do) cannot be installed; that, in turn, means that any
application which requires a plugin, while usable on regular Linux or
Windows, is not installable on Chrome OS. It turns
out that quite a few web store applications need plugins; annoyingly, the
only way to find out if any given application can be installed is to try.
Your editor wanted to take a screenshot or two of the system in operation.
The store offers a few screenshot applications, one provided by Google itself. The Google
tool, though, needs a plugin and thus refused to install. An alternative
tool did install, but its "save" button, needing a plugin, was not able to save the
result anywhere. The application could, though, "share" the screenshot
through any of a number of web services - though the image itself (to your
editor's surprise) is stored on the web site of the company providing the
screenshot application. Something as simple as taking a screenshot should not be so
hard - and it should not broadcast screenshots to the world by default.
Under the hood
The Cr-48 is a locked-down system. Its firmware will only load
Google-signed images, so it's not possible for the user to make any
changes. The root filesystem is mounted read-only. The whole verified
boot mechanism is designed to ensure that the device's software has not
been compromised and that the user can trust it. That said, the design
goals are also expressed this way:
It is important to note that restraining the boot path to only
Chromium-project-supplied code is not a goal. The focus is to
ensure that when code is run that is not provided for or maintained
by upstream, that the user will have the option to immediately
reset the device to a known-good state. Along these lines, there is
no dependence on remote attestation or other external
authorization. Users will always own their computers.
The way this works on the Cr-48 is through a "developer switch," which is
cleverly hidden behind a piece of tape inside the battery compartment. The
instructions describe a lengthy series of events that will happen when that switch is
flipped, including a special warning screen and a five-minute delay while
the system cleans up any personal data which may be cached locally. What
actually happened was a warning that the system is corrupted; hitting
control-D at that screen did manage to boot the system into developer mode.
Developer mode looks much like the regular operating mode with one
exception: the other virtual consoles are now enabled, allowing the user to
get to a shell and explore the system a bit. The system, it turns out, is
based on a 2.6-series kernel; it's said to be based on
Gentoo, but any such
parentage is hard to find. It uses the trusted platform module for
integrity measurement, but it does not appear to be using the IMA or EVM
modules shipped with the mainline kernel. The devtmpfs filesystem is used
to populate /dev.
The system uses the ext3 filesystem for local data storage. There are two
sets of root filesystem partitions; one is in use while updates are loaded
into the other. It also uses eCryptfs to store user-specific data; in
theory that means that such data is safe from prying eyes when the user is
not actually logged into the system.
Given access to developer mode, one can go as far as installing an entirely
new operating system on the device. The instructions
for doing so are intimidating at best, though; Google has not gone out
of its way to make displacing Chrome OS easy. Your editor will probably
give it a try at some point, but the job did not look like something which
could be done within any sort of deadline. It sure would have been nice if
the system could just boot from an external device.
What it's good for
The appeal of a system like this is easy enough to understand. Here is a
computer which can access all kinds of web-based services, never needs to
be backed up, is highly malware-resistant, and which can be easily
replaced. It could be handed to one's children with minimal fear of the
consequences, and it is easily operated by people who are intimidated by
any sort of system management task. A Chrome OS device is the
contemporary equivalent of an X terminal; it is little more than a
window into services which are managed elsewhere.
Your editor, who is not afraid to manage his
systems, and who prefers more control over his data, does not find this
approach to computing to be hugely attractive. It is not useful for
software development at all, and the things it can do are contingent on
having network access. Google Docs might be able to handle a presentation,
but the idea of depending on a conference network to be able to give a talk
is frightening. There are those of us who will always want our systems to
be more self-contained and locally controlled.
That said, such machines are not without their applications. Thousands of
people, it seems, have had
their laptops searched at the US border; your editor, who crosses that
border frequently, has not, yet, had that experience. Should it ever come
to pass, it might be nice to have a laptop which contains no local data at
all. A throwaway Google account could be used for plausible deniability,
and, in the unlikely case of a border agent who knows about the developer
switch, any user-specific data on the system (which is encrypted anyway)
should be gone by the time it becomes accessible. "Data in the cloud"
systems have security concerns of their own (it would be nice if a
Chrome OS system could be backed up by providers other than Google,
for example), but
there are times when having all of one's data be elsewhere can be an advantage.
The locked-down nature of Chrome OS is thus not without its value, but
locked-down is only good as long as the owner wants things that way. The Chrome OS
documentation suggests that Google wants all devices to include a
developer switch. In the real world, it would be unsurprising if some
vendors somehow never quite got around to adding that switch. Without full
access, one of these laptops becomes something more like a television:
useful for displaying content, but something short of a real computer.
Chrome OS is clearly not meant to be a "real computer" of the sort that LWN
readers are likely to want. The target user base is different, to say the
least. As such, it is an interesting exercise in what can be done to
package Linux for other classes of users. At the beginning of the year,
your editor predicted that Chrome OS
would struggle; who wants such a limited system when a real computer can be
so easily had? Based on this experience, your editor is not quite ready to
change his mind, but he is willing to admit that Chrome OS may be the
experience some people are looking for.
Michael Kerrisk's (relatively) new book, The Linux Programming
Interface (TLPI), is targeted at Linux system programmers, but it is not
just those folks who will find it useful. While it is a hefty tome ("thick
enough to stun an ox" as Laurie Anderson might say), it is eminently
readable, both by browsing through it or by biting the bullet and reading
it straight through. The coverage of the Linux system call interface is
encyclopedic, but the writing style is very approachable. It is, in short,
an excellent reference that will likely find its way onto the bookshelves
of user-space developers and kernel hackers—including some who
aren't necessarily primarily focused on Linux.
Kerrisk has been the maintainer of the Linux man pages since 2004, which
gives him a good perspective on the Linux API. As he says in the preface,
it is quite likely that you have already read some of his work in sections
2, 3, 4, 5, and 7 of those pages. But the book is not a
collection of man pages, though it covers much of the same ground. The
style and organization is much less dry, and more explanatory, than a
typical man entry.
The book is some 1500 pages in length, which makes it a rather daunting
prospect to review. Once I started reading it, though, it was quite
approachable. Kerrisk's clear descriptions of various system calls and
other parts of the Linux API made it easy to keep reading. I set out to
pick and choose certain chapters to read, and just skim the others, but found
myself reading quite a bit more than that—which might partially
explain the lateness of this review.
The book is organized into 64 chapters of around 20 pages each, which makes
for nice bite-sized chunks that allow for reading the book around other
activities. While the focus is on Linux, Kerrisk doesn't neglect other Unix varieties and
notes where they differ from Linux. He also pays careful attention to the
various standards that specify Unix behavior—like POSIX and the Single
Unix Specification (SUS)—pointing out where Linux does and does not
follow those standards.
TLPI was written for kernel version 2.6.35 and glibc 2.12. In the text,
though, Kerrisk is careful to indicate which kernel version introduced a
new feature, so that those working with older kernels will know which they
can use. While it is primarily looking at the 2.6 series, 2.4 is not
neglected, and the text notes features that were introduced at various
points in the 2.4 kernel history.
The book starts with a bit of history, going all the way back to Ken
Thompson and Dennis Ritchie and then moving forward to the present, looking
at the various branches of the Unix tree. It then moves into a description
of what an operating system is, the role that the kernel plays, and some of
the overarching concepts that make up Unix (and Linux). While this
information may be unnecessary for most Linux hackers, it will come in
handy for those coming to Linux from other operating systems. The
ideas that "everything is a file" and that files are just streams of bytes
are described in ways that will quickly get a system programmer up to speed
on the "Unix way".
After that introductory material, Kerrisk launches into the chapters that
cover aspects of the system call interface. This makes up the vast
majority of the book and each of these chapters is fairly self-contained.
They build on the earlier chapters, but the text is replete with references
to other sections. In the preface, Kerrisk says that he attempted to
minimize forward references, but that clearly was a difficult task as there
are often as many forward as backward references in a chapter.
Navigating within the book is easy to do because there are frequent section
and subsection headings, along with the chapter number on each page. Other
technical books could benefit from that style. There is also an almost too
detailed index that runs to more than 50 pages.
Each chapter comes with sample code that is easy to read and understand.
Importantly, the examples also do a good job of demonstrating the topic at
hand and some of them could be adapted into useful utilities. The code is
available from the TLPI web site and is
free software released under the Affero GPLv3. Each chapter also has a
set of exercises for the reader, some of which have answers in one of the
appendices.
So, what does the book cover? It would be easy to say "all of it", but
that would be something of a cop-out, and a bit inaccurate as well. There
are multiple chapters on files, file I/O, filesystems, and file attributes,
extended attributes, and access control lists (ACLs). There is a chapter
covering directories and links, as well as one that looks at the inotify
file event notification call.
There are multiple chapters on processes, threads, and signals, as well as
chapters covering process
groups and sessions, and process priorities and scheduling. Of particular
interest to me were a chapter on writing secure privileged programs and one
on Linux capabilities. There are two chapters on shared libraries, the
first of which is more about the ideas underlying libraries and shared
libraries along with how to build them, rather than the dlopen() library
call (and friends), which is covered in the second of those chapters.
There are, perhaps, too many chapters covering interprocess communication
(IPC), with separate chapters devoted to each System V IPC mechanism
(shared memory, message queues, and semaphores). There is also a chapter
for each of the POSIX variants of those three IPC types. Both POSIX and
System V IPC get their own
introductory chapter in addition to the chapters focusing on the details of
each type. Sandwiched
between the System V and POSIX IPC mechanisms are two chapters on
memory mapping and virtual memory operations that might have been better
placed elsewhere in the book. There is
also a chapter devoted to an introduction to IPC and one that looks at the
more traditional Unix pipes and FIFOs. In all, there are twelve chapters
on IPC before we even get to the sockets API.
After IPC comes a chapter on file locking followed by six chapters
covering sockets. Those chapters look at Unix and internet domain sockets,
along with server design and advanced sockets topics. The book wraps up
with a chapter on each of terminals and pseudoterminals, with something of
an oddly placed "Alternative I/O Models" chapter in between them. It's an
interesting chapter, covering select(), poll(),
epoll(), signal-driven I/O, and a few other topics, but it seems
weird where it is.
There is more, of course, and looking at the detailed table of
contents will fill out the list. One thing that stands out from the
book is the vast size of the Linux/Unix API. It also points out some of
the warts and historical cruft that is carried along in that API. Kerrisk
is not shy about noting things like that where appropriate in the text:
"In summary, System V message queues are often best avoided."
There were two specific topics that I had looked forward to reading about,
but they were only marginally covered by the book. The first is containers and
namespaces, which are very briefly mentioned in a discussion of the flags
to the clone() system call. A more puzzling omission is that
there is almost
no mention of the ptrace() system call. In the few places it does
come up, readers are referred to the ptrace(2) man page.
There are certainly other parts of the Linux API that could have been
covered, beyond the system call interface—sysfs, splice(),
and perf come to mind—but Kerrisk undoubtedly needed
to draw the line somewhere. Overall, he did an excellent job of that.
Technical books, especially those covering Linux, have a tendency to get
stale rather quickly, but TLPI shouldn't suffer from that as much as a kernel
internals book would, for example. There should really only be additions
down the road as the user-space API is maintained by the kernel developers
"forever", but updates will presumably need to be made eventually.
There are a handful of additional complaints I could make about the book,
but they are
all quite minor, as were those mentioned above. The biggest nit is that the
"asides" in the text, which are numerous, are really often much more than
just asides. Each is set off from the rest of text, indented and rendered
in a slightly smaller font (which is typographically a bit annoying to
me), and are meant to contain additional information that is not
necessarily critical to understanding the topic. In my experience, though,
many of them might best have been worked into the main text. See what I
mean about minor complaints?
This is a book that will be useful to application and system-level
developers, primarily, but there is much of interest for others as well.
Kernel hackers will find it useful to ensure their new feature (or fix)
doesn't break the existing API. Programmers who are primarily targeting
other Unix systems may also find it useful for making their code more
portable. I found it to be extremely useful and expect to return to it
frequently. Anyone who has an interest in programming for Linux will likely
feel the same way.
Page editor: Jonathan Corbet