The afternoon discussion on the second day of the 2011 Kernel Summit
covered a wide range of topics, including shared libraries, failure
handling, the media controller, the kernel build and configuration
subsystem, and the future of the event itself.
Writing sane shared libraries
Lennart Poettering and Kay Sievers, it seems, have grown tired of dealing
with the mess that results from kernel developers trying to write low-level
user-space libraries. So they proposed and ran a session intended to
convey some best practices. For the most part, their suggestions were
common sense:
- "Use automake, always." Nobody wants to deal with the details of
writing makefiles. Automake is ugly, but the ugliness can be ignored; like
democracy, it is messy but it is still the best system we have.
Nobody, they said, wants to see a kernel developer's makefile
creativity. There is a bit of a learning curve, but, it was
suggested, it should be well within the capabilities of somebody who
has mastered kernel programming.
- Licensing: they recommended using LGPLv2 with the "or any later
version" clause. This drew some complaints from developers who
thought they were being told which license to use for their code. It
is really just a matter of compatibility with other code, though; a
GPLv2-only (or LGPLv2-only) library doesn't mix with many other licenses, so
distributors may have a hard time shipping it.
- Never have any global state. Code should also be thread aware, "but
not thread-safe." Thread-level locking can create problems at
fork() time, so it is best avoided, especially in low-level
libraries. GCC constructors should be avoided for the same reason.
- Files should be opened with O_CLOEXEC, always. There is no telling
when another thread might fork() and exec() and carry a library's file
descriptors along into the new program.
- Basic namespace hygiene: no exporting variables to applications, use
prefixes on all names, and use versioned symbols wherever a library is
meant to serve as a drop-in replacement. It is also best to use naming
conventions that application developers will expect.
- No structure definitions in header files; they will only cause trouble
when the ABI evolves in the future. Keeping structures opaque behind
accessor functions (see the sketch after this list) avoids that problem.
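A minimal sketch of how several of these points fit together; the library
name ("foo") and its functions are hypothetical, not taken from any posted
skeleton:

    /* foo.h: public header with a prefixed, opaque type; no structure
       definition is exposed to applications. */
    typedef struct foo_context foo_context;

    foo_context *foo_new(void);
    void foo_free(foo_context *ctx);

    /* foo.c: all state lives in the context object (no globals), and
       file descriptors are opened with O_CLOEXEC. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct foo_context {
        int fd;
    };

    foo_context *foo_new(void)
    {
        foo_context *ctx = calloc(1, sizeof(*ctx));

        if (!ctx)
            return NULL;
        ctx->fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
        if (ctx->fd < 0) {
            free(ctx);
            return NULL;
        }
        return ctx;
    }

    void foo_free(foo_context *ctx)
    {
        if (!ctx)
            return;
        close(ctx->fd);
        free(ctx);
    }

Since applications only ever see a pointer to foo_context, fields can be
added to the structure later without breaking the ABI.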
Kay and Lennart had more suggestions, but their time ran out. They have
developed a skeleton
shared library that they intend to post for developers to work from when
creating these libraries in the future.
Failure handling
Roland Dreier ran a session on failure handling; his core point was that
error paths in the kernel tend to be buggy, and those bugs are a problem:
what should be a recoverable error turns into a system crash
instead. But, since error paths tend to get a lot less testing, we end up
shipping a lot of those bugs. He noted that he lacked any sort of
silver-bullet fix for the problem; his hope was that the group would fill
that in during the talk.
Roland's examples were mostly in the filesystem and storage area. He noted
that unplugging a block device can still bring the system down. The
interfaces in that area, he said, approach a score of -10 on the famous
Rusty Scale: they are nearly impossible to get right. A number of
filesystems also run into all kinds of problems if memory allocations
fail. It would be nice to do a better job there, he said.
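To see why such paths are hard to get right, consider the idiom kernel
code typically uses to unwind partially-completed setup; everything here
is a hypothetical illustration, not code from the session:

    /* Each failure must undo exactly the steps that have already
     * succeeded, in reverse order; a missing or misordered label
     * turns an error return into a leak or a crash. */
    static int sample_setup(struct sample_dev *dev)
    {
        int err;

        dev->buf = kmalloc(SAMPLE_BUF_SIZE, GFP_KERNEL);
        if (!dev->buf)
            return -ENOMEM;

        err = request_irq(dev->irq, sample_interrupt, 0, "sample", dev);
        if (err)
            goto err_free_buf;

        err = sample_register(dev);
        if (err)
            goto err_free_irq;

        return 0;

    err_free_irq:
        free_irq(dev->irq, dev);
    err_free_buf:
        kfree(dev->buf);
        return err;
    }

Since the failing branches are almost never exercised, a mistake in that
unwinding code can easily ship unnoticed.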
Work is being done in that area. Dave Chinner noted that xfs is slowly
getting a full transaction rollback mechanism into place; once that
happens, it will be possible to return ENOMEM to user space for almost any
operation that fails with memory allocation errors. Whether that is a good
thing is another question: applications tend not to be on the lookout for
that kind of failure, so better out-of-memory handling in the filesystems
could turn up a lot of strange bugs in user space. Ted Ts'o said that he
is more than open to patches improving allocation failure handling in ext4,
but, in practice, those bugs tend not to bite users. What happens instead
is that the out-of-memory killer starts killing processes before
allocations start to fail within the filesystem.
Andrew Morton reminded the room that the kernel does have a nice fault
injection facility that works well for the testing of error paths. But, he
said, nobody bothers to use it. Meanwhile, in the real world, people are
not hitting bugs related to memory allocation failures.
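For the curious: that facility is driven through debugfs. A small sketch
of its use, assuming a kernel built with CONFIG_FAILSLAB and
CONFIG_FAULT_INJECTION_DEBUG_FS and debugfs mounted in the usual place,
would make one in ten eligible slab allocations fail:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a value into one of the failslab debugfs knobs. */
    static void write_knob(const char *path, const char *value)
    {
        int fd = open(path, O_WRONLY | O_CLOEXEC);

        if (fd < 0) {
            perror(path);
            return;
        }
        if (write(fd, value, strlen(value)) < 0)
            perror(path);
        close(fd);
    }

    int main(void)
    {
        write_knob("/sys/kernel/debug/failslab/probability", "10");
        write_knob("/sys/kernel/debug/failslab/interval", "1");
        /* No limit on how many failures may be injected. */
        write_knob("/sys/kernel/debug/failslab/times", "-1");
        return 0;
    }

The same knobs exist for page allocations (fail_page_alloc), so error
paths can be exercised without pulling any hardware.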
Alan Cox asserted that the design of the block layer is wrong, that it
destroys data structures too soon when a device is removed. In fact, the
layer was designed to do the right thing: it tries to keep the relevant
structures around until there are no more users of them. The problem is
that all the reference counting logic was added late in the game -
pluggable devices were not an issue when a lot of that code was written -
and the job has not been done completely or well. There will be a fair
amount of work required to fix things properly; after some discussion, it
was agreed that the design of an improved block subsystem could be worked
out over email.
Media controller
Mauro Carvalho Chehab talked for a bit about the media controller subsystem. His main point
was that, while Video4Linux2 is the first user of the media controller, the
two are not equivalent. The media controller is a generic API that allows
the configuration of data paths from user space; it is applicable in a
number of places. There is currently interest in using it in the sound,
fbdev, graphics, and DVB subsystems; thus far, only ALSA has preliminary
patches available.
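As a hint of what the API looks like from user space, here is a minimal
sketch that enumerates the entities in a pipeline; the device node path
is an assumption, and the ioctl() and structure come from <linux/media.h>:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/media.h>

    int main(void)
    {
        struct media_entity_desc desc;
        int fd = open("/dev/media0", O_RDONLY | O_CLOEXEC);

        if (fd < 0) {
            perror("/dev/media0");
            return 1;
        }
        memset(&desc, 0, sizeof(desc));
        /* Setting this flag asks for the next entity after desc.id. */
        desc.id = MEDIA_ENT_ID_FLAG_NEXT;
        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &desc) == 0) {
            printf("entity %u: %s\n", desc.id, desc.name);
            desc.id |= MEDIA_ENT_ID_FLAG_NEXT;
        }
        close(fd);
        return 0;
    }

MEDIA_IOC_ENUM_LINKS and MEDIA_IOC_SETUP_LINK then expose and rewire the
connections between those entities.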
An upcoming challenge for the media controller is the advent of
programmable hardware, where the entities and their connections can now
come and go dynamically.
There was a question about integration with GStreamer; the answer is that
they have different domains (software for GStreamer, hardware for the media
controller), but that the media controller developers have tried to at
least match their terminology with that used by GStreamer. It wasn't
really discussed in the room, but the idea of using GStreamer pipeline
notation to configure hardware via the media controller seems like it could
be interesting: that sort of configuration is currently only done by
proprietary applications that understand a specific piece of hardware. It
would not be easy, but making the configuration more generic in this manner
could make the whole thing more accessible to users.
Kbuild and kconfig
Michal Marek is the current maintainer of the kernel build system. He took
that position, he said, because nobody else wanted it; the room gave him a
round of applause for having stepped up to the job. The system "more or
less works," he said, but there are always things that could be done
better. He had a small to-do list to start with, but hoped that the group
would tell him about other things they would like to see improved.
For the most part, the discussion covered various little glitches and
annoyances. The one substantive discussion had to do with dependency
resolution. Anybody who has spent time configuring kernels knows how
irritating it can be when a specific option cannot be enabled because one
of its dependencies is missing; nobody disagreed with the notion that it
would be nice to turn on dependencies automatically instead of forcing
developers to go digging through the source to figure out what is missing.
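The shape of the problem is easy to show with a made-up Kconfig fragment:

    config FOO_DRIVER
            tristate "Hypothetical Foo driver"
            depends on I2C && PM
            select CRC32

The "select" will force CRC32 on automatically, but the "depends on" line
means FOO_DRIVER does not even appear as an option until the user has
tracked down and enabled I2C and PM by hand.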
There is a Summer
of Code project out there which hooks a SAT solver into the kernel
configuration system to automatically figure out dependencies, but it
hasn't gotten a lot of attention on the lists. Michal will try to pull
that code in and see what happens with it. There was a fair amount of talk
about whether the solver is overkill and whether it might bog down the
kernel build process; Linus noted that the history in this area is not
entirely pleasant. He seemed a bit frustrated that this problem has been
discussed many times, but no solution has emerged yet. A patch is said to
be coming soon; perhaps, this time, the necessary pieces for a real
solution will be there.
The future of the Kernel Summit
Kernel Summit program committee member Steve Rostedt ran a session on the
organization of the event itself. The format of the summit was changed a
bit for 2011; did those changes work out? Additionally, he said, finding
suitable topics for the summit has gotten harder over the years; there
aren't that many things that are of interest to the whole crowd. That, he
said, is why we end up talking about things like kbuild and git.
The discussion was unstructured and hard to summarize. Everybody agrees
that minisummits (which made up the first day of the event this year) are a
good thing, but it's not entirely clear if they should all be brought
together with the kernel summit or not. The closed session clearly has
some value and will probably continue to exist in some form, even though
Linus said he didn't think it worked all that well. The practice of
bringing in high-profile users - a common feature at previous summits - may
not return; if nothing else, the increasing presence of companies like
Google at the summit ensures that there is plenty of visibility into the
problems of large data centers.
It is hard to say what changes will come to the summit next year. About
the only thing that had widespread agreement was that more unstructured
time (including the return of the hacking session) would be a good thing,
as would more beer (not that beer has been in short supply in Prague).
Closing
The day concluded with elections for the Linux Foundation's technical
advisory board (TAB) and a key-signing party. The TAB election saw late
candidacies by James Morris and Mauro Carvalho Chehab, but, in the end, the
five incumbents (Alan Cox, Thomas Gleixner, Jonathan Corbet, Theodore Ts'o,
and Greg Kroah-Hartman) were re-elected for another two years. (The other
board members, whose terms end next year, are James Bottomley, Chris Mason,
John Linville, Grant Likely, and Hugh Blemings). The key signing will,
with any luck, result in a core web of trust that can be used to secure
access to kernel.org and to verify pull requests.
The attendees then rushed off for some unstructured time with beer while
surrounded by suits of armor in a downtown Prague restaurant.