ELC/LFCS2009: A tale of two panels
Two kernel panel sessions were held last week in San Francisco, one for each of two conferences sharing facilities—and participants. In both cases, the kernel developers making up the panel were asked about various kernel features and developments, both from a historical and future perspective, but each had its own focus as well. The Embedded Linux Conference (ELC) panel was, unsurprisingly, focused on topics of most interest to the embedded community, while the Linux Foundation Collaboration Summit (LFCS) panel looked at more mainstream kernel concerns.
ELC: Embedded Linux Kernel Features and Developments
Besides the venue, the panel sessions also had another thing in common: LWN Executive Editor Jonathan Corbet, who moderated the LFCS panel and sat on the ELC one. The ELC panel was moderated by CE Linux Forum (CELF) architecture group chair Tim Bird, while embedded maintainer David Woodhouse and Matt Mackall, developer of the SLOB memory allocator (along with various other kernel tasks), rounded out the panel. Bird asked most of the questions, but the audience got the opportunity to ask a few as well.
One of the themes of the discussion—as well as Woodhouse's earlier keynote—was the convergence of features between so-called "big iron" (servers and mainframes) and embedded devices. Corbet was amused to see "highmem" support recently added for ARM processors, noting that it was a controversial feature at the time it was added for servers; supporting a full 1GB of RAM on a 32-bit processor was once a "big iron" problem. Mackall also pointed to SMP and NUMA support moving into the embedded architectures. But things are not only moving in that direction, Mackall said, there is recognition from the big iron developers that there is value for their systems in some of the embedded features too.
Bird asked the panel about the proliferation of embedded distributions and whether that was a good or bad thing. Woodhouse said "fragmentation doesn't have to be bad"; it's only bad when a distribution doesn't work well with the various upstreams and goes off and does its own "weird things". Multiple distributions are one of the "great strengths of Linux", Corbet said, as they provide playgrounds where folks can experiment with different approaches.
Mackall pointed to a lack of community involvement in the various embedded Linux distributions, noting that the most successful desktop distributions were those with a strong community. In the mobile space, the distributions are "coming from the top down", he said; for any of them to be successful, they need to get community feedback.
The impact and usefulness of new "social networking" sites for Linux developers—like MontaVista's meld and the LF's relaunch of the Linux.com community—was another question Bird put to the panel. Woodhouse didn't really see the need, but allowed that "communication is always good". Mackall was concerned that these other services not become a "substitute for talking to the Linux kernel community through its normal channels". Corbet noted that there is value in "small town environments", but there is a risk that they can become inbred. "It rarely leads to good things" when a small community heads off in its own direction, he said.
One of the more interesting exchanges centered around the question of what a developer who has just a small amount of time can do to assist the larger community. The discussion spread out from there, though. Woodhouse stated that every developer needs to make sure that what they are working on can go upstream, even if their managers "need to be whipped to allow you to do that". But Mackall wanted to "back up a step" and ensure that developers are running Linux on their desktop.
Developers should be running Linux at home as well, Mackall said; if they are going to work with Linux, they should "live it". Making it work on a laptop is a good exercise; if it doesn't work, figure out why and fix it. He has seen too many embedded Linux developers with Windows desktops who don't understand Linux well enough to properly develop on it. "They don't have the Linux mindset", he said.
Those thoughts were echoed by Woodhouse as he related an anecdote about some embedded developers who would FTP a file to a Windows box, edit it using Notepad, then FTP it back to the Linux machine. It is not efficient to do things that way, he said. Doing all of the development on a Linux desktop will lead to a better result, Mackall added; "you should read your mail on it too".
Towards the end of the hour-long session, Bird asked "have we won?" Is embedded Linux unstoppable, or "is it possible to lose?" Mackall and Corbet had similar thoughts, worrying about the proliferation of devices running Linux that could not be modified by their users. "We haven't won until I can put my code on my phone", Mackall said. Corbet echoed that: "If we end up populating the world with locked-down Linux systems, then we've lost".
In closing, Bird noted that embedded Linux has made an "awful lot of progress". This is the fifth year for ELC and he has been working on embedded Linux for 17 years; over that time, "things have gotten way better", he said.
LFCS: The Linux Kernel: What's Next
Corbet opened the panel by having the participants introduce themselves to an audience of around 400 people. The panel consisted of X.org project lead Keith Packard of Intel, Google's Andrew Morton, the kernel "odd job man", USB maintainer Greg Kroah-Hartman of Novell, and Ted Ts'o of IBM, who is currently on loan to the LF as its CTO. After that, Corbet got started by asking Kroah-Hartman about the -staging tree.
Approximately one-third of the code that was merged as part of the 2.6.30 merge window came in via the -staging tree, which Kroah-Hartman maintains. Corbet said there was a lot of confusion about the tree and asked for an explanation. Basically, it is a collection of drivers that used to live outside of the tree, Kroah-Hartman said, consisting of bad code with bad API usage and other major problems barring their acceptance into the mainline; in other words, "crap". But there is a lot of hardware in use that requires those drivers, and the code was not getting improved out of tree, so moving the drivers in gives a centralized location where people can get them and, hopefully, improve them.
Kroah-Hartman said that several drivers have already graduated from -staging into the mainline, so the process seems to be working. "If you want to get involved in the kernel, that's a good place to start", he said. He noted that there is a TODO file in each driver's directory listing the kinds of changes needed before the driver will be accepted into the mainline.
Corbet mentioned that he had been going to conferences for years hearing about all the great things that were going to be done in the Linux graphics area, but that we had now reached a point where much of it had actually been done. He asked Packard to fill the audience in on what had been done and "why it's cool". Packard described how X.org had "turned the graphics stack upside down" by moving the device configuration out of user space and into the kernel.
By doing that, X becomes just an API for existing applications, and other APIs such as OpenGL or Wayland can be considered, he said. Support for Intel graphics is good, and there is lots of work going on for Radeon (ATI) chipsets, but NVIDIA is "not helping at all". He pointed out that Fedora 11 will be shipping with the Nouveau driver for NVIDIA hardware because it has surpassed the free 'nv' driver in capabilities. He also noted that moving the configuration and initialization into the kernel allows people to experiment with graphics acceleration without spending an inordinate amount of time figuring out how to initialize the hardware.
Next, Corbet asked Ts'o about the status of the ext4 filesystem. Ts'o reported that Fedora and Ubuntu would be shipping it in their next releases, which are coming within a few months. He said that the user community is growing and, "to be brutally honest, that will sometimes find bugs". One goal, he said, is to get it into the next round of enterprise distributions. He also noted that ext4 is a temporary solution, based on BSD FFS, which is technology from the 70s; Btrfs, nilfs, and others are where the interesting filesystem development is happening. All of that makes it an "exciting time" to be a filesystem developer.
Morton responded to a query about the linux-next tree by saying that it is working out well, overall, as a place for integration and testing. But, he said, he was "a bit disappointed with the uptake it has", especially from a testing perspective. Fewer maintainers are taking advantage of the opportunity to integrate and test using linux-next than he would like to see. It is often the case that when a problem shows up in Linus Torvalds's tree, it is because the code never made it into linux-next.
From the audience, ftrace developer Steven Rostedt noted the pressure to merge new code into the mainline, but said that there is major resistance to certain things—he mentioned SystemTap and utrace—being merged. He wanted to know what can be done to resolve that. Morton responded that for device drivers or support for new architectures the path is easier, but that the two examples Rostedt gave touch core kernel code. Morton likened the utrace battle to an "incestuous family struggle", but noted that the code needed improvement before it could go in.
One of the reasons that utrace didn't make it into the kernel was a lack of an in-kernel user of the code, Rostedt noted. Morton responded that having an in-kernel user for a feature is a "nice checkbox", because it gives the kernel community a means to test the code. But, Kroah-Hartman pointed out, "changing core kernel code is hard, and it should be". Ts'o also pointed out that several core kernel developers are helping out with utrace, which should significantly smooth its path into the mainline.
That discussion led Corbet to ask about tracing, noting that there were several tracing solutions still out of the tree, but that ftrace got new tracers added for each kernel release. Morton would like to "see evidence that people are using them and getting good results". Both he and Ts'o pointed at the lack of documentation for various tracers, saying that adding that and making the tracing more usable would help get more of that code into the mainline kernel.
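For readers who want to see what tracers their own kernel offers, a minimal sketch follows. It assumes debugfs is mounted at the conventional /sys/kernel/debug (reading it typically requires root), and it degrades gracefully when that is not the case:

```shell
#!/bin/sh
# Sketch: show which ftrace tracers this kernel provides and which
# one is active. Assumes debugfs at /sys/kernel/debug; adjust if not.
TRACING=/sys/kernel/debug/tracing

if [ -r "$TRACING/available_tracers" ]; then
    echo "available tracers:"
    cat "$TRACING/available_tracers"
    echo "current tracer:"
    cat "$TRACING/current_tracer"
else
    echo "tracing interface not readable (need root, or mount debugfs)"
fi
```

Writing a tracer name into current_tracer is how one of the tracers Morton mentioned would be switched on; the documentation gap discussed above is largely about what to do after that point.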
The recently proposed nftables packet filtering subsystem was raised by Corbet as an example of a place where a user-space interface—the existing iptables—might be supplanted. He asked how that transition could be accomplished. Morton called it a "pretty traumatic transition" that would require a compatible set of tools, with several years of warning along with buy-in from the distributions. That takes three to four years, according to Kroah-Hartman. Ts'o called the packet filtering interface more of an administrative interface that doesn't have to be kept as stable as others, but said that the iptables command does need to be stable.
All of that led Packard to complain about the difficulties of keeping the current user-space interface for X servers while moving modesetting into the kernel. According to Packard, there are exactly two users of the interface, both of which are under his control, so why does he need to provide backward compatibility? Ts'o said that the problem would be for distribution users who wanted to upgrade their kernel. Because the distribution might use an old X server, that interface—which Packard described as "open /dev/mem"—needs to be maintained. Kernel hackers want as much testing of new kernels as they can get, so any barrier to that testing is problematic.
At the end of the session, LF Executive Director Jim Zemlin announced the first ever LF "Unsung Hero" award, which he then presented to Morton. He explained that Morton is an avid car racer, so the LF arranged for him to have a day at the track as a reward. It was no surprise that there was much applause for Morton—one of the few people actually able to follow the linux-kernel mailing list. He also reviews an incredible amount of the code that ends up in the kernel.
These sessions provide an interesting view into the thinking of the members of the panel—one not easily derived from just keeping up with the technical side of Linux development via the LWN Kernel page or even by sifting through linux-kernel. They also give attendees a look at what's coming in the future, something that can otherwise be hard to discern, though Corbet's Linux Weather Forecast is helpful there. In the final analysis, though, the biggest benefit may just be putting kernel developers and users together in a fairly informal setting so that both sides get a better feel for each other. Faces and personalities don't necessarily jump out of the normal communication channels, so panel sessions like those that went on in San Francisco are useful well beyond their technical content.
ELC/LFCS2009: A tale of two panels
Posted Apr 15, 2009 19:16 UTC (Wed)
by fuhchee (guest, #40059)
[Link] (4 responses)

> the path is easier, but that the two examples Rostedt gave [systemtap
> and utrace] touch core kernel code.

This was mistaken. utrace does not touch core code, but only the hook points already integrated in tracehook.h. Being a well-behaved user of the module APIs, systemtap cannot possibly touch core kernel code either.

> Morton likened the utrace battle to an "incestuous family struggle", but
> noted that the code needed improvement before it could go in.

This is probably mistaken, since there have been no technical critiques of the recently submitted utrace code ... at all.

> One of the reasons that utrace didn't make it into the kernel was a
> lack of an in-kernel user of the code, Rostedt noted.

This is mistaken, since one proposed in-kernel user was posted right alongside utrace (the ftrace engine), and another one was hacked together within days (seccomp replacement). So there were already *two* that fulfill some basic testing coverage. We are working on two more utrace clients, as discussed later during the tracing sessions, so that should bring the potential in-kernel user count up to four. We hope that will be sufficiently greater than zero.

ELC/LFCS2009: A tale of two panels
Posted Apr 15, 2009 21:51 UTC (Wed)
by jake (editor, #205)
[Link] (3 responses)

> points already integrated in tracehook.h. Being a well-behaved user
> of the module APIs, systemtap cannot possibly touch core kernel code
> either.

I guess I am not sure if you are responding to the article or to Andrew here. I believe I captured what he said correctly, so if you disagree with it, I think you need to take it up with him.

> This is probably mistaken, since there have been no technical critiques
> of the recently submitted utrace code ... at all.

I think Andrew was looking at the whole history of utrace, not just the most recent submission. I believe there were technical critiques of earlier submissions, yes?

> This is mistaken, since one proposed in-kernel user was posted right
> alongside utrace (the ftrace engine),

yes, but as has been discussed elsewhere (http://lwn.net/Articles/325180/), the ftrace-utrace engine was not considered a "real" user by Andrew and others. I know you disagree.

jake

ELC/LFCS2009: A tale of two panels
Posted Apr 15, 2009 22:01 UTC (Wed)
by fuhchee (guest, #40059)
[Link] (2 responses)

Your summary was accurate; this is more for Andrew and affected listeners.

> [...] I believe there were technical critiques of earlier submissions, yes?

That's true, but it's old history that doesn't justify using the present tense in casually dismissing the code.

ELC/LFCS2009: A tale of two panels
Posted Apr 15, 2009 22:36 UTC (Wed)
by fuhchee (guest, #40059)
[Link] (1 responses)

Actually, it might not have been. I seem to recall someone on the panel describing utrace as changing code "all over the kernel", which would be different (and more mistaken). I guess once the LF video gets released, our memories can be checked.

(I wish this sort of pedantry were not necessary, but words from important people carry weight, so they had better be correct.)

ELC/LFCS2009: A tale of two panels
Posted Apr 18, 2009 12:01 UTC (Sat)
by ebiederm (subscriber, #35028)
[Link]

utrace was merged in -mm for a while and the results were bad enough that it got yanked at least once.

As for the utrace ftracer. utrace predates ftrace by several years and it certainly did not exist before it was merged.

ELC/LFCS2009: A tale of two panels
Posted Apr 16, 2009 8:59 UTC (Thu)
by Gollum (guest, #25237)
[Link] (2 responses)

http://en.wikipedia.org/wiki/General_Graphics_Interface

From the announcement of GGI 0.0.9, in 1998:

What is GGI?
============
GGI - The General Graphics Interface Project is an attempt to setup a general, fast, efficient and secure interface to graphics and human-machine interaction hardware for UNIX-like operating systems. It allows normal applications to have direct but controlled access to the underlying graphics hardware without compromising system stability. The basic design consists of two parts. First a kernel part, which does all the critical operations that may cause the system to hang or may cause damage to the hardware. Second is a library, that translates the drawing requests from applications into 'commands' for the kernel part.

ELC/LFCS2009: A tale of two panels
Posted Apr 16, 2009 13:03 UTC (Thu)
by Kluge (subscriber, #2881)
[Link]

about SVGAlib? :-)

ELC/LFCS2009: A tale of two panels
Posted Apr 17, 2009 0:27 UTC (Fri)
by flewellyn (subscriber, #5047)
[Link]

Thus we had a situation in which neither train can go until the other has passed.

If the GGI project had begun after the big Xodus (sorry!) to X.org, it might have had better success. As it is, it's hardly the first example of a great idea that was just unable to flourish in the environment at the time.

ELC/LFCS2009: A tale of two panels
Posted Apr 16, 2009 15:27 UTC (Thu)
by bronson (subscriber, #4806)
[Link] (2 responses)

Good gravy. Yes, backwards compatibility is important but this interface sounds like an aberration. (I realize Packard is probably exaggerating a little...)

Would it be possible to have a well-planned step change?

1. Make the new interface and the removal of the old an experimental feature that's off by default. Ensure userspace works with both interfaces.
2. Continue like this for two or more releases while it stabilizes. (try to encourage distros to flip the switch early in their dev cycles and see if it sticks?)
3. On the planned day in the planned release cycle, flip the switch everywhere and remove the old interface.

Of course, this only works when there's a single userspace client of the interface, and assumes it's possible to write a userspace utility that can use both interfaces equally well. Luckily both apply here.

I just hate the idea of someone having to waste time rewriting and forward-porting kludgy interfaces that should go away anyway. Or that nasty interfaces would hold things up for years. (OTOH, X is well known for pinning itself into situations like this, it's nothing new!)

ELC/LFCS2009: A tale of two panels
Posted Apr 19, 2009 1:44 UTC (Sun)
by jlokier (guest, #52227)
[Link] (1 responses)

Also it would be nice if kernel modesetting for X and DirectFB got along.

ELC/LFCS2009: A tale of two panels
Posted Apr 19, 2009 20:58 UTC (Sun)
by oak (guest, #2786)
[Link]

The above mentioned GGI solved that issue over ten years ago and it seems to be still available:
http://packages.debian.org/lenny/svgalib1-libggi2

Btw. I too had been patching my kernel with KGI when I was still using mostly console and still compiling my own kernels. KGI patched kernel was so much nicer to somebody who'd been spoiled by other Unix like operating systems that provided graphical consoles as KGI was much faster, provided higher resolutions, higher CRT update frequency etc. than the crappy x86 text-mode based Linux console...