When last we covered a trademark
talk by Karen Sandler, she was a lawyer on staff at the Software Freedom
Law Center (SFLC), and part of her job was to deal with trademark issues
for free software projects. She is still a lawyer, of course, but has
switched her focus now that she is the executive director of the GNOME
Foundation, and that gives her some new perspectives on trademarks. She came
to the Collaboration Summit to talk about "Real World Trademark Management
for Free Software Projects" on April 4.
By way of an introduction, Sandler gave the usual disclaimers (I am not
your lawyer and this is not legal advice), while noting that lawyers are
also known for saying "it depends". While it can be somewhat annoying to
get that answer from a lawyer, she said, it really is true. Lawyers can
tell you what the "general situation is in the law", but each case is
different.
Beyond her work for GNOME, she is also a pro bono counsel for SFLC
and the Software Freedom Conservancy (SFC). She is an advisor for the Ada Initiative, as well as a mentor for the GNOME
outreach program for women. She noted that the latter had recently
dropped "GNOME" from its name when the SFC joined
the project. She is also a self-described cyborg and
interested in software transparency for medical devices.
What are trademarks?
There are a lot of misunderstandings in the community about trademarks, but
it is a fairly straightforward idea. A trademark is bound up in branding
and identity so that consumers can recognize the brand at a glance. A
trademark can be words, pictures, or both, but it needs to be incorporated
into the product itself (packaging, etc.) in order to make the association
in a consumer's mind.
Unlike copyright, which is granted as soon as the work is "fixed in a
tangible medium", a trademark actually needs to be used. If you make a
logo in your room, don't associate it with any product, and don't show it
to anyone, it's not really a trademark, while doing the same things
will get you a copyright on that logo. Even if you don't register a
trademark, you still get some protection based on it being used on a
product of some kind. Patents, of course, are completely separate as they
cover ideas and inventions.
There is an inherent tension between protecting trademarks and the
ideals of free software. Free software is all about remixing and building
on top of the work of others, and our licenses are very clear on that
point. But trademarks are different, and projects need to think about the
ways they want to allow their trademark to be used.
Trademarks and identity
Everything about trademarks is connected to identity. If someone
repackaged some parts of GNOME, with other, possibly proprietary or
malicious code, would there be confusion if it used the GNOME trademarks?
The tricky part is to allow all of the things that the project wants to
allow without letting people abuse the trademark. It is "really tough" to
draw that line, so her suggestion is that a project make policies that
explicitly say what is a permissible use of the trademark.
It is important to note that there are some trademark concepts that need to
be considered. One is the idea of "naked licensing", which comes into
play if a mark holder allows it to be used too widely. The example she
gave was a wine company that allowed other winemakers to use its name
without any real connection to its brand—in fact the trademark holder never
even sampled the wine in question. If that happens, one can lose control
of the trademark.
A related idea is that of "generic-izing" a name. If a brand becomes too
popular and the brand name is used to refer to a number of different
products in the same category, control over the trademark can be lost. The
classic examples of this (at least in the US) are Kleenex for facial tissue
and Xerox for photocopiers. In both cases, consumers and others started
using the trademark name generically ("xerox that document" rather than
copy or photocopy it), which meant that they were no longer associating it
with the brand. You can be "too successful and consequently lose your
mark", Sandler said.
Whatever policies a project devises, they will get tested "all the
time". There will be questions that live on the boundaries of the policy.
She handled some of that at SFLC and now does a lot of work on that for
GNOME. It is difficult to anticipate all of the ways that people might
want to use a trademark. She said that she is an optimist by nature, but
has been trained to be a pessimist when it comes to trademarks and other
legal matters.
It is best to have a policy with as many parameters as possible. Start by
stating exactly what can be done with the mark; different projects will
have their own ideas about usage of trademarks. For example, it might
state that one can use "based on GNOME" when it is substantially unmodified
from the upstream code. If it is modified, the policy may want to say that
the mark should not be used at all.
Another common problem is whether it is permissible to use the mark in
another name, like fooPlus or DifferentFoo. That's a particularly
problematic question, she said, because you generally want to err on the
side of restricting the use of the mark, but you also want to ensure that
the software is freely usable. Another area that any policy should address
is merchandise (T-shirts, hats, stickers, etc.); can the logo or name be
used on those? It is good to put a kind of "catch-all" phrase in the
policy as well ("so long as there is no likelihood of confusion" for
example), which can catch a lot of edge cases.
For GNOME, both the name and footprint logo are trademarked. Each is a
separate registration and only applies in a certain field of use, which
for GNOME is software. The project cannot prevent all uses of the term
"gnome", like for garden gnomes or the band mr. Gnome, only for things in
the software realm. Again, the key is not confusing consumers.
Sandler gets all kinds of requests to use the GNOME trademarks, for
stickers, papers, web sites, domain names, and so on. She handles them on
a case-by-case basis and tries to work with the requester to find a
mutually agreeable solution. In the end, most of the people are excited
about GNOME, which is why they are asking, so it's important not to dampen
their enthusiasm while still protecting GNOME's mark.
One web site wanted to put the GNOME logo next to its own on the site, but
the GNOME logo was huge and all the way at the top, so it dwarfed the
site's logo. She suggested they make their own logo bigger, to put it
above GNOME's, and to add a disclaimer that it wasn't an official site.
Domain names are messy, she said, and she has not really seen a situation
where it made sense for a non-official site to have GNOME as part of its
domain name. Usually, once she outlines the problem, the domain owner turns it
over as a gift to the foundation.
With sites and domains, the problem is
whether someone new to the community will be confused when they land on the
site. Once that's explained, people are generally understanding, she said.
But, once in a while, she does have to put "nastygrams" in the mail.
One of her favorite stories about the GNOME logo is when she heard from a
contributor about a company that had modified the logo and was using it on
their mobile pedicure-by-fish (having small fish eat dead skin from the
feet) van. The main part of the footprint was replaced by a fish (seen at
right with the GNOME logo from Sandler's slides
[PDF]). The logo itself has a free copyright license, so it is not a
copyright violation to use it, and it is clearly outside of the software
field of use, so it is exactly the kind of use that should be (and was)
allowed. No one will
be confused that GNOME has suddenly veered off into the fish-pedicure world.
Companies often say that they are "forced" to defend their trademark. She
heard it frequently at the SFLC, but now that she is with the GNOME
Foundation, she can see the problem. The law itself is fairly simple, with
simple concepts, but there are some requirements to uphold. Most problems
are handled fairly easily; she asks someone to stop using the mark in an
inappropriate way and they do.
Another interesting situation arose from a combination of the Debian and
GNOME logos (seen at right). It is a "really cool" logo, she said, but is
a violation of the GNOME trademark policy. The problem is that it's
difficult for those who are unfamiliar with the communities to parse out
what it means. If you do know the communities, it's completely
clear what it means, but that's not the problem. There is also a question
as to whether it reduces the brand for both Debian and GNOME by combining
things that way. So far, that situation has not been resolved, she said.
There are some key factors that are usually considered when deciding
whether a trademark is being violated. The Debian GNOME logo is
complicated under those factors, while the fish pedicure logo is a bit more
obvious. The first factor is the similarity of the
marks, which is clear in the fish pedicure example, but less so for Debian
GNOME. The markets for Debian and GNOME are quite similar at some level,
while the fish-pedicure market clearly is not, which is another factor to consider.
Like the "similarity" test, there is another for
"overall impression"; in both of these cases it is fairly clear that
the overall impression is similar to the GNOME logo.
Another factor that can be considered is whether there has been actual
confusion for consumers or in the market because of the possibly infringing
use. One can ask if there is evidence of real confusion. For trademarks,
there is also a notion akin to the "fair use" of a copyright: nominative
use, that is using the mark to identify a product. For example, it is
perfectly reasonable to take a photo of an Apple laptop, which shows the
Apple logo, and post it on a web page to sell the laptop. You can also use
the name "Apple" in the text of your ad. Those are nominative uses.
Trademarks are not "just some legal detail" to avoid or ignore, even though
that's an attitude she finds in the community—and sometimes in
herself. Dealing with trademarks is an opportunity to recognize issues
with the brand of your project, and to clearly delineate the values that
your project holds. The Debian GNOME logo question is a perfect example of
that; the projects generally hold similar values, but neither wants to lose
its brand identity. In general, free software projects will want it to be
permissible and easy to use our software and brands, but we have to be
careful that some bad actor doesn't misrepresent our projects.
Community non-profits should band together to work on these kinds of
problems, she said. There should be more cross-communication between
projects. One area for collaboration might be an organization to hold
trademarks for projects, especially those that are newly formed.
In answer to a question from the audience earlier in the talk, Sandler said
that she thinks it's important to register trademarks early on in a
project's life. But, it is also important that those marks be held by a
neutral organization of some kind, as we have seen project disputes
because one party holds the trademark (often the founder) and wants to use
it in ways that other project members find objectionable. An organization
that held the mark and helped form and enforce policies on those marks
could be beneficial.
The Debian project recently debated the inherent trade-offs between making a bug reporting tool easy to use and turning it into a firehose that puts out more volume than the developers and maintainers can process. The impetus this time was false-positives caught by the Debian bug tracking system's (BTS) spam filter, but it is a question that the distribution — and indeed most distributions — grapple with regularly.
Why we foo
Michael Welle raised the issue on the Debian-devel mailing list, reporting
that he had attempted to file a bug and was surprised when the BTS rejected
his report because it contained a blacklisted URL. The surprise was that
describing the bug in question required him to use a URL, and he had chosen
what he thought was a general-purpose example: www.foo.org. But
evidently the foo.org domain is on the blacklist of the uribl.com filtering service, which the BTS uses to strain out incoming spam. "Interesting user experience, bug reporters will like that big time..." Welle said.
Martin Krafft and Andrey Rahmatullin quickly replied that only RFC 2606-defined example URLs should be used in bug reports, to which Welle asked whether it was unreasonable to expect users to read an RFC before reporting a bug. Rahmatullin responded that users should "try not to use suspicious URLs."
At that point, Russ Allbery said
that the root of the problem "is that foo.org is a real domain, and
one that appears to be owned by one of those domain parking companies that
quite likely could be doing lots of grey things with the domain. A lot of
those companies are at the least spammers." In all likelihood, he
added, foo.org really was used for spam at some point, although he
conceded that it could be a false positive, due to others, like Welle,
choosing it for its placeholder meaning (ironically, "foo" is documented
as a placeholder in RFC 3092).
To that, Welle replied that he found the reliance on a real-time blacklist managed by a third party problematic. Fernando Lemos asked whether there was really any way to fix it. "We certainly can't disable spam filters or we'll be flooded with spam." Also, he added, because the BTS returns an error message explaining what blocked the bug from being accepted, in all likelihood real users will be able to correct the problem and resubmit. Anyone who cannot decipher the message and fix it is probably "not very tech-savvy," which decreases the odds that the report would be particularly useful. "I'm not saying it's good that we miss reports like this," he concluded, "but we must put things into perspective."
Interestingly, Ben Pfaff chimed in with the suggestion that the BTS could weed out spam by inspecting the report's metadata, looking for valid package names and versions. No one replied to that comment, though; instead the focus of the discussion turned to the acceptable threshold for rejecting otherwise valid bug reports.
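Pfaff's suggestion can be illustrated with a small sketch. This is hypothetical code, not the BTS's actual filter; the package set, field names, and helper function are assumptions for illustration, and a real implementation would consult the archive's package index:

```python
import re

# Hypothetical stand-in for the Debian archive's package index.
KNOWN_PACKAGES = {"coreutils", "reportbug", "lintian"}

# Debian bug reports begin with pseudo-headers such as
# "Package: foo" and "Version: 1.2-3" on the first lines of the body.
PSEUDO_HEADER = re.compile(r"^(Package|Version):\s*(\S+)$", re.MULTILINE)

def looks_like_bug_report(body):
    """Return True if the message body carries plausible bug metadata."""
    fields = dict(PSEUDO_HEADER.findall(body))
    if "Package" not in fields or "Version" not in fields:
        return False
    # The named package must exist, and a Debian version string
    # starts with a digit (possibly after an "epoch:" prefix).
    if fields["Package"] not in KNOWN_PACKAGES:
        return False
    return bool(re.match(r"^(\d+:)?\d", fields["Version"]))

report = "Package: reportbug\nVersion: 6.3\nSeverity: normal\n\nIt crashes."
spam = "Buy cheap watches now!!! http://spam.example/"
```

Run-of-the-mill spam would fail this check before any URL blacklist is ever consulted, which is presumably why Pfaff raised it as an alternative.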
Welle continued by saying that he was trying to raise the URL-filtering issue from bug
reporters' perspective; people who "simply want to report a bug
without being interested in external blacklist and stuff." He
compared the bug report rejection with a company telling customers
"we don't like you, go away." Russell Coker replied:
Actually companies do that all the time. Some corporate web sites used to
reject browsers other than IE. [...] So comparing Debian to a commercial
organisation doesn't support your case at all. Commercial organisations
are more than willing to reject some customers if it makes things easy for
them.
Welle replied that he tended to "to look for role models above me, not below me. Why imitate people or companies that do a bad job? We can do better. And of course, to come back to my initial email, I doubt that using the blacklist service makes anything easier for Debian." Debian Project Leader Stefano Zacchiroli concurred with that sentiment, saying that one of the project's role models is "people who report bugs and attach patches to them," and suggesting that Welle submit a report against the lists.debian.org pseudo-package in order to continue the discussion there.
Welle agreed with Zacchiroli's assessment of the situation, although as of yet he does not appear to have filed a new bug on the subject. From outside the project, Welle's concern raises two distinct issues: how to deal with false-positives in the BTS's spam-filtering system, and the user-friendliness of the BTS as a whole (particularly where error messages are concerned).
It is hard to imagine that there are more than a handful of URL patterns
that are both likely to be spontaneously chosen as examples by a bug
reporter and be on uribl.com or a similar service's blacklist.
RFC 2606 only offers three example URLs: example.com, example.net, and
example.org. Perhaps exceptions could be made, or other techniques to
filter out spam (such as Pfaff's suggestions) could be incorporated.
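One way such an exception could work is to treat the RFC 2606 reserved domains as always clean before consulting any blacklist. A minimal sketch, with hypothetical function names (the real BTS filter is structured differently):

```python
from urllib.parse import urlparse

# RFC 2606 reserves these second-level domains for documentation
# and examples; they can never legitimately belong on a blacklist.
RESERVED_DOMAINS = {"example.com", "example.net", "example.org"}

def is_reserved_example(url):
    """True if the URL's host is a reserved example domain (or subdomain)."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in RESERVED_DOMAINS)

def should_check_blacklist(url):
    # Skip the (false-positive-prone) blacklist lookup for reserved
    # example URLs; everything else still gets checked.
    return not is_reserved_example(url)
```

Under this scheme a report citing www.example.com would sail through, while www.foo.org would still be subject to the uribl.com lookup.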
But, while it is undeniable that the BTS does require sturdy anti-spam
measures, bug #635940 from July 2011 also questions the uribl.com blacklist. In that
report, Blars Blarson responded that even with the URL filter, BTS still gets several hundred spam messages every day, and that before it, the daily count was in the tens of thousands, often totaling more than a gigabyte. The blacklist works by sending an SMTP 550 error code, which indicates that the requested mailbox does not exist; this explicit rejection is supposed to winnow out repeat offenders that simply dropping the messages would not.
Many distributions struggle with making their bug submission process
easier to use, but Debian has extra challenges because it is in that small
minority which (1) does not offer any sort of web-based bug submission
form and (2), crucially, allows bug reports to be filed via email.
The preferred method of
reporting a bug is the reportbug command-line utility, which
collects information from the user and the OS, and dispatches its report
via email. Email reports can also be filed manually, if the correct
formatting is used.
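A mail-format report is an ordinary message whose body begins with pseudo-headers naming the package and version. A minimal sketch of constructing one with Python's standard library (the package details are invented; reportbug itself gathers far more system information):

```python
from email.message import EmailMessage

def make_bug_report(package, version, summary, details):
    """Build a Debian-style bug report email.

    The BTS expects "Package:" and "Version:" pseudo-headers on the
    first lines of the body, followed by a blank line and free text.
    """
    msg = EmailMessage()
    msg["To"] = "submit@bugs.debian.org"
    msg["Subject"] = summary
    body = f"Package: {package}\nVersion: {version}\nSeverity: normal\n\n{details}"
    msg.set_content(body)
    return msg

msg = make_bug_report("hello", "2.10-1", "hello: crashes on start",
                      "Running 'hello' segfaults immediately.")
```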
The other large distributions offer web bug-reporting tools, but
increasingly the standard practice requires registering an account with
email verification. Ubuntu does this through
Launchpad, Fedora does the same
in Red Hat's Bugzilla, and openSUSE uses the same technique
with Novell's Bugzilla. Those systems may attract their fair share of
spam (automated or otherwise), but the BTS bug-submission email address is
well-publicized, which ensures that it is in the hands of every
self-respecting spammer, and has been for years. Launchpad does have an
email gateway, but it requires OpenPGP; Bugzilla supports email reporting
gateways, but that feature does not appear to be in use by major
distributions.
The goal of the reportbug tool is to generate more useful reports
by gathering specifics. One downside, of course, is the exposure to email
spam. But, given that the proper format for email bugs is unlikely to be
present in run-of-the-mill spam, one would guess that a fairly simple
filter might weed out the vast majority of spam sent to the bug-reporting
address.
As Pedro Larroy pointed
out in May 2011, however, the absolute dependence on SMTP can also be a
problem, given that users may find themselves on a network that filters out
SMTP (or on a private network with no access to the outside world). Larroy
suggested that Debian add an HTTP transport mechanism for
reportbug to fall back on; there was general agreement on the
value of such a fallback, but many (including
Ian Jackson) also argued that HTTP was a slippery slope, and that if such a
gateway to the BTS was built, someone would (perhaps with the best of intentions) write a web submission form easily exploited by spammers and other attackers.
Of course, the uncomfortable truth is that Debian, like most large
projects, knows that making bug reports harder to file reduces the number
of reports, which reduces the time burden shouldered by developers,
package maintainers, and bug triagers. Josselin Mouette said as much in
the 2011 discussion, observing:
We already receive more bug reports than we can handle. We need less bug
reports, but more useful ones. Ergo, putting an entry barrier to reporting
bugs is not that silly.
Not everyone agreed; Patrick Strasser argued that "artificially throttling" reports is bad, and that user education needs to be integral to the reporting process. But Allbery countered that no one has the time to engage in user education, and consequently, making reporting less convenient "doesn't *fix* the problem, but it does weed out a lot of users who don't know how to file good bug reports (and some users who do, which is indeed a drawback)."
Ultimately, a collaborative project is going to include people with differing views on whether bug reports "serve" the users (by improving the software) or "serve" the developers (by providing information). MBA programs might classify this as a question over who is the customer and who is the supplier — the customer being the party whose needs are ultimately more important. Debian has veered into that debate before, as in the "What bugs reports are for" thread from March 2011, when Jesús Navarro cautiously suggested to Jackson that point four of the Debian Social Contract establishes users as the first priority. No one in the project seems to disagree about that principle — the thorny problem is that maintaining a balance between the ease of bug reporting and the demands placed on developers requires constant attention and adjustment.
On the first day of this year's Linux Foundation Collaboration Summit,
several kernel developers sat down with moderator Greg Kroah-Hartman for
another edition of the kernel panel. The developers covered a wide range
of kernel subsystems, from graphics and memory management, to storage and
networking. As is usual, a lively discussion ensued, covering a number of
topical and longtime kernel concerns.
Audience questions for the panel are eagerly sought, Kroah-Hartman said,
noting that a similar panel at LinuxCon Japan had turned into an animated
discussion between the kernel hackers onstage and those in the audience. He then had
the panel introduce themselves.
Mel Gorman said that he works for SUSE
Labs on memory management along with fixing bugs in various SLES and openSUSE
kernels. John Linville of Red Hat is the maintainer for the wireless LAN
subsystem, which is, he said, not about writing "cool code" unfortunately,
but is more of an administrative role shepherding others' patches and
features. James Bottomley is the CTO for server virtualization at
Parallels as well as maintaining the
SCSI subsystem for the kernel. In addition, he "mucks about at the
edges" trying to make kernel development better, which is often as much a
social problem as anything else, he said. Keith Packard works for the
Intel Open Source Technology Center on graphics and window systems, as well
as doing kernel DRM (direct rendering manager) work.
Kroah-Hartman then queried Bottomley and Gorman about what went on at the
recently completed Linux Storage, Filesystem, and Memory Management Summit
(LWN coverage: day one and day two). Bottomley rattled off a few
different topics that came up in the storage and filesystems tracks including
new "weird and wild" SCSI commands
that are coming down the pipe. He joked that it was necessary to keep
Christoph Hellwig gagged while that talk was going on so that everyone
could actually hear about the commands. The summit is becoming one of the more
important kernel development meetings, he said, and it is one that, unlike
some kernel summits, actually has arguments.
Gorman also mentioned a number of different topics that were discussed in
the memory management track including the two NUMA migration schemes
that are currently floating around (Peter Zijlstra's sched/numa and Andrea Arcangeli's AutoNUMA), as well as containers and control
groups. He said that kernel hackers are now
concerned about how quickly the containers and control groups code runs,
rather than whether it will run, which was the concern in the past. The
discussions were "quite civil" at the summit, which contrasts with how they
sometimes go on the mailing list. The meetings were definitely a success,
he said, as even if a decision went against a developer's idea or plan,
they got a
good idea of why the others objected to it.
Wireless and graphics
Things are clearly getting much better in the wireless area, Kroah-Hartman
noted; it "used to be a nightmare", but, he asked Linville, is it a solved
problem now? Wireless in Linux has matured quite a bit over the last few
years, Linville said, and there are a number of companies that are now
participants in developing free Linux drivers, including Broadcom and
Qualcomm/Atheros. It really helps to have people available "who know how
the hardware works", he said.
But, wireless technology continues to evolve with things like 802.11ac
coming along (Linville called it "[802.]11n on steroids") that require
support and drivers for Linux. Bottomley asked about 802.11n compliance,
which Linville said is going well, though there are still "things to be
ironed out". The code is in place and drivers are using it, but there is
still some development to be done. All of that is helped by better support
from the bigger players, but some of the second-tier wireless hardware
providers are working on free kernel drivers as well.
Moving on to Packard and graphics, Kroah-Hartman asked about X and mobile
phones. In the past, phones shipped using X, but that really is no longer
the case, he said, and wondered why. Packard said that the last six years
had been spent "radically restructuring" graphics on Linux. The idea was
to have kernel drivers that could support more window systems, beyond just
X, because X is "not what people want anymore", he said. Today, most are
interested in compositing-based models.
Compositing is a totally different windowing system model, he said, which
is simpler. After all the work that was done, the existing kernel DRM layer
is capable of supporting all of the different options, including Wayland,
Android, and X. Android usually uses different drivers, but the idea is
similar. We are, Packard said, moving away from X as the fundamental
graphics layer for Linux; instead, the Linux kernel now serves that purpose.
From the audience, Bdale Garbee asked Linville about the state of Broadcom
drivers, noting that performance and stability of those drivers on
laptops recently was "not so great". Linville said that he assumes it will
get better, that the Broadcom drivers have not been in the kernel that
long, and that those drivers getting exposure in distributions should
help. That will lead to more bug reports, which will be beneficial. The
developers have been working well with Linville and are being diligent
about looking at bug reports and fixing the problems reported there.
There is always going to be a certain amount of lag, Linville said, because
some distributions are faster or slower about updating to the latest
kernel. But, it is the "same old story", he said, if you find bugs, report
them, and respond to the questions that are asked.
That led to a discussion of the stable kernel tree, with Bottomley noting
that some distributions are more attuned to stable than others.
Kroah-Hartman said that he tries to do a stable release each week, but that
it is rare for Broadcom bug fixes to be sent to the stable tree. Linville
said that he should remind developers to CC stable on the bug fix patches,
but that there is a somewhat tricky balance there, which requires judgment
calls on which fixes are appropriate.
Gorman said that it is common in memory management development to get
"slapped" if things aren't marked for stable that should be. But, he said,
each subsystem deals with things differently. Bottomley said that nobody
likes getting the email reminding them to send fixes to the stable
maintainers but that it is important to get those patches into the stable trees.
Next up was a question for all about their "pet peeves" in Linux kernel
development. We often see the same problems over and over, Kroah-Hartman
said, which ones are particularly irksome? Packard said that his biggest
pet peeve was outside the graphics area. He uses Bluetooth a lot and is
annoyed that every time an -rc1 kernel is released, Bluetooth breaks. It
is good, in some ways, he guesses, because now he can debug and bisect
Bluetooth problems in the kernel. But the basic problem seems to be that
it is common for Bluetooth kernel development to break all of user space.
Every time he has suggested doing that for graphics work, he got "flamed to
a crisp". If your patch breaks the user-space interface, he said, please
don't bother submitting the patch.
Bottomley is unhappy with changelogs that don't say why the change
is being made. Changelogs often say what is being done, but they don't say
what the user-visible effects are. Well-written changelogs should
not describe the change itself, he said, because that's what the C
code is for. "Almost all kernel developers can read C", he said, to
a hearty laugh from the audience. When Linville suggested that he didn't
really have a pet peeve, Bottomley immediately asked if he would be willing
to swap subsystems with him.
On further reflection, Linville echoed some of Bottomley's complaints,
noting that it is sometimes difficult to determine where a particular patch
should be sent. Because the changelogs don't clearly indicate whether the
patch is a fix or a new feature, he is left guessing whether it belongs in
the next tree, or needs to be applied more urgently. It is particularly
problematic during a merge window, he said, so the changelogs need to say
where the patch is bound.
Since he doesn't "have to maintain anything", Gorman is in a different
position. He joked that he gets to "rag on maintainers and make their life
miserable". More seriously, he pointed to mistakes that are made again and
again as a pet peeve of his. He mentioned writeback causing long delays
when writing to USB sticks as an example. That has been fixed "at least
eight times", he said, only to be broken again in the next release. We
need to do a better job checking to see that those kinds of bugs stay
fixed, he said.
More audience participation
A member of the audience asked about the future of proprietary loadable
kernel modules, and Kroah-Hartman immediately said that he really didn't
see a future for them. The kernel developers have provided ways to operate
hardware from user space that can be used for proprietary drivers. As an
example, he pointed to "laser welding robots" that are driven from user space with a 3D
application that uses lots of floating point math.
If companies look at the business case, Bottomley said, it is rare that
closed drivers make sense. If a company produces standard hardware that
lots of people will require a driver for, there is no real business value
in a closed driver. For more specialized devices, user-space drivers may
make sense, he said.
Structured logging was the topic of another audience question. The idea
has been around since at least 2004, the audience member said, and some
solutions have started to appear. The problem is that users are now
supporting larger numbers of systems and "cannot manage a datacenter by
hand". Where is structured logging in the kernel headed?
Kroah-Hartman mentioned that a patch had just been merged that builds atop
dev_printk() and brings some structure to logging. There have
been recent proposals, including one at last year's kernel summit that got
derailed by a "spat over UUIDs", Bottomley said. Packard said that there
is a fear of top-down proposals for structured logging, especially if
driver writers have to specify their messages ahead of time. Some
proposals remind him of the VMS error message documentation that came in a large
binder. What's needed is a way for driver writers and others to get the
benefits of structured logging without all the problems, he said.
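The appeal can be illustrated with a sketch outside the kernel. A free-form message must be scraped with hand-written patterns, while a key=value structured line can be parsed mechanically; the field names below are invented for illustration and do not reflect any particular kernel proposal:

```python
def parse_structured(line):
    """Parse a key=value structured log line into a dict."""
    fields = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# A hypothetical structured driver message: tools can filter on
# DEVICE or SUBSYSTEM without knowing the message text in advance.
line = "SUBSYSTEM=usb DEVICE=1-1.4 EVENT=disconnect"
fields = parse_structured(line)
```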
SCSI and bufferbloat
The new SCSI commands that Bottomley mentioned at the start formed the
basis of the next question. Kroah-Hartman noted that there are more
high-speed storage devices these days that are avoiding SCSI because they
can't get the I/O operations/second (IOPS) rates that they need, so, he
asked, is SCSI dead for high speed? Bottomley said that it is really two
different questions. There is a need for storage that acts like memory,
and some of these efforts are intermediate steps that are being taken when
"what we really need is more memory".
As far as whether SCSI will survive, Bottomley was willing to bet that it
would be around for quite some time. There is a need for standards-based
storage devices so that users can purchase storage at Fry's or other stores
and know that they will work with their systems. Whether it will be SATA,
SAS, SCSI, or something else is not clear, but he believes that SCSI will
be in the mix.
Throughput used to be the important measure for storage, but the
emphasis is moving to IOPS. SCSI used to
sacrifice latency for throughput; now the reverse is happening because of
the focus on IOPS. Linville spoke up at that point to note the parallels
with bufferbloat in the networking world. Latency was de-emphasized in
networking devices for throughput and then people started wondering why
their interactivity had gone down.
That led to a question to Linville about bufferbloat and the patches that had
recently gone into the kernel to try to address some of those problems. It
is a "complicated topic", Linville said, and wireless is part of the
problem, though it is seen on both wired and wireless networks. Some of
the problems are inherent in wireless technologies because the available
bandwidth changes over short periods of time, which can lead to high
latency. There are also problems with wireless that aren't
bufferbloat-related but sometimes look like bufferbloat.
Unfortunately, no "magic solution" has presented itself to fix the general
bufferbloat problem. There needs to be an adaptive queue management
algorithm, but none is known that solves the problem. Something that works
well for wired networks is "random early discard" (RED), but that requires
lots of tuning. A recent change to measure queues in bytes, rather than
packets, helps, because queue length limits are set by the aggregate size
of the packets being sent. But there are still questions of what the
length of the byte queues should be, whether they should change, and, if so,
how often. The problem is not specific to Linux, and there are some
political issues surrounding it because not everyone believes it is a
problem—or that it is their problem.
A grab bag of questions and answers
As the panel time slot wound down, there were a number of other audience
questions and kernel hacker discussions. An audience question about
participation by Linux developers in standards committees noted that it
takes money and some amount of insanity to participate in those.
Kroah-Hartman pointed out that the Linux Foundation has worked on standards
participation in the past, while Bottomley asked why there was a perception
that Linux developers and companies are not involved. He noted that in the
storage area there is a "tireless Dane" (Martin Petersen) who works on
standards. It isn't the money or doing what's needed to get on the
committees that is the problem, Bottomley said, but instead it is finding
the right people to do so. The T10, T13, and UEFI committees all have
Linux representatives, he said. If there are standards committees where
Linux is not represented, we want to know about that, Kroah-Hartman said.
Grant Likely asked about the progress of the Android patches into the
mainline; when is that job "done"? Once Android is using mainline kernels
was the answer Bottomley gave. Kroah-Hartman noted that the real problem
is on the user-space side. Kernel hackers can't do anything about changing
the Android user space, but companies like Linaro and Samsung are making
some progress in doing so. The 3.3 kernel can boot an Android user space,
but it will "eat your battery alive", he said. We are making progress, but
it will require teamwork to get there.
What are kernel hackers doing to measure power consumption was the next
audience question. Packard said that there is a lot of focus on that in
the graphics arena. They are using wall power meters to measure the power
consumption of various devices over time and trying to correlate those
measurements with active functional units in graphics devices and
system-on-chips (SoCs). They measure things like joules-per-movie, which
is a critical measure for users. There is an effort to balance the "race
to idle" with voltage and frequency scaling, he said, especially for
latency-sensitive applications like displaying movies. In addition to just
the graphics hardware, they are trying to measure memory power utilization
and bus power utilization, he said.
The problem is bigger than graphics, Bottomley said. In the storage
world, the speed of the buses has been "jacked up", which increases the
power usage. On a netbook SATA link, a half-watt of power can be used just
to power the bus. Power saving for SATA buses is coming, he said.
The last topic covered was "hardware bypass": devices
that take on some of the tasks normally handled by the kernel in
the interests of performance. Gorman pointed out that bypass is often done
to drive some "artificial metric" and that kernel developers need to know
what the metric is in order to make proper decisions. The question for
those proposing bypass should be "Why are you trying to do that?", he
said. The problem is not just for SCSI (or storage), but for all of the
various bypass (or offload) proposals.
The CPU isolation feature, which allows an administrator to remove a CPU
from those managed by the kernel to run a particular workload unperturbed
by the rest of the system, is one that Gorman mentioned. One of the
reasons that people give for wanting the feature is to avoid the
inter-processor interrupt (IPI) "storms" that can occur. But a better way
to approach that problem is to figure out why those storms are happening
and to address that instead, he said. That's "ultimately the right thing
to do" whenever bypass is suggested.
Linville noted that TCP offload engines provided a boost for some users,
but that CPU improvements have "largely erased the gains" that were made.
Bottomley said that the question should not be how to avoid the kernel
code, but instead should be how to take advantage of the work that the
kernel developers do. Essentially, the consensus was that bypass or offload
technologies are not only bypassing the kernel, but are also ignoring the
collective knowledge and abilities of the Linux kernel community.
Once again, the kernel panel gave a nice glimpse inside the heads of kernel
developers. It provided some insight into how they approach problems, and
where they think solutions generally lie. It was nice to have a mix of "new
blood" as well as "old hands" on the panel, which definitely led to an
interesting discussion.
Page editor: Jonathan Corbet