Leading items
GNOME finalizes its speaker guidelines
Various free software communities, from distributions to individual software projects, have codes of conduct that are meant to govern the behavior of their members, at least in the projects' common areas. The intent is to reduce friction—flamewars and other unproductive communication—between project participants and to present a more welcoming face to newcomers. The GNOME project has a code of conduct that it has been discussing for some time; more recently it has also taken up guidelines for speakers at GNOME conferences.
Based on the discussion, it seems clear that Richard Stallman's keynote at the Gran Canaria Desktop Summit (GCDS) was one of the main reasons that some felt speaker guidelines were needed. His talk ended with a segment about "Emacs virgins" (or the "Saint IGNUcius comedy routine" as Stallman calls it) that was considered to be sexist and thus offensive by some in the community. Stallman has been reprising that particular bit for many years, and he, at least, doesn't see it as sexist. But it certainly offended some, and set off a firestorm of controversy last July.
Sometime after that, concerned folks contacted the GNOME Foundation board to see what could be done to try to prevent that kind of thing from happening again. As part of that effort, Matthew Garrett drafted guidelines, Murray Cumming "improved the wording", and Vincent Untz posted a link to the guidelines for discussion back in March.
That earlier draft was a bit different from the current, adopted version, but quite similar in spirit. In the discussion back in March, there was some wordsmithing of the "Dealing With Problems" section, but the draft was mostly well received. Stallman, though, had a concern that the guidelines were overbroad:
Stallman noted that part of his GCDS talk was about the riskiness of using C#, but Sandy Armstrong was quick to point out that the guidelines were aimed at a different part of his presentation:
I don't think they are meant to prevent you from making critical statements on relevant subject matter based on technical or legal arguments.
The discussion tailed off at that point, but was rekindled when Untz announced the final guidelines. Complaints about the guidelines seem to break down into two basic categories: that the rules are too vague, much as Stallman argued in March, or that they constitute "censorship" of speakers. While, strictly speaking, it may not be "censorship" (depending on which of the many definitions is used), it is certainly meant to steer speakers away from certain topics—those that might offend the audience or community.
Patryk Zawadzki doesn't like the vagueness, but thinks that there are other ways to address the problem:
The guidelines are somewhat vague, but that is done on purpose:
There are six separate guidelines listed, some of which are, or should be, pretty obvious, for example: "Avoid things likely to offend some people. Your presentation content should be suitable for viewing by a wide range of audiences so avoid slides containing sexual imagery, violence or gratuitous swearing." Others, though, try to cut to the heart of the issue and, instead of proscribing conduct or topics, provide overarching advice:
Stallman's earlier concern was addressed in the revision process, and he is firmly behind the guidelines: "If the community wants these guidelines, I support them." While the particulars of the GCDS keynote kept popping up in the discussion, it's clear that only a few really want to continue that particular debate. As Brian Cameron put it in a message worth reading in its entirety:
There is something of a sense of resignation to the fact that a policy like this is needed. But, as Cameron noted, the board has been criticized in the past over how quickly and effectively it has responded to offensive presentations. The guidelines provide a solid footing for any action the board may wish to take in the future; one possibility is explicitly mentioned: "Furthermore, if necessary, the GNOME Foundation might publicly distance itself from your opinions." Michael Meeks summed up the situation well:
The final "Dealing With Problems" section is the part of the document that
has drawn most of the comments this time around. Joanmarie Diggs is concerned that parts of that section are
"neither 'positive' nor 'welcoming' to would-be speakers
". In
particular, the "disclaimer" paragraph, which is meant to head off anticipated
complaints, may be reworked. There seems to be a consensus that changes
are needed in that section, though it's not clear whether it should be
expanded to fill in the gaps, reworked in more welcoming terms, or
eliminated entirely. As Cameron said, the guidelines will likely be a
"living document", and if there are problems with it, changes will be made.
While there is no specific enforcement language in the document ("Enforcement is subject to the judgment of the session overseer"), Garrett, at least, sees that as a possible hole. His original draft "suggested that event runners be able to stop presentations if they felt they were gratuitously in breach of the guidelines", but that was contentious and was removed. He is concerned that "guidelines mean little without enforcement", but does see the current language as a reasonable compromise. As he notes, there are those who will find the current watered-down version to be too intrusive, so something of a balance has been struck between the needs of speakers and their audiences.
There have been plenty of examples of presentations made at free software conferences that offended some subset of their audience. As it is unlikely that the speakers set out to do that, guidelines like these will be helpful to speakers by making them at least stop and think about their words and imagery. That is likely to lead to better presentations and happier audiences, which can only be a good thing.
Bilski: business as usual
For many months now, anybody who pays attention to the US patent system has been anxiously awaiting the decision in the Bilski case. This case started as a lawsuit against the US patent office over its rejection of a business method patent. As this case worked its way toward the US Supreme Court, it came to be seen by many as a vehicle by which, just maybe, patents on business methods and software could be struck down. Much energy - and many amicus briefs - was directed toward that goal. As the last possible date for a ruling approached, the Free Software Foundation observed: "For Supreme Court watchers, following Bilski has been like following the World Cup. Productivity has fallen and ulcers have grown." Alas, it seems that the World Cup analogy extends to bad calls as well.
The ruling is out; Groklaw has it. With the concurring opinions, it runs to 71 pages. Reading the whole thing can lead to a much better understanding of the history of patent law in the US, but, for those concerned about possible changes to the patent system, the conclusion is far more succinct:
In other words, the court chose to rule on the validity of one specific patent application. In the process, it possibly loosened the criteria slightly by saying that the "machine or transformation" test is not the sole guide to patentability. But the court went out of its way to avoid deciding - either way - whether business methods as a whole could be patented. The only real mention of software patents was a passing note that relying too heavily on "machine or transformation" could "create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals." But, even there, the court went out of its way to avoid having anything read into its words:
This refusal to face the issue can only come as a disappointment to anybody who was hoping that the court would make substantial changes to the current application of patent law in the US. But it can't have come as any real surprise to people who are familiar with the current court. The current chief justice - John Roberts - has been very clear from the outset that he is not interested in the writing of expansive rulings. The court was asked to decide on one specific patent, so that's what it did. No nonsense about, say, laying down a clear interpretation of the law that would eliminate the need for a long series of court cases stretching into the future.
There are many who would argue that this is exactly how it should be, that it's up to the legislature, not the courts, to write the laws. Others would argue that the American precedent-based legal system guarantees that the courts will have a hand in the writing of law that people actually live by in any case, and that the court should have taken the opportunity to reduce the amount of uncertainty in this area. Certainly, it would have been nice if the court had thought a little more broadly; now it seems that the only alternatives are more court cases or an attempt to get the Congress to do something constructive, or, likely, both. One could argue that the decision to do nothing was a bad call indeed.
That said, while it would be nice if the courts would just fix the situation, it may well be the case that rewriting the law to explicitly restrict the range of patentable inventions would be the best solution. Getting the US Congress to do something about the patent system is a daunting prospect, but it's not beyond the realm of possibility. There is an increasing awareness that the patent system is costing businesses a lot of money and is impeding the competitiveness of the country as a whole. While there are powerful interests in favor of the status quo, there are others pushing for reform. It might just happen, someday.
Meanwhile, we're stuck with the same situation we had before this decision was handed down. Software patents remain a threat in the US and they are looking increasingly threatening elsewhere. We will have to continue fighting them in all of the same ways, including what is arguably the most effective strategy of all: make free software so useful and so ubiquitous that the industry has no choice but to continue to try to protect Linux and, hopefully, find a way to address the patent threat for real.
Two GCC stories
The GNU Compiler Collection (GCC) project occupies a unique niche in the free software community. As Richard Stallman is fond of reminding us, much of what we run on our systems comes from the GNU project; much of that code, in turn, is owned by the Free Software Foundation. But most of the GNU code is relatively static; your editor wisely allowed himself to be talked out of the notion of adding an LWN weekly page dedicated to the ongoing development of GNU cat. GCC, though, is FSF-owned, is crucial infrastructure, and is under heavy ongoing development. As a result, it is subject to pressures that are seen in only a few other places. This article will look at a couple of recent episodes, related to licensing and online identity, from the GCC community.
Documentation licensing
Back in May, GCC developer Mark Mitchell started a discussion on the topic of documentation. As the GCC folks look at documenting new infrastructure - plugin hooks, for example - they would like to be able to incorporate material from the GCC source directly into the manuals. It seems like an obvious idea; many projects use tools like Doxygen to just that end. In the GCC world, though, there is a problem: the GCC code carries the GPLv3 license, while the documents are released under the GNU Free Documentation License (GFDL). The GFDL is unpopular in many quarters, but the only thing that matters with regard to this discussion is that the GFDL and the GPL are not compatible with each other. So incorporating GPLv3-licensed code into a GFDL-licensed document and distributing the result would be a violation of the GPL.
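To make the licensing tangle concrete, here is a minimal sketch of the kind of extraction at issue. The hook name and comment below are hypothetical - they are not real GCC API - but the pattern is the standard Doxygen one: a structured comment in a header is copied, essentially verbatim, into the generated manual, so the comment text itself is what would cross the GPLv3/GFDL boundary.

    /*
     * Hypothetical plugin hook, documented Doxygen-style. Running a
     * tool like Doxygen over this header lifts the structured comment
     * below straight into the manual - the act that requires the code
     * and documentation licenses to be compatible.
     */

    /**
     * @brief Hook invoked after each function body has been parsed.
     *
     * @param fndecl  The declaration of the just-parsed function.
     * @param data    An opaque pointer registered along with the hook.
     * @return        Zero on success, nonzero to abort compilation.
     */
    int plugin_finish_parse_function(void *fndecl, void *data);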
After some further discussion, Mark was able to get a concession from Richard Stallman on this topic:
This is a severely limited permission in a number of ways. To begin with, it applies only to comments in header files; the use of more advanced tools to generate documentation from the source itself would still be a problem. But there is another issue: this permission only applies to FSF-owned code. As Mark put it:
I find that consequence undesirable. In particular, what I did is OK in that scenario, but suddenly, now, you are a possibly unwitting violator.
Dave Korn described this situation as being "laden with unforeseen potential booby-traps" and suggested that it might be better to just give up on generating documentation from the code. The conversation faded away shortly thereafter; it may well be that this idea is truly dead.
One might poke fun at the FSF for turning a laudable goal (better documentation) into a complicated and potentially hazardous venture. But the real problem is that we as a community lack a copyleft license that works well for both code and text. About the only thing that even comes close to working is putting the documentation under the GPL as well, but the GPL is a poor fit for text. Nonetheless, it may be the best we have in cases where GPL-licensed code is to be incorporated into documentation.
Anonymous contributions
Ian Lance Taylor recently described a problem which will be familiar to many developers in growing projects:
He also noted that a contributor who goes by the name NightStrike had offered to build a system which would track patches and help ensure that they are answered; think of it as a sort of virtual Andrew Morton. This system was never implemented, though, and it doesn't appear that it will be. The reason? The GCC Powers That Be were unwilling to give NightStrike access to the project's infrastructure without knowing something about the person behind the name. As described by Ian, the project's reasoning would seem to make some sense:
NightStrike, who still refuses to provide that information, was unimpressed:
Awesome or not, this episode highlights a real problem that we have in our community. We place a great deal of trust in the people whose code we use and we place an equal amount of trust in the people who work with the infrastructure around that code. The potential economic benefits of abusing that trust could be huge; it's surprising that we have seen so few cases of that happening so far. So it makes sense that a project would want to know who it is taking code from and who it is letting onto its systems. To do anything else looks negligent.
But what do we really know about these people? In many projects, all that is really required is to provide a name which looks moderately plausible. Debian goes a little further by asking prospective maintainers to submit a GPG key which has been signed by at least one established developer. But, in general, it is quite hard to establish that somebody out there on the net is who he or she claims to be. Much of what goes on now - turning away obvious pseudonyms but accepting names that look vaguely real, for example - could well be described as a sort of security theater. The fact that Ian thanked NightStrike for not making up a name says it all: the project is turning away contributors who are honest about their anonymity, but it can do little about those who lie.
Fixes for this problem will not be easy to come by. Attempts to impose identity structures on the net - as the US is currently trying to do - seem likely to create more problems than they solve, even if they can be made to work on a global scale. What we really need is processes which are robust in the presence of uncertain identity. Peer review of code is clearly one such process, as is better peer review of the development and distribution chain in general. Distributed version control systems can make repository tampering nearly impossible. And so on. But no solution is perfect, and these concerns will remain with us for some time. So we will have to continue to rely on feeling that, somehow, we know the people we are trusting our systems to.
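The tamper-resistance point is worth a brief illustration. Git, for example, names every object by a hash of its content; the sketch below (illustrative only, assuming OpenSSL's SHA-1 routines, and not taken from any project discussed here) reproduces git's blob-naming scheme. Because trees and commits refer to files only by these names, quietly altering a file in a repository changes its name and breaks the chain in a way that any clone can detect.

    /*
     * Why content-addressed storage resists tampering: git names a
     * blob SHA1("blob <size>\0" + content). Build with -lcrypto.
     */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    static void git_blob_id(const char *content,
                            char out[2 * SHA_DIGEST_LENGTH + 1])
    {
        unsigned char digest[SHA_DIGEST_LENGTH];
        char header[32];
        SHA_CTX ctx;

        /* The header's trailing NUL byte is part of the hashed data. */
        int hlen = snprintf(header, sizeof(header), "blob %zu",
                            strlen(content)) + 1;

        SHA1_Init(&ctx);
        SHA1_Update(&ctx, header, hlen);
        SHA1_Update(&ctx, content, strlen(content));
        SHA1_Final(digest, &ctx);

        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            sprintf(out + 2 * i, "%02x", digest[i]);
    }

    int main(void)
    {
        char id[2 * SHA_DIGEST_LENGTH + 1];

        git_blob_id("hello\n", id);
        /* Prints the same id as: echo 'hello' | git hash-object --stdin */
        printf("%s\n", id);
        return 0;
    }

A single changed byte anywhere in the content yields a completely different identifier, so the question of who a contributor really is can be separated, at least in part, from the question of whether the history they hand you is intact.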
Page editor: Jonathan Corbet