
Leading items

GNOME finalizes its speaker guidelines

By Jake Edge
June 30, 2010

Various free software communities, from distributions to individual software projects, have codes of conduct that are meant to govern the behavior of their members, at least in the projects' common areas. The intent is to reduce friction—flamewars and other unproductive communication—between project participants and to present a more welcoming face to newcomers. The GNOME project has a code of conduct that it has been discussing for some time; more recently it has also taken up guidelines for speakers at GNOME conferences.

Based on the discussion, it seems clear that Richard Stallman's keynote at the Gran Canaria Desktop Summit (GCDS) was one of the main reasons that some felt speaker guidelines were needed. His talk ended with a segment about "Emacs virgins" (or the "Saint IGNUcius comedy routine" as Stallman calls it) that was considered to be sexist and thus offensive by some in the community. Stallman has been reprising that particular bit for many years, and he, at least, doesn't see it as sexist. But it certainly offended some, and set off a firestorm of controversy last July.

Sometime after that, concerned folks contacted the GNOME Foundation board to see what could be done to try to prevent that kind of thing from happening again. As part of that effort, Matthew Garrett drafted guidelines, Murray Cumming "improved the wording", and Vincent Untz posted a link to the guidelines for discussion back in March.

That earlier draft was a bit different from the current, adopted version, but quite similar in spirit. In the discussion back in March, there was some wordsmithing about the "Dealing With Problems" section, but the draft was mostly well-received. Stallman, though, had a concern that the guidelines were overbroad:

The proposed speaker guidelines have a serious problem. Since they prohibit anything that makes someone uncomfortable, regardless of why, and since criticism of one's actions tends to make many people uncomfortable, the consequence is to prohibit serious criticism of any practice that is followed by someone in the audience.

Stallman noted that part of his GCDS talk was about the riskiness of using C#, but Sandy Armstrong was quick to point out that the guidelines were aimed at a different part of his presentation:

Richard, I'm fairly certain these guidelines are more about not making the audience uncomfortable when prominent speakers make sexist remarks, or remarks critical of religion, etc etc, especially when these remarks are completely off-topic.

I don't think they are meant to prevent you from making critical statements on relevant subject matter based on technical or legal arguments.

The discussion tailed off at that point, but was rekindled when Untz announced the final guidelines. Complaints about the guidelines seem to break down into two basic categories: that the rules are too vague, much as Stallman argued in March, or that they constitute "censorship" of speakers. While, strictly speaking, it may not be "censorship" (depending on which of the many definitions is used), it is certainly meant to steer speakers away from certain topics—those that might offend the audience or community.

Patryk Zawadzki doesn't like the vagueness, but thinks that there are other ways to address the problem:

It would be better if GNOME defined a precise set of rules (ie. "don't mention religion"). As for the hazy areas, common sense is a better judge than a set of written rules. If someone does something grossly inappropriate just don't invite them to further events.

The guidelines are somewhat vague, but that is done on purpose:

This is not a precise list of rules because the GNOME Foundation cannot predict all circumstances. These guidelines are not to be interpreted as prohibiting the serious raising of a bona fide technical, legal or ethical issue during a presentation.

There are six separate guidelines listed, some of which are, or should be, pretty obvious, for example: "Avoid things likely to offend some people. Your presentation content should be suitable for viewing by a wide range of audiences so avoid slides containing sexual imagery, violence or gratuitous swearing." Others, though, try to cut to the heart of the issue and, instead of proscribing conduct or topics, provide overarching advice:

A successful GNOME event involves everyone having fun. If someone in your audience is uncomfortable with something you've said, you're not doing your job. Apologize to them as soon as possible, and try to avoid the topic that triggered this for the rest of your presentation.

Stallman's earlier concern was addressed in the revision process, and he is firmly behind the guidelines: "If the community wants these guidelines, I support them." While the particulars of the GCDS keynote kept popping up in the discussion, it's clear that only a few really want to continue that particular debate. As Brian Cameron put it in a message worth reading in its entirety:

Over the past 2.5 years that I have been on the board, the board has been asked to help address a situation where someone has been offensive at least a half-dozen times. The Speaker Guidelines were created to help deal with this class of problems, not to deal with any particular person who may have been offensive at any particular time.

There is something of a sense of resignation to the fact that a policy like this is needed. But, as Cameron noted, the board has been criticized over how quickly and effectively it has responded to offensive presentations in the past. The guidelines provide a solid footing for any action the board may wish to take in the future; one possibility is explicitly mentioned: "Furthermore, if necessary, the GNOME Foundation might publicly distance itself from your opinions." Michael Meeks summed up the situation well:

But it does seem a little silly to need a policy at all. Ultimately, I guess we need to accept and live with the fact that ~everyone is unbalanced in some way, and has some or other noxiously offensive opinion, and perhaps provide some interactive booing & hissing / sharp questions from the audience at times ;-)

The final "Dealing With Problems" section is the part of the document that has drawn most of the comments this time around. Joanmarie Diggs is concerned that parts of that section are "neither 'positive' nor 'welcoming' to would-be speakers". In particular, the "disclaimer" paragraph, which is meant to head off anticipated complaints, may be reworked. There seems to be a consensus that changes are needed in that section, though it's not clear whether it should be expanded to fill in the gaps, reworked in more welcoming terms, or eliminated entirely. As Cameron said, the guidelines will likely be a "living document", and if there are problems with it, changes will be made.

While there is no specific enforcement language in the document ("Enforcement is subject to the judgment of the session overseer"), Garrett, at least, sees that as a possible hole. His original draft "suggested that event runners be able to stop presentations if they felt they were gratuitously in breach of the guidelines", but that was contentious and was removed. He is concerned that "guidelines mean little without enforcement", but does see the current language as a reasonable compromise. As he notes, there are those who will find the current watered-down version to be too intrusive, so something of a balance has been struck between the needs of speakers and their audiences.

There have been plenty of examples of presentations made at free software conferences that offended some subset of their audience. As it is unlikely that the speakers set out to do that, guidelines like these will be helpful to speakers by making them at least stop and think about their words and imagery. That is likely to lead to better presentations and happier audiences, which can only be a good thing.


Bilski: business as usual

By Jonathan Corbet
June 28, 2010
For many months now, anybody who pays attention to the US patent system has been anxiously awaiting the decision in the Bilski case. This case started as a lawsuit against the US patent office over its rejection of a business method patent. As this case worked its way toward the US Supreme Court, it came to be seen by many as a vehicle by which, just maybe, patents on business methods and software could be struck down. Much energy - and many amicus briefs - was directed toward that goal. As the last possible date for a ruling approached, the Free Software Foundation observed: "For Supreme Court watchers, following Bilski has been like following the World Cup. Productivity has fallen and ulcers have grown." Alas, it seems that the World Cup analogy extends to bad calls as well.

The ruling is out; Groklaw has it. With the concurring dissents, it runs to 71 pages. Reading the whole thing can lead to a much better understanding of the history of patent law in the US, but, for those concerned about possible changes to the patent system, the conclusion is far more succinct:

The patent application here can be rejected under our precedents on the unpatentability of abstract ideas. The Court, therefore, need not define further what constitutes a patentable "process"...

In other words, the court chose to rule on the value of one specific patent application. In the process, it possibly loosened the criteria slightly by saying that the "machine or transformation" test is not the sole guide to patentability. But the court went out of its way to avoid deciding - either way - whether business methods as a whole could be patented. The only real mention of software patents was a passing note that relying too heavily on "machine or transformation" could "create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals." But, even there, the court went out of its way to avoid having anything read into its words:

It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection. This Age puts the possibility of innovation in the hands of more people and raises new difficulties for the patent law. With ever more people trying to innovate and thus seeking patent protections for their inventions, the patent law faces a great challenge in striking the balance between protecting inventors and not granting monopolies over procedures that others would discover by independent, creative application of general principles. Nothing in this opinion should be read to take a position on where that balance ought to be struck.

This refusal to face the issue can only come as a disappointment to anybody who was hoping that the court would make substantial changes to the current application of patent law in the US. But it can't have come as any real surprise to people who are familiar with the current court. The current chief justice - John Roberts - has been very clear from the outset that he is not interested in the writing of expansive rulings. The court was asked to decide on one specific patent, so that's what it did. No nonsense about, say, laying down a clear interpretation of the law that would eliminate the need for a long series of court cases stretching into the future.

There are many who would argue that this is exactly how it should be, that it's up to the legislature, not the courts, to write the laws. Others would argue that the American precedent-based legal system guarantees that the courts will have a hand in the writing of law that people actually live by in any case, and that the court should have taken the opportunity to reduce the amount of uncertainty in this area. Certainly, it would have been nice if the court had thought a little more broadly; now it seems that the only alternatives are more court cases or an attempt to get the Congress to do something constructive, or, likely, both. One could argue that the decision to do nothing was a bad call indeed.

That said, while it would be nice if the courts would just fix the situation, it may well be the case that rewriting the law to explicitly restrict the range of patentable inventions would be the best solution. Getting the US Congress to do something about the patent system is a daunting prospect, but it's not beyond the realm of possibility. There is an increasing awareness that the patent system is costing businesses a lot of money and is impeding the competitiveness of the country as a whole. While there are powerful interests in favor of the status quo, there are others pushing for reform. It might just happen, someday.

Meanwhile, we're stuck with the same situation we had before this decision was handed down. Software patents remain a threat in the US and they are looking increasingly threatening elsewhere. We will have to continue fighting them in all of the same ways, including what is arguably the most effective strategy of all: make free software so useful and so ubiquitous that the industry has no choice but to continue to try to protect Linux and, hopefully, find a way to address the patent threat for real.


Two GCC stories

By Jonathan Corbet
June 30, 2010
The GNU Compiler Collection (GCC) project occupies a unique niche in the free software community. As Richard Stallman is fond of reminding us, much of what we run on our systems comes from the GNU project; much of that code, in turn, is owned by the Free Software Foundation. But most of the GNU code is relatively static; your editor wisely allowed himself to be talked out of the notion of adding an LWN weekly page dedicated to the ongoing development of GNU cat. GCC, though, is FSF-owned, is crucial infrastructure, and is under heavy ongoing development. As a result, it faces pressures that are seen in only a few places. This article will look at a couple of recent episodes, related to licensing and online identity, from the GCC community.

Documentation licensing

Back in May, GCC developer Mark Mitchell started a discussion on the topic of documentation. As the GCC folks look at documenting new infrastructure - plugin hooks, for example - they would like to be able to incorporate material from the GCC source directly into the manuals. It seems like an obvious idea; many projects use tools like Doxygen to just that end. In the GCC world, though, there is a problem: the GCC code carries the GPLv3 license, while the documents are released under the GNU Free Documentation License (GFDL). The GFDL is unpopular in many quarters, but the only thing that matters with regard to this discussion is that the GFDL and the GPL are not compatible with each other. So incorporating GPLv3-licensed code into a GFDL-licensed document and distributing the result would be a violation of the GPL.

After some further discussion, Mark was able to get a concession from Richard Stallman on this topic:

If Texinfo text is included in the .h files specifically to be copied into a manual, it is ok for you to copy that text into a manual and release the manual under the GFDL.

This is a severely limited permission in a number of ways. To begin with, it applies only to comments in header files; the use of more advanced tools to generate documentation from the source itself would still be a problem. But there is another issue: this permission only applies to FSF-owned code. As Mark put it:

However, if I changed the code, but did not regenerate the docs, and you then picked up my changes, possibly made more of your own, and then regenerated the docs, *you* would be in breach. (Because my changes are only available to you under the GPL; you do not have the right to relicense my changes under the GFDL.)

I find that consequence undesirable. In particular, what I did is OK in that scenario, but suddenly, now, you are a possibly unwitting violator.

Dave Korn described this situation as being "laden with unforeseen potential booby-traps" and suggested that it might be better to just give up on generating documentation from the code. The conversation faded away shortly thereafter; it may well be that this idea is truly dead.

One might poke fun at the FSF for turning a laudable goal (better documentation) into a complicated and potentially hazardous venture. But the real problem is that we as a community lack a copyleft license that works well for both code and text. About the only thing that even comes close to working is putting the documentation under the GPL as well, but the GPL is a poor fit for text. Nonetheless, it may be the best we have in cases where GPL-licensed code is to be incorporated into documentation.

Anonymous contributions

Ian Lance Taylor recently described a problem which will be familiar to many developers in growing projects:

The gcc project currently has a problem: when people who are not regular gcc developers send in a patch, those patches often get dropped. They get dropped because they do not get reviewed, and they get dropped because after review they do not get committed. This discourages new developers and it means that the gcc project does not move as fast as it could.

He also noted that a contributor who goes by the name NightStrike had offered to build a system which would track patches and help ensure that they are answered; think of it as a sort of virtual Andrew Morton. This system was never implemented, though, and it doesn't appear that it will be. The reason? The GCC Powers That Be were unwilling to give NightStrike access to the project's infrastructure without knowing something about the person behind the name. As described by Ian, the project's reasoning would seem to make some sense:

Giving somebody a shell account on gcc.gnu.org means giving them a very high level of trust. There are quite a few people who could translate a shell account on gcc.gnu.org into a number of difficult-to-detect attacks on the entire FLOSS infrastructure, including the kernel, the source code control systems, etc. It's hard for us to get to the required level of trust in somebody whom we have never met and who won't provide any real world contact information.

NightStrike, who still refuses to provide that information, was unimpressed:

What you guys need to realize is that if I did just make something up, there wouldn't be an issue. Your policies are vintage computer security circa 1963. That's what's so darn frustrating about this whole entire thing. You don't have any actual security, but yet you think I'm going to try to bring down everything GNU. That's just awesome.

Awesome or not, this episode highlights a real problem that we have in our community. We place a great deal of trust in the people whose code we use and we place an equal amount of trust in the people who work with the infrastructure around that code. The potential economic benefits of abusing that trust could be huge; it's surprising that we have seen so few cases of that happening so far. So it makes sense that a project would want to know who it is taking code from and who it is letting onto its systems. To do anything else looks negligent.

But what do we really know about these people? In many projects, all that is really required is to provide a name which looks moderately plausible. Debian goes a little further by asking prospective maintainers to submit a GPG key which has been signed by at least one established developer. But, in general, it is quite hard to establish that somebody out there on the net is who he or she claims to be. Much of what goes on now - turning away obvious pseudonyms but accepting names that look vaguely real, for example - could well be described as a sort of security theater. The fact that Ian thanked NightStrike for not making up a name says it all: the project is turning away contributors who are honest about their anonymity, but it can do little about those who lie.

Fixes for this problem will not be easy to come by. Attempts to impose identity structures on the net - as the US is currently trying to do - seem likely to create more problems than they solve, even if they can be made to work on a global scale. What we really need are processes which are robust in the presence of uncertain identity. Peer review of code is clearly one such process, as is better peer review of the development and distribution chain in general. Distributed version control systems can make repository tampering nearly impossible. And so on. But no solution is perfect, and these concerns will remain with us for some time. So we will have to continue to rely on feeling that, somehow, we know the people we are trusting our systems to.


Page editor: Jonathan Corbet


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds