LWN.net Weekly Edition for December 2, 2010
The dark side of open source conferences
In the past two decades, the open source community has evolved from an obscure grass-roots movement of wild-eyed crusaders, indigent grad students, and spare-time hobbyists to an unprecedented worldwide collaboration of full-time professionals and extraordinarily committed volunteers. We pride ourselves on our openness to new contributors, from any country or social background, and most often describe the power structure of open source projects as a meritocracy. Many of us believe that open source is inherently progressive - a way to level the playing field that operates across all social categories and class boundaries. And yet, here it is, the year 2010, and my female friends and I are still being insulted, harassed, and groped at open source conferences. Some people argue that if women really want to be involved in open source (or computing, or corporate management, etc.), they will put up with being stalked, leered at, and physically assaulted at conferences. But many of us argue instead that putting up extra barriers to the participation of women will only harm the open source community. We want more people in open source, not fewer.
In this article, we will first explore the current state of harassment in open source through interviews with ten women (including myself) about their experiences at open source conferences. Then we will describe some concrete, simple actions anyone can take to help reduce harassment for everyone, not just women (who have no monopoly on being the target of harassment). In particular, we'll discuss the recently released generic anti-harassment policy for open source conferences - basically, HOWTO Not Be A Jerk At A Conference.
Interviews
I interviewed nine women by email about their experiences at open source conferences. Harassment can and does happen to anyone of any gender identity or persuasion (just ask anyone who has been to middle school), but I know enough to write about only two kinds of harassment: the kind you get for being female, and the kind you get for using Emacs. I strongly encourage other people to write about their experiences being harassed at conferences, as harassment is an important problem no matter who it happens to.

The women I interviewed are: Cat Allman, FOSS community organizer at Google and event professional; Donna Benjamin, executive director of Creative Contingencies; Beth Lynn Eicher, an organizer of Ohio LinuxFest; Selena Deckelmann, major contributor to PostgreSQL and founder of Open Source Bridge; Mackenzie Morgan, Ubuntu developer; Deb Nicholson, an organizer of LibrePlanet and the FSF Women's Caucus; Noirin Shirley, Executive VP at the Apache Software Foundation; Sarah "Sez" Smith; and one anonymous respondent. I interviewed myself as the tenth. Nine of us have been harassed at one or more conferences. Of the nine of us who have served as conference organizers, eight have dealt with at least one incident of harassment at a conference we were running.
First, I asked each person about their first open source conference: Which one was it, what year, and what do you remember the most?
Cat Allman recalls the atmosphere at the 1998 forerunner of OSCON: It was "Joyous pandemonium: it was a gathering of the tribes, a religious festival, the morning of the first day of a Children's Crusade; so much passion and belief in one room." Donna Benjamin went to LCA 2006 "Intending to stay just for the miniconfs, but having such an awesome time, meeting awesome people and changing my flights to stay for the whole week." Sarah Smith loved the "grass roots feel" of LCA 2002. Selena Deckelmann says of LISA 1997: "I felt energized and enjoyed meeting new people - students and professionals - and talking about all the free software we all used to do our work."
My first open source conference was Ottawa Linux Symposium 2002. I was surprised by how nice the other kernel developers were in person. People I knew as unholy terrors on the mailing lists smiled and shook my hand and said, "How nice to finally meet you in person!" I ended up inviting ten or so people back to my hotel room to play the TCP/IP Drinking Game, including (to my delighted newbie surprise) Alan Cox and Rusty Russell.
Next, I asked about a time each person felt uncomfortable at a conference. Unfortunately, this was an easy question to answer for most. Anonymous says:
Mackenzie Morgan says,
Selena Deckelmann says:
For Selena, as for many women, it's a double-bind: "I have to be very aggressive when initiating conversation to get people to talk with me about technical subjects," but then her behavior is "incorrectly interpreted as flirting." Beth Lynn had the same experience: "I was at a conference where a man mistook my friendliness and technical interest as sexual attraction to him." Mackenzie says, "At one conference, it was implied that another engineer was only agreeing with me on a technical matter because I would pay him back with a sexual favor later."
Cat Allman says that computer conferences have come a long way in the last 25 years - but that they still have a long way to go. She says of a conference in the mid-1980s: "Male attendees would walk up to you - even if you were in a group - and ask 'How much for a (sex act)?' You tried hard not to get in an elevator in the convention center alone."

Now, women hired to wear company polo shirts and g-strings (true story) are rare outside of Las Vegas, but the problem of a sexualized environment remains: Deb Nicholson says the days of "eye candy" are far from over. She says of an event held within the last two years, "When strippers were hired to mix with people at the Saturday night event everyone attended, that made everyone uncomfortable."
Three of the ten women reported being physically assaulted at a conference. Mackenzie says, "I was grabbed from behind and kissed by a stranger without permission." Later she found out that this person assaulted another woman at the conference. Noirin Shirley says that after a man grabbed and kissed her at a conference after-party, she told him she wasn't interested, and "He responded by jamming his hand into my underwear and fumbling." At the Linux Storage and Filesystems Workshop 2007, I organized a group outing to a pub, only to have one of the invited workshop attendees grab my ass while I was having a completely normal conversation. (I told him to never touch me again, warned my friends about him, and refused to speak to him again.)
Next, I asked about how people decide which conferences to attend. Besides the obvious factors - time, location, travel funding, speaker status, who is attending - reputation of conference organizers and attendee behavior came out as a major factor.
Beth Lynn says, "If the conference has a reputation for encouraging unprofessional behaviour such as a sexual environment, I will not go. For this reason I am not attending Penguicon any more." Cat says, "If I think an event organizer turns a blind eye to questionable behavior I'll pass on the event." Noirin Shirley says, "It's word-of-mouth and knowing some of the organizers, knowing that they're not going to put on an event where bad behaviour is tolerated."

I base my decision on three major elements: the reputation of the conference organizers, the word-of-mouth from my friends, and my past experience at that conference (if any). For example, anything run by the Linux Foundation will be extremely professional, respectful of women, and rank high on the getting-stuff-done factor.
I only stopped attending one open source conference altogether because of consistently bad behavior of both attendees and the organizers: the Ottawa Linux Symposium. This was a difficult decision for me because, at the time, attending OLS was almost a requirement for any serious Linux kernel developer, since that's where a lot of the face-to-face design work and discussion got done. But every year I attended, I was insulted, lewdly propositioned, or groped by several people, from newbies to top Linux kernel developers. This happened even though I was a speaker, BOF organizer, or program committee member for five years. The organizers appeared to condone the behavior by doing things like giving a wink-wink nudge-nudge review of conference shenanigans before the keynote, and "playfully" nagging attendees not to bring girlfriends or women they picked up on the street to the conference parties. (Message: OLS is for men, women go home.) I complained to the conference organizers but got no response.
After OLS 2006, I decided that I cared about being treated respectfully more than I cared about advancing my career, and stopped attending OLS. Luckily for me, the Linux Plumbers Conference started soon after, and I volunteered to help get Plumbers off the ground, in large part because the organizers were clearly committed to creating a professional, welcoming, get-things-done atmosphere. To be fair, it's been a few years since these incidents, and the OLS organizers have gone their separate ways, so I wouldn't be surprised if they have had a change of heart about what makes a good conference.
Changing the atmosphere
So how do we go about changing the culture of open source conferences so that we don't chase off the very people we want to attract, both women and men? Judging from the past ten years of my experience, harassment at open source conferences is not going to stop all by itself. We have to take action.

A good first step is for conferences and communities to adopt and enforce explicit policies or codes of conduct that spell out what kind of behavior won't be tolerated and what response it will get. Much in the way that people don't stop speeding unless they get speeding tickets, or that murder is totally unacceptable to most people but laws against it still exist, harassment at conferences may seem obviously wrong, but stopping it will require written rules and enforceable penalties.
To get things started, I helped write a customizable, general-purpose anti-harassment policy for open source conferences. For online communities, the Ubuntu code of conduct is a good place to start.
If you want to do something personally to help stop harassment, you have a few options. You can email the organizers of conferences you like to attend asking if they have a policy for dealing with harassment, and suggesting this one as an example. (You can find a list of conferences and their contact email addresses in this blog post about the policy.) If you are a conference organizer, you can skip the middleman and adopt the policy yourself. If you have the Internet, you can write a blog entry and post on your favorite short-message site about the policy. And, finally, if you see harassment happening or hear people bragging about it, you can speak up and stop it yourself.
Donna Benjamin says, "We want harassment not to happen in the first place, because dealing with it is so deeply unpleasant for all concerned. But with silence and inaction, women just stop coming to events, and harassers keep harassing." What Donna is suggesting is something we can all work towards: a time when policies like this are no longer needed. I'm going to work for that time. Will you join me?
Lessons from a forged commit
The recent uproar in the X.org community—due to a forged commit to the radeonhd tree—seems to have largely played itself out. The perpetrators stepped forward, admitted to what they had done, and voluntarily removed their root privileges on the freedesktop.org machines. But some other things came out of the "discussion", including the need for better, or at least different, mechanisms for handling problems like the repository vandalism. There is also a perception that it is difficult for "outsiders" to get their code upstream into X.org.
The flash point was a commit made to a branch of the radeonhd tree on November 2, which was noticed by Luc Verhaegen on November 23. The commit itself was pretty obviously bogus, but it caused some consternation about the integrity of the X.org repositories—at least until Adam Jackson took responsibility and disabled his root access on the freedesktop.org machines that house the repositories. Daniel Stone also noted his involvement and disabled his root access as well.
While there are different interpretations, the commit seems to pretty clearly be a poorly thought-out prank. Both Jackson and Stone have called it "indefensible", and it is. They also stated that it was an isolated incident—one that can't be repeated now that their root access is gone—so the belief is that the repository as a whole is not suspect. That would seem to close the matter, but there are some other aspects to consider.
There was some unhappiness about how Verhaegen notified the developers about the commit, with Dave Airlie complaining that the email was sent to "2 mailing lists consisting of 2-300 people who could do nothing about it". There were suggestions that perhaps alerting the freedesktop.org administrators privately might have been better. In this case, though, two of those administrators were the folks who made the bogus commit, so that may not have been the right course of action, as Verhaegen points out.
Eirik Byrkjeflot Anonsen looks at the bigger picture, noting that he is "not particularly worried about this incident, as anyone with true 'evil intent' would not have advertised their actions like this". But he is concerned about "evil commits" and how well the project can detect them and defend against them. In addition, he would like to see some kind of policy come about to make it clear what should be done when the project has been breached in some way.
Several people mentioned Git as providing a means to detect repository tampering. Peter Hutterer pointed to "active maintainership" as a safeguard as well. When a maintainer tries to push code to a repository or branch that they maintain, it should be fairly obvious that something has gone awry when that fails because their repository is out of sync. Airlie concurred: "git + humans using the repos should catch most things".
It was generally agreed that there are no real policies governing how to handle incidents like this, though, and some effort may be made in that area. Airlie would like to see "escalation procedures in place that are less public", but Verhaegen disagrees: "more visibility is what is needed, not less!". Most seem to agree with Verhaegen's actions in posting about the bogus commit to the list, though the thread quickly degenerated into what Hutterer called "the usual fights"—it is clear that there is some bad blood between Verhaegen and some other X developers from previous disagreements that spilled over into the thread.
Another issue that came up was Verhaegen's contention that hardware makers and others outside of the core X development community find it somewhere between hard and impossible to get their changes upstream.
Matt Dew moved that particular statement to a new thread, presumably to disentangle it from the arguments raging in the original, and asked companies to explain their reasons for not contributing.
Verhaegen didn't think it likely that companies would respond with their reasons for not contributing, but that didn't stop others from considering the issue. Matthew Garrett sees it, at least partially, as a consequence of the modular development model. Hardware vendors can keep their drivers out-of-tree, but those drivers will still work with the X server and the rest of X.org.
But there is more to it than that, according to Alan Cox. X development has been "rather closed" at times, which has served to reduce the number of developers, but it is also quite complex:
Maintaining the old Voodoo2 driver was a bit like minor kernel hacking. I can't even imagine how KeithP [Packard] fits everything he needs to know for the intel drivers into his head.
Garrett also noted a lack of documentation that creates a fairly large hurdle: "I found X development far more intimidating than getting involved in the kernel". Alan Coopersmith agreed, noting that documentation is an area that is currently being addressed through several different efforts, including setting aside "a few days before the 2011 X Developer Conference for a 'book sprint' to produce documentation for developers". While there is a fair amount of information available in the X.org 1.9 tree, it still needs work in bringing it up to date, which is something of a never-ending battle.
While projects are loath to air their dirty laundry in public—that was certainly part of the complaint about Verhaegen's email—there are lessons in this incident for other projects. X.org is hardly the only development community with administrators who occasionally show a lack of good judgement. Nor is it the only community with a perceived, or actual, barrier to participation. The embarrassment suffered by the project may well be overshadowed by the ability of other projects to learn from it. That is one of the advantages of an open development model.
The 2010 Linux and free software timeline - Q1
Here is LWN's thirteenth annual timeline of significant events in the Linux and free software world for the year.
In what is becoming a fairly standard pattern, 2010 brought various patent lawsuits, company acquisitions, new initiatives, and new projects. It also brought new releases of the software that we use on a daily basis. There were licensing squabbles and development direction disagreements—all things that we have come to expect from the Linux and free software world over a year's time. Also as expected, though, were the improvements in the kernel, applications, distributions, and so on that make up that world. Linux and free software just keep chugging along, and we are very happy to be able to keep on reporting about it.
Like last year, we will be breaking this up into quarters, and this is our report on January-March 2010. Over the next month or so, we will be putting out timelines of the other three quarters of the year.
This is version 0.8 of the 2010 timeline. There are almost certainly some errors or omissions; if you find any, please send them to timeline@lwn.net.
LWN subscribers have paid for the development of this timeline, along with previous timelines and the weekly editions. If you like what you see here, or elsewhere on the site, please consider subscribing to LWN.
For those with a nostalgic bent, our timeline index page has links to the previous twelve timelines and some other retrospective articles going all the way back to 1998.
January
SpamAssassin suffers from a Y2K10 problem. It increased the spam scores of email with 2010 dates, which suddenly became much more prevalent (SA developer blog post, LWN coverage).
Ted Ts'o leaves the Linux Foundation for Google. His two-year fellowship as LF CTO, on loan from IBM, was completed in December (news coverage).
The Linux laptop orchestra (L2Ork) debuts (news article).
linux.conf.au 2010 is held in Wellington, New Zealand. This is the tenth linux.conf.au (under that name, anyway), and the second held in New Zealand. LWN had extensive coverage from the conference (Overview, Community destruction, Package copyrights and license management, Filesystems, GCC static analysis, HackAbility, and Graphics drivers).
Canonical announces a switch to Yahoo as the default search provider for Ubuntu, starting with Lucid Lynx (10.04). This change is reverted in April and Lucid ships with a Google default (announcement, reversion).
SpamAssassin 3.3.0 is released. This is the first major release of the spam filtering solution since 2007 (announcement).
OpenStreetMap data is used in Haiti earthquake relief efforts (Michael Tiemann's blog posting).
Firefox 3.6 is released (announcement).
Red Hat launches opensource.com to explore applying open source principles to other fields (About opensource.com).
The European Commission clears Oracle's purchase of Sun, paving the way for the acquisition to close (announcement).
Mozilla releases Mobile Firefox (aka "Fennec") for Maemo (announcement).
February
Symbian opens up its code, though it is a short-lived open source "project" as its web sites will close in December (announcement, web site shutdown announcement -- these links may not function after December 17).
Facebook releases its HipHop PHP translator, which translates PHP to highly optimized C++ (announcement).
Matt Asay becomes Canonical Chief Operating Officer (announcement).
FOSDEM '10 held in Brussels, Belgium (LWN coverage).
Maemo and Moblin merge to become MeeGo. Nokia and Intel merge their mobile Linux initiatives under Linux Foundation stewardship (LWN article).
Fedora implements the "No Frozen Rawhide" plan, which will stop blocking progress of rawhide in preparation for releases (proposal, progress).
OpenOffice.org 3.2 is released (announcement).
In fact, I'd say that the various forks of Linux, and how the Linux maintainers have roped back in some forks (and let others go on their merry way) is what made the Linux kernel great and not just a BSD rehash.
-- Chris DiBona
Southern California Linux Expo (SCALE) 8x is held in Los Angeles, with several LWN authors (and an editor) in attendance (Moving the needle, Legal issues, Codeplex foundation and open source, Relational vs. non-relational, Color management, Ubuntu kernel development, Gnash, and 10,000,001 penguins).
LWN editor Forrest Cook departs (announcement).
Linux 2.6.33 is released (announcement, KernelNewbies summary).
The Apache web server turns 15 (announcement).
The Java Model Railroad Interface (JMRI) case (aka Jacobsen vs. Katzer) is settled on terms favorable for free software (Andy Updegrove analysis, previous LWN coverage [1, 2]).
March
The Ubuntu One Music Store is added for Ubuntu 10.04 as a way for Ubuntu users to buy MP3s while supporting Canonical (LWN coverage).
Apple sues HTC for patent infringement targeting Android (LWN article).
Elliott Associates makes an unsolicited offer to buy Novell, which starts the process that led to an accepted bid by Attachmate (with Elliott's assistance) in November (press release).
-- Tom "spot" Callaway on Chromium's bundled libraries
Ubuntu updates its branding, moving from its brown theme to a purple-ish look, with updates to its logos as well (announcement). The branding change also impacts the location of window buttons in the default GNOME interface, which sets off something of a firestorm (LWN article).
Mozilla starts the process of updating the Mozilla Public License (LWN blurb).
Open Clip Art Library releases version 2.0 of its collection of SVG clip art graphics (announcement).
Sony removes "install other OS" support from Playstation 3 firmware "upgrade", so users can no longer run Linux on PS3s (LWN article).
Novell wins ownership of the Unix copyrights, in yet another defeat for SCO in its attack on Linux (LWN article).
PHP 6 development process restarts after several attempts to complete the release, which was derailed mostly by Unicode issues (LWN coverage).
GNOME 2.30 is released (announcement).
The first MeeGo code release is made (announcement).
Security
Using HTTP POST for denial of service
While it isn't altogether new, the implications of a recently reported denial of service (DoS) attack method against web servers are somewhat eye-opening. The attack itself is similar in many ways to the Slowloris technique. Depending on how the web server is configured, a DoS against it may only require a fairly small number of slow-moving connections. In addition, since it exploits a weakness in HTTP, it is difficult to work around.
Open Web Application Security Project (OWASP) researchers Wong Onn Chee and Tom Brennan presented [PDF] on the flaw at the AppSec DC 2010 conference in mid-November. As they point out, most DoS (and distributed DoS or DDoS) attacks target lower layers in the network stack, typically the transport layer, which is layer four in the OSI model. More recent DoS attacks have moved up the stack, with things like Slowloris attacking layer seven, the application layer (e.g. HTTP, FTP, SMTP).
ISPs and internet carriers have gotten much better at thwarting attacks at the transport layer, but, for a number of reasons, attacks against applications are more difficult to deal with. It can be difficult to distinguish legitimate application traffic from a DoS attack, and applications tend to have a larger footprint and require more server resources. That means that fewer attacking resources may be needed to mount a DoS at the application layer than at the transport layer.
Previously, Slowloris and other techniques used the HTTP GET method in conjunction with sending the HTTP headers very slowly to monopolize a connection to the web server. If enough of those "clients" were running, they would consume all of the sockets or other resources at the server end, thus denying any other clients (i.e. legitimate traffic) access to the server. Since that time, various workarounds have been found to reduce the problem from slow HTTP GETs in the Apache web server. Interestingly, Microsoft's IIS web server uses a different mechanism to handle incoming requests and was not vulnerable to GET-based DoS.
What Chee and his team found in September 2009 was that the HTTP POST method could also be used to perform a DoS. By sending a very large value in the Content-Length header, then very slowly sending that amount of data, one byte at a time, a client can consume a connection on the server for a very long time. In addition, because all of the headers have been sent, the mechanism that allowed IIS to avoid the GET-based attack was bypassed, so IIS and Apache are both vulnerable to these POST-based attacks.
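To make the attack concrete, here is a sketch of what such a request might look like on the wire; the host name and form path are invented for illustration:

    POST /contact-form HTTP/1.1
    Host: victim.example.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 1000000

    a

The headers are complete and well-formed, so the server holds the connection open, waiting patiently for the remaining 999,999 bytes of the body to trickle in one byte at a time.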
So, Apache and IIS web servers that accept any kind of form—that is to say, nearly all of them—are vulnerable. In addition, the attacks don't even have to reference a valid form on the server, as most servers don't check until after the request is received. The number of attacker connections required to shut down a server varies with its configuration. The presentation mentions 20,000 connections for IIS and fewer for Apache because of client or thread limits in httpd.conf. An Acunetix blog post notes that 256 connections can be enough for Apache 1.3 in its default configuration. In any case, neither 256 nor 20,000 is a significant hurdle for an interested attacker.
Both Apache and Microsoft were contacted about this problem, but neither plans to "fix" it, because it is inherent in the protocol. HTTP is meant to allow for slow and intermittent connections, which don't look very different from this kind of attack. Apache has two potential workarounds: the experimental mod_reqtimeout, which allows for timeouts on headers and request bodies, or the LimitRequestBody directive, which allows a maximum request size to be set (by default it is 2GB). Those may provide band-aids but there will be collateral damage as folks with slower, dodgier connections—perhaps from a mobile device—may suffer. It seems likely that most servers could live with maximum request sizes significantly smaller than 2GB, however.
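As an example, an Apache configuration along the following lines could apply both workarounds. This is a minimal sketch, assuming mod_reqtimeout has been built and loaded; the timeout and size values are illustrative, not recommendations:

    <IfModule mod_reqtimeout.c>
        # Allow 20 seconds (extendable to 40) to receive the headers, and
        # require at least 500 bytes/second while receiving the request body.
        RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
    </IfModule>

    # Reject request bodies larger than 1MB.
    LimitRequestBody 1048576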
Chee and Brennan also report that botnet operators have started incorporating application layer DDoS into their bag of tricks, so we will likely be seeing more of these kinds of attacks. It may mostly be GET-based attacks for the moment, but the botnet "herders" will eventually get around to incorporating POST-based attacks as well. The researchers predict that application layer DDoS will supplant transport layer DDoS sometime in the next ten years.
DoS attacks are probably less of a problem for small-time web site operators, as the likely targets are deep-pocketed online retailers and the like. Criminals often target those sites at particularly important points in the calendar, like when holiday shoppers are likely to visit. It isn't too difficult to extract a large payment from such a retailer when it is faced with losing most of its sales at such a critical time. Those kinds of sites should probably be gearing up—hopefully they have already geared up—for those kinds of attacks over the next month and beyond.
Brief items
Security quote of the week
Did no-one think of merging the capabilities checks and the security subsystem callbacks in some easy-to-use manner, which makes the default security policy apparent at first sight?
Savannah.gnu.org compromised
The Savannah front page currently reads: "There's been a SQL injection leading to leaking of encrypted account passwords, some of them discovered by brute-force attack, leading in turn to project membership access." The site is being restored to its state as of November 23; changes committed after that date will need to be redone. (Thanks to Giacomo Catenazzi).
New vulnerabilities
condor: authentication bypass
Package(s): condor
CVE #(s): CVE-2010-4179
Created: December 1, 2010    Updated: December 1, 2010
Description: The Condor management tool trusted its "trusted channel" a bit too much, enabling an attacker to submit jobs as any user (except root).
dracut: insecure /dev/systty permissions
Package(s): dracut
CVE #(s): CVE-2010-4176
Created: November 25, 2010    Updated: December 3, 2010
Description: From the Fedora advisory: It was discovered that the /dev/systty device file created by dracut-generated initramfs scripts used insecure file permissions. This could possibly allow a local user to snoop on another user's terminal.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2010-3448 CVE-2010-3848 CVE-2010-3849 CVE-2010-3850 CVE-2010-3858 CVE-2010-3859 CVE-2010-3873 CVE-2010-3874 CVE-2010-3875 CVE-2010-3876 CVE-2010-3877 CVE-2010-3880 CVE-2010-4072 CVE-2010-4073 CVE-2010-4074 CVE-2010-4078 CVE-2010-4079 CVE-2010-4080 CVE-2010-4081 CVE-2010-4083 CVE-2010-4164
Created: November 29, 2010    Updated: August 9, 2011
Description: From the Debian advisory:

Dan Jacobson reported an issue in the thinkpad-acpi driver. On certain Thinkpad systems, local users can cause a denial of service (X.org crash) by reading /proc/acpi/ibm/video. (CVE-2010-3448)

Nelson Elhage discovered an issue in the Econet protocol. Local users can cause a stack overflow condition with large msg->msgiovlen values that can result in a denial of service or privilege escalation. (CVE-2010-3848)

Nelson Elhage discovered an issue in the Econet protocol. Local users can cause a denial of service (oops) if a NULL remote addr value is passed as a parameter to sendmsg(). (CVE-2010-3849)

Nelson Elhage discovered an issue in the Econet protocol. Local users can assign econet addresses to arbitrary interfaces due to a missing capabilities check. (CVE-2010-3850)

Brad Spengler reported an issue in the setup_arg_pages() function. Due to a bounds-checking failure, local users can create a denial of service (kernel oops). (CVE-2010-3858)

Dan Rosenberg reported an issue in the TIPC protocol. When the tipc module is loaded, local users can gain elevated privileges via the sendmsg() system call. (CVE-2010-3859)

Dan Rosenberg reported an issue in the X.25 network protocol. Local users can cause heap corruption, resulting in a denial of service (kernel panic). (CVE-2010-3873)

Dan Rosenberg discovered an issue in the Control Area Network (CAN) subsystem on 64-bit systems. Local users may be able to cause a denial of service (heap corruption). (CVE-2010-3874)

Vasiliy Kulikov discovered an issue in the AX.25 protocol. Local users can obtain the contents of sensitive kernel memory. (CVE-2010-3875)

Vasiliy Kulikov discovered an issue in the Packet protocol. Local users can obtain the contents of sensitive kernel memory. (CVE-2010-3876)

Vasiliy Kulikov discovered an issue in the TIPC protocol. Local users can obtain the contents of sensitive kernel memory. (CVE-2010-3877)

Nelson Elhage discovered an issue in the INET_DIAG subsystem. Local users can cause the kernel to execute unaudited INET_DIAG bytecode, resulting in a denial of service. (CVE-2010-3880)

Kees Cook discovered an issue in the System V shared memory subsystem. Local users can obtain the contents of sensitive kernel memory. (CVE-2010-4072)

Dan Rosenberg discovered an issue in the System V shared memory subsystem. Local users on 64-bit systems can obtain the contents of sensitive kernel memory via the 32-bit compatible semctl() system call. (CVE-2010-4073)

Dan Rosenberg reported issues in the mos7720 and mos7840 drivers for USB serial converter devices. Local users with access to these devices can obtain the contents of sensitive kernel memory. (CVE-2010-4074)

Dan Rosenberg reported an issue in the framebuffer driver for SiS graphics chipsets (sisfb). Local users with access to the framebuffer device can obtain the contents of sensitive kernel memory via the FBIOGET_VBLANK ioctl. (CVE-2010-4078)

Dan Rosenberg reported an issue in the ivtvfb driver used for the Hauppauge PVR-350 card. Local users with access to the framebuffer device can obtain the contents of sensitive kernel memory via the FBIOGET_VBLANK ioctl. (CVE-2010-4079)

Dan Rosenberg discovered an issue in the ALSA driver for RME Hammerfall DSP audio devices. Local users with access to the audio device can obtain the contents of sensitive kernel memory via the SNDRV_HDSP_IOCTL_GET_CONFIG_INFO ioctl. (CVE-2010-4080)

Dan Rosenberg discovered an issue in the ALSA driver for RME Hammerfall DSP MADI audio devices. Local users with access to the audio device can obtain the contents of sensitive kernel memory via the SNDRV_HDSP_IOCTL_GET_CONFIG_INFO ioctl. (CVE-2010-4081)

Dan Rosenberg discovered an issue in the semctl system call. Local users can obtain the contents of sensitive kernel memory through usage of the semid_ds structure. (CVE-2010-4083)

Dan Rosenberg discovered an issue in the X.25 network protocol. Remote users can achieve a denial of service (infinite loop) by taking advantage of an integer underflow in the facility parsing code. (CVE-2010-4164)
krb5: authentication bypass
Package(s): krb5
CVE #(s): CVE-2010-1323 CVE-2010-1324 CVE-2010-4020
Created: December 1, 2010    Updated: December 24, 2010
Description: Due to a series of checksum flaws, Kerberos protocol packets can be crafted by a remote attacker in a way which could bypass authentication mechanisms in some configurations.
krb5: privilege escalation
Package(s): krb5
CVE #(s): CVE-2010-4021
Created: December 1, 2010    Updated: December 24, 2010
Description: From the Mandriva advisory: An authenticated remote attacker that controls a legitimate service principal could obtain a valid service ticket to itself containing valid KDC-generated authorization data for a client whose TGS-REQ it has intercepted. The attacker could then use this ticket for S4U2Proxy to impersonate the targeted client even if the client never authenticated to the subverted service. The vulnerable configuration is believed to be rare.
mono: privilege escalation
Package(s): mono
CVE #(s): CVE-2010-4159
Created: November 25, 2010    Updated: May 3, 2011
Description: From the Mandriva advisory: Untrusted search path vulnerability in metadata/loader.c in Mono 2.8 and earlier allows local users to gain privileges via a Trojan horse shared library in the current working directory (CVE-2010-4159).
openconnect: information leak
Package(s): openconnect
CVE #(s): CVE-2010-3902
Created: November 30, 2010    Updated: December 1, 2010
Description: From the Red Hat bugzilla: OpenConnect before 2.26 places the webvpn cookie value in the debugging output, which might allow remote attackers to obtain sensitive information by reading this output, as demonstrated by output posted to the public openconnect-devel mailing list.
openjdk-6: information leak
Package(s): openjdk-6
CVE #(s): CVE-2010-3860
Created: November 30, 2010    Updated: April 15, 2011
Description: From the Ubuntu advisory: It was discovered that certain system property information was being leaked, which could allow an attacker to obtain sensitive information.
openslp: denial of service
Package(s): openslp
CVE #(s): CVE-2010-3609
Created: November 30, 2010    Updated: May 28, 2015
Description: From the openSUSE advisory: The openslp daemon could run into an endless loop when receiving specially crafted packets.
php: insufficiently random number generation
Package(s): php
CVE #(s): CVE-2010-1128
Created: November 30, 2010    Updated: December 2, 2010
Description: From the Red Hat advisory: It was discovered that the PHP lcg_value() function used insufficient entropy to seed the pseudo-random number generator. A remote attacker could possibly use this flaw to predict values returned by the function, which are used to generate session identifiers by default. This update changes the function's implementation to use more entropy during seeding.
phpmyadmin: cross-site scripting
Package(s): phpmyadmin
CVE #(s): CVE-2010-4329
Created: November 30, 2010    Updated: December 31, 2010
Description: From the Mandriva advisory: It was possible to conduct an XSS attack using a spoofed request on the db search script.
wireshark: code execution
Package(s): wireshark
CVE #(s): CVE-2010-4300
Created: November 29, 2010    Updated: April 19, 2011
Description: From the Mandriva advisory: Heap-based buffer overflow in the dissect_ldss_transfer function (epan/dissectors/packet-ldss.c) in the LDSS dissector in Wireshark 1.2.0 through 1.2.12 and 1.4.0 through 1.4.1 allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via an LDSS packet with a long digest line that triggers memory corruption.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.37-rc4, released on November 29. "As suspected, spending most of the week in Japan made some kernel developers break out in gleeful shouts of 'let's send Linus patches when he is jet-lagged, and see if we can confuse him even more than usual'. As a result -rc4 has about twice the commits that -rc3 had." It's still mostly fixes, though; see the announcement for the short-form changelog, or the full changelog for all the details.
Stable updates: there have been no stable updates released over the last week.
Quotes of the week
The Linux Foundation's kernel development report
The Linux Foundation has announced the annual update of its report on kernel development. There is little there that will be new to LWN readers, but, in the humble opinion of your editor (who is one of the authors), it is a good summary of the situation. "This paper documents a bit less frenzied development than the last one, which was expected given all the new features of 2.6.30 (ext4, ftrace, btrfs, perf etc) as well as the peak of merged drivers from Linux stable tree. Regardless, this report continues to paint a picture of a very strong and vibrant development community."
Restricting /proc/kallsyms - again
During the 2.6.37 merge window, a change was merged which made /proc/kallsyms unreadable by unprivileged users by default. That change was subsequently reverted when it was found to break the bootstrap process on an older Ubuntu release. A new form of the patch has returned which fixes that problem - but it still may not be merged.

The new patch is quite simple: if the process reading the file lacks the CAP_SYS_ADMIN capability, /proc/kallsyms appears to be an empty file. It has been confirmed that this version of the patch no longer breaks user space. But there were complaints anyway: rather than restricting access to the file with the usual access control bits, this patch encodes a policy (CAP_SYS_ADMIN) into the kernel where it cannot be changed. That rubs a number of people the wrong way, so this patch probably will not go in either. Instead, concerned administrators (or distributors) will need to simply change the permissions on the file at boot time.
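For example, a boot-time script could do something like the following (a sketch; the exact mode is the administrator's choice):

    # Make /proc/kallsyms readable by root only
    chmod 0400 /proc/kallsyms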
Structure holes and information leaks
Many of the kernel security vulnerabilities reported are information leaks - passing the contents of uninitialized memory back to user space. These leaks are not normally seen to be severe problems, but the potential for trouble always exists. An attacker may be able to find a sequence of operations which puts useful information (a cryptographic key, perhaps) into a place where the kernel will leak it. So information leaks should be avoided, and they are routinely fixed when they are found.

Many information leaks are caused by uninitialized structure members. It can be easy to forget to assign to all members in all paths, or, possibly, the form of the structure might change over time. One way to avoid that possibility is to use something like memset() to clear the entire structure at the outset. Kernel code uses memset() in many places, but there are places where that is seen as an expensive and unnecessary call; why clear a bunch of memory which will be assigned to anyway?
One way of combining operations is with a structure initialization like:
    struct foo {
        int bar, baz;
    } f = {
        .bar = 1,
    };
In this case, the baz field will be implicitly set to zero. This kind of declaration should ensure that there will be no information leaks involving this structure. Or maybe not. Consider this structure instead:
    struct holy_foo {
        short bar;
        long baz;
    };
On a 32-bit system, this structure likely contains a two-byte hole between the two members. It turns out that the C standard does not require the compiler to initialize holes; it also turns out that GCC duly leaves them uninitialized. So, unless one knows that a given structure cannot have any holes on any relevant architecture, structure initializations are not a reliable way of avoiding uninitialized data.
There has been some talk of asking the GCC developers to change their behavior and initialize holes, but, as Andrew Morton pointed out, that would not help for at least the next five years, given that older compilers would still be in use. So it seems that there is no real alternative to memset() when initializing structures which will be passed to user space.
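As a minimal illustration of the difference (assuming a 32-bit ABI where long is four-byte aligned; the structure and function names come from the example above, not from kernel code):

    #include <string.h>

    struct holy_foo {
        short bar;      /* two bytes, followed by a two-byte hole */
        long baz;
    };

    void init_leaky(struct holy_foo *f)
    {
        /* Members are zeroed, but the compiler need not clear the hole,
           so stale stack or heap data may survive in the padding. */
        *f = (struct holy_foo){ .bar = 1 };
    }

    void init_safely(struct holy_foo *f)
    {
        /* memset() clears members and padding alike. */
        memset(f, 0, sizeof(*f));
        f->bar = 1;
    }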
Kernel development news
Conditional tracepoints
Tracepoints are small hooks placed into kernel code; when they are enabled, they can generate event information which can be consumed through the ftrace or perf interfaces. These tracepoints are defined via the decidedly gnarly TRACE_EVENT() macro which Steven Rostedt nicely described in detail for LWN earlier this year. As kernel developers add more tracepoints to the kernel, they are occasionally finding things which can be improved. One of those seems relatively simple: what if a tracepoint should only fire some of the time?

Arjan van de Ven recently posted a patch adding a tracepoint to __mark_inode_dirty(), a function called deep within the virtual filesystem layer to, surprisingly, mark an inode as being dirty. Arjan's purpose is to figure out which processes are causing files to have dirty contents; that will allow tools like PowerTop to tell laptop users which process is causing their disk to spin up. The only problem is that some calls to __mark_inode_dirty() are essentially noise from this point of view; they happen, for example, when an inode is first created or is being freed. Tracing those calls could create a stream of useless events which would have to be filtered out by PowerTop, causing PowerTop itself to require more power. So it is preferable to avoid creating those events in the first place if possible.
For that reason, Arjan made the call to the tracepoint be conditional:
    if (flags & (I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES))
        trace_writeback_inode_dirty(inode, flags);
This code works in that it causes the tracepoint to be "hit" only when an application has actually done something to dirty an inode.
The VFS developers seem to have no objection to this tracepoint being added; the resulting information can be useful. But they didn't like the conditional nature of it. Part of the problem is that tracepoints are supposed to keep a low profile; developers want to be able to ignore them most of the time. Expanding a tracepoint to two lines and an if statement rather defeats that goal. But tracepoints are also supposed to not affect execution time. They have been carefully coded to impose almost no overhead when they are not enabled (which is most of the time); with techniques like jump label, that overhead can be reduced even further. But that if statement, being outside of the tracepoint altogether, will always be executed regardless of whether the tracepoint is currently enabled or not. Multiply that test-and-jump across millions of calls to __mark_inode_dirty() on each of millions of machines, and the extra CPU cycles start to add up.
So it was asked: could this test be moved into the tracepoint itself? One approach might be to put the test into the TP_fast_assign() portion of the tracepoint, which copies the tracepoint data into the tracing ring buffer. The problem with that idea is that, by that time, the tracepoint has already fired, space has been allocated in the ring buffer, etc. There is currently no mechanism to cancel a tracepoint hit partway through. There has, in the past, been talk of adding some sort of "never mind" operation which could be invoked within TP_fast_assign(), but that idea seems less than entirely elegant.
What might happen, instead, is the creation of a variant of TRACE_EVENT() with a name like TRACE_EVENT_CONDITION(). It would take an extra parameter which would be, of course, another tricky macro. For Arjan's tracepoint, the condition would look something like:
    TP_CONDITION(
        if (flags & (I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES))
            return 1;
        else
            return 0;
    ),
The tracepoint code would then test the condition before doing any other work associated with the tracepoint - but only if the tracepoint itself has been enabled.
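Putting the pieces together, a conditional tracepoint definition might look something like the following. This is a speculative sketch based on the discussion, not merged code; the event name and fields mirror Arjan's tracepoint, and the remaining arguments are unchanged from an ordinary TRACE_EVENT() definition:

    TRACE_EVENT_CONDITION(writeback_inode_dirty,
        TP_PROTO(struct inode *inode, int flags),
        TP_ARGS(inode, flags),
        TP_CONDITION(
            if (flags & (I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES))
                return 1;
            else
                return 0;
        ),
        /* TP_STRUCT__entry(), TP_fast_assign(), and TP_printk()
           would follow as in any other TRACE_EVENT() definition. */
    );

The call site would then shrink back to a bare trace_writeback_inode_dirty(inode, flags), with the test performed only when the tracepoint is enabled.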
This solution should help to keep the impact of tracepoints to a minimum once again, especially when those tracepoints are not enabled. There is one potential problem in that the condition is now hidden deeply within the definition of the tracepoint; that definition is usually found in a special header file far from the code where the tracepoint is actually inserted. At the tracepoint itself, the condition which might cause it not to fire is not visible in any way. So, if somebody other than the initial developer wants to use the tracepoint, they could misinterpret a lack of output as a sign that the surrounding code is not being executed at all. That little problem could presumably be worked around with clever tracepoint naming, better documentation, or simply expecting users to understand what tracepoints are actually telling them.
The best way to throw blocks away
An old-style rotating disk drive does not really care if any specific block contains useful data or not. Every block sits in its assigned spot (in a logical sense, at least), and the knowledge that the operating system does not care about the contents of any particular block is not something the drive can act upon in any way. More recent storage devices are different, though; they can - in theory, at least - optimize their behavior if they know which blocks are actually worth hanging onto. Linux has a couple of mechanisms for communicating that knowledge to the block layer - one added for 2.6.37 - but it's still not clear which of those is best.

So when might a block device want to know about blocks that the host system no longer cares about? The answer is: just about any time that there is a significant mapping layer between the host's view of the device and the true underlying medium. One example is solid-state storage devices (SSDs). These devices must carefully shuffle data around to spread erase cycles across the media; otherwise, the device will almost certainly fail prematurely. If an SSD knows which blocks the system actually cares about, it can avoid copying the others and make the best use of each erase cycle.
A related technology is "thin provisioning," where a storage array claims to be much larger than it really is. When the installed storage fills, the device can gently suggest that the operator install more drives, conveniently available from the array's vendor. In the absence of knowledge about disused blocks, the array must assume that every block that has ever been written to contains useful data. That approach may sell more drives in the short term, but vendors who want their customers to be happy in the long term might want to be a bit smarter about space management.
Regardless of the type of any specific device, it cannot know about uninteresting blocks unless the operating system tells it. The ATA and SCSI standard committees have duly specified operations for communicating this formation; those operations are often called "trim" or "discard" at the operating system level. Linux has had support for trim operations for some time in the block layer; a few filesystems (and the swap code) have also been modified to send down trim commands when space is freed up. So Linux should be in good shape when it comes to trim support.
The only problem is that on-the-fly trim (also called "online discard") doesn't work that well. On some devices, it slows operation considerably; there's also been some claims that excessive trimming can, itself, shorten drive life. The fact that the SATA version of trim is a non-queued operation (so all other I/O must be stopped before a trim can be sent to the drive) is also extremely unhelpful. The observed problems have been so widespread that SCSI maintainer James Bottomley was recently heard to say:
The alternative is "batch discard," where a trim operation is used to mark large chunks of the device unused in a single operation. Batch discard operations could be run from the filesystem code; they could also run periodically from user space. Using batch discard to run trim on every free space extent would be a logical thing to do after an fsck run as well. Batching discard operations implies that the drive does not know immediately when space becomes unused, but it should be a more performance- and drive-friendly way to do things.
The 2.6.37 kernel includes a new ioctl() command called FITRIM which is intended for batch discard operations. The parameter to FITRIM is a structure describing the region to be marked:
    struct fstrim_range {
        uint64_t start;
        uint64_t len;
        uint64_t minlen;
    };
An ioctl(FITRIM) call will instruct the filesystem that the free space between start and start+len-1 (in bytes) should be marked as unused. Any extent less than minlen bytes will be ignored in this process. The operation can be run over the entire device by setting start to zero and len to ULLONG_MAX. It's worth repeating that this command is implemented by the filesystem, so only the space known by the filesystem to be free will actually be trimmed. In 2.6.37, it appears that only ext4 will have FITRIM support, but other filesystems will certainly get that support in time.
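As a sketch of how a user-space tool might invoke the new command (assuming a 2.6.37 kernel whose headers define FITRIM and struct fstrim_range in <linux/fs.h>; the 512KB minimum extent size is arbitrary):

    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
        struct fstrim_range range;
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <path-on-filesystem>\n", argv[0]);
            return 1;
        }
        /* Any file or directory on the target filesystem will do. */
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&range, 0, sizeof(range));
        range.start = 0;
        range.len = ULLONG_MAX;      /* consider all free space... */
        range.minlen = 512 * 1024;   /* ...but skip extents under 512KB */
        if (ioctl(fd, FITRIM, &range) < 0)
            perror("ioctl(FITRIM)");
        close(fd);
        return 0;
    }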
Batch discard using FITRIM should address the problems seen with online discard - it can be applied to large chunks of space, at a time which is convenient for users of the system. So it may be tempting to just give up on online discard. But Chris Mason cautions against doing that.
So the kernel developers will probably not trim online discard support at this time. No filesystem enables it by default, though, and that seems unlikely to change. But if, at some future time, implementations of the trim operation improve, Linux should be ready to use them.
The kernel and the C library as a single project
The kernel has historically been developed independently of anything that runs in user space. The well-defined kernel ABI, built around the POSIX standard, has allowed for a nearly absolute separation between the kernel and the rest of the system. Linux is nearly unique, however, in its division of kernel and user-space development. Proprietary operating systems have always been managed as a single project encompassing both user and kernel space; other free systems (the BSDs, for example) are run that way as well. Might Linux ever take a more integrated approach?

Christopher Yeoh's cross-memory attach patch was covered here last September. He recently sent out a new version of the patch, wondering, in the process, how he could get a response other than silence. Andrew Morton answered that new system calls are increasingly hard to get into the mainline.
Ingo Molnar jumped in with a claim that the C library (libc) is the real problem. Getting a new feature into the kernel and, eventually, out to users takes long enough. But getting support for new system calls into the C library seems to take much longer. In the meantime, those system calls languish, unused. It is possible for a suitably motivated developer to invoke an unsupported system call with syscall(), but that approach is fiddly, Linux-specific, and not portable across architectures (since system call numbers can change from one architecture to the next). So most real-world use of syscall() is probably due to kernel developers testing out new system calls.
But, Ingo said, it doesn't have to be that way.
Ingo went on to describe some of the benefits that could come from a built-in libc. At the top of the list is the ability to make standard libc functions take advantage of new system calls as soon as they are available; applications would then get immediate access to the new calls. Instrumentation could be added, eventually integrating libc and kernel tracing. Perhaps something better could have been done with asynchronous I/O. And so on. He concluded by saying "Apple and Google/Android understands that single-project mentality helps big time. We don't yet."
As of this writing, nobody has responded to this suggestion. Perhaps it seems too fantastical, or, perhaps, nobody is reading the cross-memory attach thread. But it is an interesting idea to ponder on.
In the early days of Linux kernel development, the purpose was to create an implementation of a well-established standard for which a great deal of software had already been written. There was room for discussion about how a specific system call might be implemented between the C library and the kernel, but the basic nature of the task was well understood. At this point, Linux has left POSIX far behind; that standard is fully implemented and any new functionality goes beyond it. New system calls are necessarily outside of POSIX, so taking advantage of them will require user-space changes that, say, a better open() implementation would not. But new features are only really visible if and when libc responds by making use of them and by making them available to applications. The library most of us use (glibc) has not always been known for its quick action in that regard.
Turning libc into an extension of the kernel itself would cut out the current library middlemen. Kernel developers could wire up new system calls and make use of them immediately; the calls would be available to applications at the same time that the kernel itself is. The two components would presumably, over time, work together better. A kernel-tied libc could also shed a lot of the compatibility code that is required when the library must work properly with a wide range of kernel versions. If all went well, we would have a more tightly integrated libc offering more functionality and better performance.
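To make the "middlemen" point concrete, here is a purely hypothetical sketch of the sort of wrapper an in-tree libc could ship in the same patch series that adds a new system call. The function name and the __NR_cross_memory_read macro are invented for illustration; the cross-memory attach patch had no assigned syscall number at the time of writing:

```c
/* Hypothetical sketch only: a libc wrapper for a brand-new system
 * call.  With libc in the kernel tree, this could land alongside the
 * syscall itself, making it callable by applications on day one. */
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

ssize_t cross_memory_read(pid_t pid, void *local, const void *remote,
                          size_t len)
{
#ifdef __NR_cross_memory_read
	/* syscall() returns -1 and sets errno on failure. */
	return syscall(__NR_cross_memory_read, pid, local, remote, len);
#else
	/* On kernels (and headers) without the call, fail gracefully. */
	errno = ENOSYS;
	return -1;
#endif
}
```

With libc maintained out of tree, by contrast, a wrapper like this typically appears only long after the system call itself has been merged.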
Such a move would also raise some interesting questions, naturally, starting with "which libc?" The obvious candidate would be glibc, but it's a large body of code which is not universally loved. The developers of whichever version of libc is chosen might want to have a say in the matter; they might not immediately welcome their new kernel overlords. One would hope that the ability to run the system with an alternative C library would not be compromised. Picking up the pace of libc development might bring interesting new capabilities, but there is also the ever-present possibility of introducing new regressions. Licensing could raise some issues of its own; an integrated libc would have to remain separate enough to carry a different license.
And, one should ask, where would the process stop? Putting nethack into the kernel repository might just pass muster, but, one assumes, Emacs would encounter resistance and LibreOffice is probably out of the question.
So a line needs to be drawn somewhere. This idea has come up in the past, and the result has been that the line has stayed where it always was: at the kernel/user-space boundary. Putting perf into the kernel repository has distorted that line somewhat, though. By most accounts, the perf experiment has been a success; perf has evolved from a rough utility to a powerful tool in a surprisingly short time. Perhaps an integrated C library would be an equally interesting experiment. Running that experiment would take a lot of work, though; until somebody shows up with a desire to do that work, it will continue to be no more than a thought experiment.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
CentOS grapples with its development process
CentOS, the "community enterprise" operating system rebuilt from the source packages of Red Hat Enterprise Linux (RHEL), recently started development on its next release, CentOS 6. The beginning of the process has not been smooth, however, with the development team talking about restructuring its package repository layout and installation offerings, combined with a heated discussion on the difficulty of recruiting new users into the all-important package review and testing phase.
Although CentOS is a community-developed distribution, its status as an explicitly source-compatible derivative of RHEL means that it has a substantially different QA and release process than, say, Fedora or Debian. Red Hat provides source RPMs for each RHEL release as part of its GPL compliance process. When a new release drops, CentOS volunteers begin systematically poring through the packages, looking for everything with a trademark that must be altered or removed before the package can be distributed by CentOS.
This includes graphical logos and written branding, plus anything else that might lead a user to think that the software comes from or is associated with Red Hat. Although there are some obvious places to begin, such as artwork packages, everything from menus to %description lines in RPM spec files must be sanitized. The fixes are not always simple search-and-replace, either — utilities like the Automatic Bug Reporting Tool (ABRT), which is functionally tied to Red Hat's Bugzilla, must also be patched. Only after this process is complete does the development team proceed to the build and packaging QA that eventually leads to a downloadable ISO installation image.
The new upstream source
The RHEL 6 sources were released on November 10th, more than three years after the last major revision, RHEL 5. The size of the base distribution has grown considerably, and now spans two DVD-sized ISO images. In addition, the company is now offering four versions of the release: server, workstation, high-performance computing (HPC) "compute node," and desktop client — up from the two (desktop and server) for RHEL 5.
These changes forced the CentOS developers to re-examine their own offerings, including the possibility of separate server and workstation editions (a change for CentOS) as well as a "light" installation ISO image that would allow administrators to set up a functional minimal server without installing the full, multi-gigabyte image. In the past, CentOS has striven for a single ISO image, but some developers on the mailing list expressed an interest in maintaining better compatibility with RHEL's offerings by splitting the different installation profiles into separate images.
At present, the consensus seems to be to retain the single-profile workstation-and-server approach, although the sheer size of RHEL may force splitting the packages into "core" and "extra" DVDs. The minimal-install option is still under discussion, but seems likely to happen. The goal is to provide a smaller image, small enough to fit onto a CD, suitable for USB media, virtual machines, and deployment in un-networked environments.
Still unresolved is the question of restructuring the package repositories. CentOS replaces Red Hat's subscription-based Red Hat Network update service with yum repositories used to push updated packages out to users. Because CentOS supports each release with updates for seven years, properly re-organizing the repositories is a critical decision. The main point of discussion is whether to split packages into "os" and "updates" repositories alone, or to also move some nonessential packages into "optional" and "optional-updates" repositories, mirroring the split of the installation media into two DVD images.
Round 1 and new contributors
More contentious was a high-spirited debate over the "round 1" package-auditing process, the secondary QA process, and what many perceive as a high barrier to entry for new users wishing to get involved in development. Lead developer Karanbir Singh posted a "step one" call-for-help message to the CentOS development list on November 11, just after the RHEL 6 sources were released, in which he appealed for interested parties to get involved with the trademark-auditing process.
More than two weeks later, evidently very little progress had been made on that front. This is especially problematic for CentOS, which in the past has made its releases within six to eight weeks of the RHEL source code drop. Many CentOS users begin by testing Red Hat's 30-day free trial of RHEL, so delaying the release of the source-compatible CentOS update by several weeks can make those users quite anxious.
A flame-war between Singh and another packager — one at best tangential to the QA question — frustrated developer Florian La Roche enough that he accused Singh of not making the process open enough. Singh then wrote to the list again, this time complaining that there was a "level of fantasy that some of you guys seem to live with" — namely, that he had publicly asked for volunteers, virtually none had stepped up, and as a result the "usual suspects" end up doing the work, at a slower-than-ideal pace.
Untangling a mailing list argument is always tricky, but in this case several factors seemed to converge to frustrate all involved. The first was Singh's perception that he had asked for volunteers and none had stepped up to the plate. The second was the new users' perception that Singh had not provided meaningful instructions on how to participate — in particular, he pointed the list to an unfinished wiki page and left several key steps of the audit process undocumented. For example, the wiki page left several resource URLs empty but marked "coming soon", and there was no indication in the wiki or the mailing list post of where to find a list of audited-versus-unaudited packages, or of how to submit a properly formatted issue to the bug tracker.
Third, several readers accused Singh of being "pedantic", picking apart the grammar and wording choices of other posters in the discussion rather than responding to the meat of their questions. Fourth, some new users seemed to feel that, however the audit and QA processes work technically, they suffered from being too opaque to outsiders. Douglas McClendon noted the lack of documented examples of properly formatted bug reports and the absence of an overall "progress bar" that would track the current state of the work.
Fortunately, all heads eventually cooled, and the would-be contributors posted more in-depth descriptions of the type of documentation that they needed. Singh, likewise, responded to the requests and clarified both the instructions and the formatting requirements, even observing: "This is constructive.. we should have had these conversations about 2 weeks back :/" There is also now an AuditStatus page tracking which packages have been checked for trademark issues in Round 1.
Scarcity of volunteers is not unique to CentOS, of course, but the nature of the distribution does make it harder to raise manpower from among its end users. Unlike a desktop-centric distribution, a high percentage of CentOS users are independent contractors who deploy the system for clients. They are certainly a knowledgeable bunch, and tend to be active on the lists. But another major slice of CentOS's install base is commercial web-hosting services, who offer it as a rock-solid alternative to RHEL and other enterprise server distributions. Regrettably, however, many of those hosting providers don't seem to participate in the development process — if they did, even just in package auditing and testing, it would make for a sizable contribution.
Community
All open source projects struggle with some of these "preferred way of doing things" issues. To Singh, it seems, some of the new developers just had not done their homework, such as looking at the bug reports for previous CentOS releases for clues as to the preferred formatting. On the other hand, if there is a preferred format, it clearly should have been documented as such.
Similarly, the divide between the documentation on the wiki and the discussion on the list did its part to further the confusion. In his initial email asking for volunteers, Singh included a link to http://bugs.centos.org, which was evidently one of the URLs missing from the wiki page. From his perspective, he was providing the updated information there, but to the new developers it was not at all clear that his message was meant to fill in the (still not-updated) wiki page.
Writing good documentation is never easy, but too much of the time open source projects think about "documentation" solely in terms of end-user manuals and tutorials. As CentOS is finding out, documenting the development process itself is vital. Considering that just over a year ago the CentOS project faced a leadership crisis, getting the "preferred way of doing things" out of the heads of the long-time contributors and into a publicly available resource is something the project needs to address — not just in case the existing contributors disappear, but simply to draw in the constant stream of interested users who want to answer the call and step up to the next level.
Brief items
Distribution quote of the week
A) Raptors happen. Jobs and projects that have been automated, well documented, highly mentored are better able to handle extinction events.
B) A person should always be looking for something new and interesting to do.
C) The person was not important to the job. Making oneself critical for a job to go on meant that raptor problems made things fail horribly.
Announcing the openSUSE Tumbleweed project
"Tumbleweed" is meant to be a rolling version of openSUSE containing the most recent stable versions of packages. "A good example of how Tumbleweed is different from Factory would be to look at the kernel package today in Factory. It is 2.6.37-rc as the goal is to have the final 2.6.37 release in the next 11.4 release. But, it is still under active development and not recommended for some people who don't like reporting kernel bugs. For this reason, Tumbleweed would track the stable kernel releases, in this case, it would stay at 2.6.36 until the upstream kernel is released in a 'stable' format." The project will get rolling (so to speak) for production use after the openSUSE 11.4 release.
MeeGo updates
The MeeGo project has released MeeGo 1.0 Update5 and MeeGo 1.1 Update1. Both releases provide "general operating system fixes that enhance the stability, compatibility, and security of your devices."
Debian 5.0.7 released
The Debian project has announced the seventh update for Debian 5.0 "lenny". "This update mainly adds corrections for security problems to the stable release, along with a few adjustments to serious problems."
Distribution News
Debian GNU/Linux
Report from Debian booth at SfN2010
Last November the NeuroDebian team ran a Debian booth at the annual meeting of the Society for Neuroscience (SfN2010). Click below for a report by Michael Hanke. "A number of visitors were involved in free software development -- at various levels. We talked to a Debian ftpmaster, a Gentoo developer, various developers of neuroscience-related software that is already integrated in Debian and many more whose work still needs to be packaged. We were visited by representatives of companies looking for support to get their open-source products into Debian. The vast majority, however, were scientists looking for a better research platform for their labs. That included the struggling Phd-student, as well as lab heads sharing their experience managing a computing infrastructure for neuroscience research." (Thanks to Paul Wise)
Fedora
Fedora board election results
The Fedora Project has announced the results for its 2010 board elections. Joerg Simon and Jaroslav Reznik have been elected to the Fedora board; Christoph Wickert, Adam Jackson, Matthew Garrett, and Marcela Maláňová have been elected to the engineering steering committee (FESCo); and Neville A. Cross, Larry Cafiero, Rahul Sundaram, Gerard Braad, Igor Soares, Pierros Papadeas, and Caius Chance will be on the ambassadors' steering committee (FAmSCo).
Fedora Board Recap 2010-11-29
Click below for the minutes from the November 29 meeting of the Fedora Board. Topics include elections, jsmith at Fedora EMEA FAD, and a multi-release goals discussion.
Ubuntu family
Minutes from the Technical Board meeting, 2010-11-30
Click below for the minutes from the November 30 meeting of the Ubuntu Technical Board. Topics include couchdb lucid backport and ARB exception proposal.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 382 (November 29)
- Fedora Weekly News Issue 253 (November 24)
- openSUSE Weekly News, Issue 151 (November 27)
Hertzog: People behind Debian: Colin Watson
Raphaël Hertzog talks with Colin Watson about his role in Debian and Ubuntu. "I've had my fingers in a lot of pies over the years, doing ongoing maintenance and fixing lots of bugs. I think the single project I'm most proud of would have to be my work on the Debian installer. I joined that team in early 2004 (a few months before Canonical started up) partly because I was a release assistant at the time and it was an obvious hot-spot, and partly because I thought it'd be a good idea to make sure it worked well on the shiny new G4 PowerBook I'd just treated myself to."
What's Coming in Mandriva 2011 (OStatic)
Susan Linton looks at the plans for Mandriva 2011. "With 2011, Mandriva will be switching to RPM5. This news was announced by Per Øyvind Karlsen last week and is the first item in the list. RPM5 is actually a fork of RPM with the main goals of supporting XAR, an XML based archiving format, and featuring an integrated dependency resolver. This move has been in the works for quite some time but Mandriva 2011 will be the first release fully committed. Per Øyvind Karlsen said RPM5, "is the only sensible choice." Relatedly, their software center is scheduled a face-lift for a "more modern and simple to use interface.""
Linux Distribution: Lightweight Portable Security (Linux Journal)
Linux Journal takes a look at Lightweight Portable Security (LPS). "The Lightweight Portable Security distribution was created by the Software Protection Initiative under the direction of the Air Force Research Laboratory and the US Department Of Defense. The idea behind it is that government workers can use a CDROM or USB stick to boot into a tamper proof, pristine desktop when using insecure computers such as those available in hotels or a worker's own home. The environment that it offers should be largely resistant to Internet-borne security threats such as viruses and spyware, particularly when launched from read-only media such as a CDROM. The LPS system does not mount the hard drive of the host machine, so leaves no trace of the user's activities behind."
Page editor: Rebecca Sobol
Development
Linux client for Ryzom MMORPG released
Outstanding game for Linux, or a plot to undermine productivity for Linux desktop users? The official release of the Ryzom client for Linux is, perhaps, a little bit of both.
Ryzom is a Massively Multiplayer Online Roleplaying Game (MMORPG), akin to games like World of Warcraft and produced by Winch Gate. The big difference for Ryzom, aside from a different storyline and such, is that Ryzom is free software. The game, including both client and server software, was released under the GNU Affero General Public License (AGPLv3) in May 2010 and the "artistic assets" are available under the Creative Commons Attribution-ShareAlike (CC-BY-SA) license.
[Ryzom introduction]
The release of the Linux client is the latest in a long saga of taking Ryzom from the failed remains of Nevrax via a campaign to release the game as free software — not to mention keep it online for the community that it had accrued so far.
The community is still small when held up against massive moneymakers like World of Warcraft. Ryzom CTO Vianney Lecroart declined to give specific numbers, but said the game has "thousands of active users" spread across three servers — one for English-speaking players, one for German-speaking users, and one for French-speaking users. (Of those, about half participate in English, with the rest split roughly evenly between French and German.)
Ryzom Gameplay
Ryzom is set in a "science-fantasy" world, which combines elements of science fiction and fantasy. Picture the movie Avatar and you start to get the idea. The game is set 3,000 years in the future, but it has a very strong swords and sorcery flavor.
The game is being distributed as a binary that can be installed on most major Linux distributions. Source is also available for those who'd prefer to compile their own, of course. Ryzom requires an account be set up to play — this is free for 21 days without providing a credit card. Afterward, users pay $10.95 a month or $105.95 for a year. (To compare, Blizzard's World of Warcraft subscription options start at $14.99 a month.)
Admittedly, this reporter is something of a newbie when it comes to MMORPGs. Most of my gaming time over the past several years has been spent with first person shooters like Quake III Arena and OpenArena. Ryzom is a much different experience. Those who've played games like World of Warcraft will likely find Ryzom comfortable right away. For Linux users who may be coming to MMORPGs for the first time with Ryzom, it may be surprising how much there is to learn and do before becoming productive within the game.
[Ryzom death]
Obviously, gameplay is much slower than a first person shooter. On the first venture out, it's possible to spend hours running (well, walking) through tutorials and training to "build" your character before actually being involved with other players. Users start by creating a character and configuring its race, appearance, etc.
Once you have a character, it's time to drop into the world and start playing, or at least start with the tutorials. For newbies, the tutorials are very helpful — if a bit dull. The Ryzom interface is somewhat complex. Users have several boxes to monitor things like their ongoing missions, health and vital statistics, and toolbars with spells and attacks. Those who'd like to plunge right in can start with the Ryzom Starter Guide comic [PDF], which has 18 pages of screenshots with explanatory dialogs. Additional guides are available for those who'd like to delve into the details of the game.
Some things in the interface are a bit unintuitive, at least for those unfamiliar with MMORPGs. For example, if your avatar in the game is standing particularly close to another character and initiates a conversation, the dialog bubble with speech is displayed off screen.
The game was tested on a Core Duo machine with 3GB of RAM and an NVIDIA-based GeForce Go 7800 chipset, which was well above the recommended system requirements. Note that the requirements page specifies NVIDIA, but Lecroart confirms that the game also works with ATI chips and should work with any card that supports OpenGL. Others have confirmed that it works with Intel chipsets as well.
Startup was a bit sluggish, but once the game was loaded there was no lag or choppiness in the graphics, and gameplay was smooth as silk. Ryzom did not seem to like running in TwinView mode (NVIDIA's proprietary solution for multi-head displays, similar to Xinerama). The game crashed a few times or simply lost the display, though it continued playing audio and kept up the connection to Ryzom's servers in the background. After turning off TwinView, the problems disappeared.
The graphics are quite good, and the art is top-notch. Though MMORPGs are still not this reporter's cup of tea, Ryzom should please players who prefer the MMORPG genre to fast-paced shooters.
The Future of Ryzom
Now that Ryzom is out, what's next? On the community front, the focus is on encouraging more Linux users and free software enthusiasts to contribute. Lecroart, who also wears the community manager hat, says that the community has already contributed a great deal in terms of helping to improve the Linux version: "The community helped in finding and providing patches for bugs, for example. The community has also helped to make certain Ryzom features to work better, faster and on more systems. We've already integrated some of these features into the game."
Ryzom also participated in the most recent Google Summer of Code and is participating in the Google Code-In taking place now. Ryzom has a developer portal with the requisite forums, bug tracker, wiki, etc., and the company hosts an "open shard" that developers can connect to in order to test client modifications without having to run their own instance of Ryzom.
The developer channel had 36 participants when I signed in to ask questions about Ryzom's contributor agreement and copyright policies. The Web site is relatively mute on the subject; according to Matt Raykowski, a community contributor to the project, the topic hasn't really "come up" yet because "substantial contributions" that might require a contributor agreement have yet to come in. It was discussed on the developer message boards about seven months ago, though, and Raykowski wrote that the community was considering a contributor agreement:
The Ryzom community is considering a Ryzom Contributor Agreement modeled after the Sun Contributor Agreement or the OpenNMS Contributor Agreement in which you as an author you retain your copyright but also afford full rights to the work to Ryzom. This allows you to use your source anywhere you want and under any license you want but allows us to as well use it as we like in the project. This is important as if there are too many copyright holders involved in the infringement of the code legal action can be difficult if not impossible. Again, we're working on this and expect to have something ready before we have serious contributions.
For those thinking about contributing, the roadmap gives some insight into the features planned for coming releases. Most entries are modest features and known bugs to be squashed. Lecroart says the next major step for Ryzom is to finish a native client for Mac OS X.
On the content side, Lecroart says that the company is always prepping more content for the game, as well as special events for holidays. It is busy working on a "Christmas event" right now; he points to the recent Halloween event as an example, where players could explore special mazes, collect extra treasure, and see special artwork made exclusively for the event.
As a promotion for the release, the Ryzom folks are inviting Linux users to participate in a contest to win a ZaReason laptop. Through January 10, players can search for seven "GNU/Linux artifacts" and answer questions about the game world. The prize is a ZaReason Terra-HD netbook, with two runner-up prizes of free one-year subscriptions to Ryzom.
The release of the Linux client is, if not a happy ending to the story of Ryzom, at least a completed quest. After years of work to release the code and art under free licenses, the game is finally readily available to Linux users who are looking for a way to kill many, many hours in front of the computer.
Brief items
Quote of the week
So, when people say, "I can't read Perl", it only tells me they haven't studied it.
The Internet Distributed Open Name System project
Lauren Weinstein has sent out an announcement for a just-forming project which is intended to replace the current domain name system with something which is better distributed, less subject to governmental interference, fully encrypted, and free of registrars. "The scope of the project on which I've been working, which I call IDONS - Internet Distributed Open Name System -- is in early stages, but would ultimately be significant both in terms of technology and time. It may perhaps be reasonably compared with the scale of IPv6 deployment in some ways." Those who are not daunted by this challenge are invited to join in.
libgit2 - a git library
The libgit2 project has hit the net. "libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings." Bindings for Ruby, Python, and Erlang are available now. (Thanks to Don Marti).
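As a taste of the API, here is a minimal sketch that opens a repository and prints its on-disk path. Caveat: it uses the calls as they appear in later libgit2 releases (git_libgit2_init() and git_error_last(), in particular, arrived well after this announcement), so treat it as illustrative rather than as the interface shipped at the time:

```c
/* Minimal libgit2 sketch: open a repository and print its path.
 * Based on the later libgit2 API; the early releases current at the
 * time of this announcement differed in detail. */
#include <stdio.h>
#include <git2.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : ".";
	git_repository *repo = NULL;

	git_libgit2_init();
	if (git_repository_open(&repo, path) < 0) {
		const git_error *e = git_error_last();
		fprintf(stderr, "failed to open %s: %s\n", path,
			e ? e->message : "unknown error");
		git_libgit2_shutdown();
		return 1;
	}
	printf("git directory: %s\n", git_repository_path(repo));
	git_repository_free(repo);
	git_libgit2_shutdown();
	return 0;
}
```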
PyPy 1.4 released
Version 1.4 of the PyPy Python compiler is available. "This is a major breakthrough in our long journey, as PyPy 1.4 is the first PyPy release that can translate itself faster than CPython. Starting today, we are using PyPy more for our every-day development. So may you." LWN looked at PyPy last May.
Python 2.7.1 and Python 3.1.3
The Python development team has released Python 2.7.1 and Python 3.1.3. The 2.7 series "includes many features that were first released in Python 3.1. The faster io module, the new nested with statement syntax, improved float repr, set literals, dictionary views, and the memoryview object have been backported from 3.1. Other features include an ordered dictionary implementation, unittests improvements, a new sysconfig module, auto-numbering of fields in the str/unicode format method, and support for ttk Tile in Tkinter."

The 3.1 series "focuses on the stabilization and optimization of the features and changes that Python 3.0 introduced. For example, the new I/O system has been rewritten in C for speed. File system APIs that use unicode strings now handle paths with undecodable bytes in them. Other features include an ordered dictionary implementation, a condensed syntax for nested with statements, and support for ttk Tile in Tkinter."
Spyder v2.0.0
The Spyder 2.0.0 release is available. "Spyder (previously known as Pydee) is a free open-source Python development environment providing MATLAB-like features in a simple and light-weighted software." New features include IPython integration, a source editor, a Sphinx-powered "rich text mode" in the object inspector, and more.
Newsletters and articles
Development newsletters from the past week
- Caml Weekly News (November 30)
- OpenMoko community update (December 1)
- PostgreSQL Weekly News (November 28)
Using the Canon Hack Development Kit (Spectrum)
IEEE Spectrum discovers the joy of hackable devices using the Canon Hack Development Kit. "Most camera traps use an infrared motion detector, but a CHDK-enhanced camera can itself detect motion. Van Barel's script gives you control of such things as the delay between the motion and the shot and whether the focus is fixed or variable. Without any tuning at all of the script's parameters, I was able to get some fascinating photos of birds cavorting around the family bird feeder. Others have used similar scripts to produce some stunning photographs of lightning."
Hands-on: a first look at Diaspora's private alpha test (ars technica)
Ars technica takes a peek at an early test release of the social networking project Diaspora: "The core functionality of Diaspora right now revolves around posting short text messages and photos. You can "reshare" and comment on individual messages. You can select an aspect from the tab bar at the top of the site in order to post a message that will be visible to only the people in that aspect. You can also post from the "Home" tab to send a message to all of your aspects. When you send a global message, you can also optionally choose to make it public, which will cause Diaspora to make it accessible through a public RSS feed and cross-post it to your external social networking accounts on Twitter and Facebook."
Haas: MySQL vs. PostgreSQL, Part 1: Table Organization
PostgreSQL developer Robert Haas has begun a series of articles comparing the architecture of PostgreSQL and MySQL. "So, all that having been said, what I'd like to talk about in this post is the way that MySQL and PostgreSQL store tables and indexes on disk. In PostgreSQL, table data and index data are stored in completely separate structures.... Under MySQL's InnoDB, the table data and the primary key index are stored in the same data structure. As I understand it, this is what Oracle calls an index-organized table. Any additional ('secondary') indexes refer to the primary key value of the tuple to which they point, not the physical position, which can change as leaf pages in the primary key index are split."
Page editor: Jonathan Corbet
Announcements
Non-Commercial announcements
Symbian Foundation web sites shutting down
The short-lived Symbian open-source project is truly at an end, and its web sites will be shutting down soon. "As a result, we expect our websites will be shutting down on 17th December. We are working hard to make sure that most of the content accessible through web services (such as the source code, kits, wiki, bug database, reference documentation & Symbian Ideas) is available in some form, most likely on a DVD or USB hard drive upon request to the Symbian Foundation."
Commercial announcements
A brief message from Novell
Novell has put out a brief message for those worried about its copyrights: "Novell will continue to own Novell's UNIX copyrights following completion of the merger as a subsidiary of Attachmate."
Articles of interest
Garrett-Glaser: Patent skullduggery: Tandberg rips off x264 algorithm
Jason Garrett-Glaser (aka Dark Shikari) writes about yet another misuse of the patent system on his blog. "There's been a lot of very obnoxious misuse of software patents in recent years. This ranges from patent trolls wielding submarine patents to overly-generic patents being used to scare everyone else out of a business. But at least in most of the cases, the patents were an original idea of some sort, even if that idea was far too general to be patented. [...] The situation just got worse. We now have a company scraping open source commit logs and patenting them." (Thanks to Martin [Jeppesen?])
It's Not Easy Being Green: Life for SUSE with Attachmate (Linux Magazine)
Joe "Zonker" Brockmeier speculates on Attachmate's plans for Novell. "Now we know. But does Attachmate plan to hold on to SUSE very long? Magic 8-ball says no. Here's why: Attachmate isn't a company sitting around with $2.2 billion in cash. It's a private company with a lot of legacy tech. The Novell sale was, remember, triggered by an offer from Elliott Management Corporation (itself a big Novell shareholder). Guess where Attachmate is getting the money to buy Novell? Elliott Management Corporation, which will become an equity shareholder in Attachmate."
Google search goes under EU antitrust microscope (ars technica)
Ars technica reports that the European Commission has opened a formal antitrust investigation on Google search results. "The investigation was sparked by complaints from other search service providers that Google ranks their results much lower in both unpaid and paid results rankings, and uses its overarching dominance in online search to plug its own services instead."
Legal Announcements
EFF: Supreme Court to decide standard for proving invalidity of a patent
The Electronic Frontier Foundation has sent out a release celebrating the US Supreme Court's decision to take up a case - on request from Microsoft - which may make it easier to invalidate patents. "EFF argued in its brief that the Federal Circuit's requirement that an accused infringer prove patent invalidity by 'clear and convincing' evidence unfairly burdens patent defendants, especially in the free and open source software context. The standard undermines the traditional patent bargain between private patent owners and the public and threatens to impede innovation and the dissemination of knowledge."
New Books
Cassandra: The Definitive Guide--New from O'Reilly
O'Reilly Media has released "Cassandra: The Definitive Guide", by Eben Hewitt.
The Unofficial LEGO MINDSTORMS NXT 2.0 Inventor's Guide--New from No Starch Press
No Starch Press has released "The Unofficial LEGO MINDSTORMS NXT 2.0 Inventor's Guide", by David J. Perdue.
Calls for Presentations
RailsConf Asks Ruby on Rails Community for Proposals
The call for proposals has opened for RailsConf 2011, which will be held in Baltimore, MD, May 16-19, 2011. Proposals are due by February 17, 2011.
Upcoming Events
DebConf11 in July 2011
The annual Debian Conference, DebConf11, will take place in Banja Luka, Bosnia and Herzegovina, July 24-30, 2011. DebCamp will be held July 17-23, 2011. The call for papers and registration will open in early 2011.
Events: December 9, 2010 to February 7, 2011
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
December 11 | Open Source Conference Fukuoka 2010 | Fukuoka, Japan
December 13–18 | SciPy.in 2010 | Hyderabad, India
December 15–17 | FOSS.IN/2010 | Bangalore, India
January 16–22 | PyPy Leysin Winter Sprint | Leysin, Switzerland
January 22 | OrgCamp 2011 | Paris, France
January 24–29 | linux.conf.au 2011 | Brisbane, Australia
January 27–28 | Southwest Drupal Summit 2011 | Houston, Texas, USA
January 27 | Ubuntu Developer Day | Bangalore, India
January 29–31 | FUDCon Tempe 2011 | Tempe, Arizona, USA
February 2–3 | Cloud Expo Europe | London, UK
February 5–6 | FOSDEM 2011 | Brussels, Belgium
February 5 | Open Source Conference Kagawa 2011 | Takamatsu, Japan
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol