It is frustrating for free software folks to see their friends and family disappear into the maw of Facebook's walled garden, seemingly unable to communicate via any other means. Looking around at Linux conferences and seeing lots of Apple laptops is equally frustrating—disheartening even—especially given Apple's draconian policies that seem to be hell-bent on creating, and enforcing, its own garden. And let's not forget that Apple is currently pursuing a patent attack that could do quite a bit of damage to Linux and free software. When looking at those things, it's important to remember that the struggle for freedom is rarely convenient. There are entrenched interests—and enormous sums of money—arrayed against software freedom, but that hasn't halted the movement's progress.
While it's clear that Apple, Facebook, Google, and others have made compelling platforms for users, it's equally clear that they have also ignored, or actively thwarted, user freedom. They have their reasons for doing so, not least the profit motive, but those with long memories have seen this kind of thing before. After all, 20 years ago one would have been hard-pressed to find a usable free operating system. The reasons were much the same then as they are now: money and power.
That particular obstacle has been overcome, with a lot of hard work by a lot of people, so it's a little early to be overly concerned that Apple (or Facebook or ...) is somehow siphoning off the energy of the free software movement. There are lots of Windows laptops at Linux conferences as well, but one would guess the percentage has drastically decreased over the years and only a small part of that has ended up in Apple's lap (or with an Apple in their lap).
Companies like Apple and Microsoft have huge advantages that are extremely difficult for free software to overcome, yet we still make progress. Anyone who has been a part of the community as a developer or user for 10 or 15 years—or even less—should be astonished at the advances made. But in order to do that, some folks will have to take the less convenient path, whether that means forgoing the latest user interface enhancements from Apple, or missing out on the über-cool (and trendy!) social networking widgets and games from Facebook. So far, fortunately, we haven't really lacked for people willing to make those choices.
It is probably quite obvious to most LWN readers, but it still bears repeating: the freedom to use hardware, software, and data as we desire—rather than as the purveyors want us to use them—sometimes comes with a price. Inconvenience, fewer capabilities, and sometimes being ostracized are some of the possible costs. But, as we have seen, the payoff is huge and the cost can be amortized over not only years, but also over a large number of users and developers.
The alternative, while perhaps convenient, is, in the end, bleak. It is no surprise that various mega-corporations are constantly trying to distract us with "shiny", because if they can just get past these pesky freedoms that some clamor about, they can get on with the profit-making tasks at hand. If they can redefine users as strictly "consumers of content", with that content controlled by the corporations and their allies, they can push more and more restrictions (a la the DMCA and the ACTA treaty) and further perpetuate their control.
By walling themselves off from the rest of the computing world, while enticing as many as possible to move inside their walls, Apple and Facebook (and others, those two are just today's high-profile offenders) are doing those users a grave disservice, at least in the long run. The battle for software freedom—really, any freedom—is a war of ideas, and ideals. Software freedom may be the pragmatic choice given a long enough view, but it often runs counter to the conventional thinking, which is why education needs to be a big part of the effort.
In the conflict between free and closed systems, there are many fronts. The Free Software Foundation (FSF) has generally been in the forefront of the battle to help people understand software freedom and why it's important. Other organizations and projects, including things like the Linux kernel and the FSF's own GNU project, have taken on other parts of the struggle. There is plenty of room in our movement for different approaches. Just as we don't require a single choice for editor, desktop environment, or distribution, diversity in how we work towards software freedom is important and useful.
Like my colleague, Joe "Zonker" Brockmeier, I am not always impressed with the campaigns that the FSF comes up with. I don't, however, see them as any kind of impediment to achieving the free software goals that we all likely share. Some of the phrases that have come from the FSF's campaigns (DRM == "digital restrictions management" and the lesser-known but still perfectly descriptive "defective by design" for example) have been exactly what was needed to help in the education process. Sometimes, negative campaigns have their place; what alternative to DRM should the FSF be pushing? In that case, "Gno" seems like the right answer.
None of that is to say that providing alternatives is not also important. Projects to make the Linux desktop more "usable" and user-friendly exist. There are various nascent efforts to create freedom- and privacy-respecting alternatives to Facebook as well. These things will take time, but we will get there. Just look back a decade or two.

Your editor took the CyanogenMod 5.0.8 announcement as the perfect opportunity to avoid real work for a while. In short: CyanogenMod is a classic demonstration of what can happen when we have control over our gadgets.
CyanogenMod is a rebuild of the Android environment with a lot of added stuff. Some of what's there is code from Google which has not yet made it into an official Android release; for example, CyanogenMod users got essential features like color trackball notifications, animated GIF support, and 360-degree rotation ahead of stock Android users. They also got features that really are essential, with wireless tethering being at the top of the list. CyanogenMod also includes newer kernels with more features enabled, busybox and a whole set of command-line utilities, proper virtual private network (VPN) support, proper support for applications on the SD card, a cellular access-point name list which takes the guesswork out of using the phone with most providers, and lots more.
CyanogenMod also supports older handsets like the G1/ADP1 which, otherwise, remain stuck with old versions of the Android system.
It's worth noting that the CyanogenMod experience actually starts with the recovery image provided by Amon_RA. This image makes it easy to flash new versions of firmware into the phone. Even more important, though, is the full integration of nandroid backup and restore. Your editor can attest that this feature is able to take a handset which no longer even boots after a botched update and return it to its previous state. Needless to say, this capability makes experimenting with new versions a much lower-stress affair - if one remembers to make a backup first.
So what is new in 5.0.8? The headline features include:
There are also a number of bug fixes and performance improvements. Some users report that CyanogenMod 5.0.8 feels a lot faster than its predecessors; your editor is inclined to agree, but it's not entirely clear why that would be the case. One other nice little change is that the practice of hiding some settings under "spare parts" appears to have ended; all settings are, once again, available from the "settings" application.
There have been a few complaints about problems with this release, mostly associated with video recording. Those may all be due to a failure to wipe (factory reset) the phone before installing the update, though. Over a couple of days of usage and testing, your editor has not been able to find anything that has gone obviously wrong. It appears to be a solid release.
Naturally, CyanogenMod is not the only customized distribution available for Android phones; a number of alternatives are available. These include Kang-o-rama (2.2-based with claimed high speed and good battery life), AsimROM (2.2-based with some theme work), LeoFroYo (2.2-based with the nice feature that the Facebook and Twitter applications have been made removable), MoDaCo (2.2, "designed to feel as far as possible like a stock ROM, with optimisations, tweaks and complimentary additions that enhance the user experience"), and many more. There are also projects creating specialized kernels, attempting to enable the FM radio said to be built into the Nexus One, and so on. In summary: there's no lack of Android distributions for those who wish to play with them. At least, if one has a Nexus One; there appear to be fewer developers targeting other handsets.
A word of caution is in order, though. CyanogenMod appears to be developed with a fair amount of care and should be solid, but there are no guarantees, and some releases are better than others. The other projects seem to come and go; the perceived risk level with them may be higher. As with any computer, good backups are important. One other thing to keep in mind is this: someday, somebody will certainly yield to the temptation to build a release with some sort of back door or other malware built into it; for all we know this may have already happened. A handset running this software would be thoroughly compromised at the most fundamental level, and this situation could persist for some time; there are few people looking at the code being shipped in these distributions. Until such a time as we have an ecosystem of trusted distributors for handsets, one must proceed with caution and care.
These concerns reflect the fact that the development of real distributions for handsets has really just begun. Even so, we can begin to see the potential for where things may go: we have developers updating device firmware with versions which are more featureful, more power-efficient, and more tuned to the needs of specific users. If all goes well, we can look forward to a future with increasingly open handsets and a wider choice of operating systems to run on those handsets. Interesting things will certainly come of it.
Like most community-run events, the second SouthEast LinuxFest (SELF) featured the standard set of positive talks on Linux and open source. It also featured a somewhat more controversial talk by Ryan "icculus" Gordon about failures to get some features merged into the Linux kernel.
The talk was delivered without slides, and Gordon started by admitting it was biased, an approach unlikely to win many friends in the kernel community. The focus of the talk, which was not terribly obvious from the schedule, was on several high-profile attempts to add new features to the Linux kernel that failed due, according to Gordon, to kernel politics or personality conflicts. He said he had spoken to a number of kernel developers who experienced such failures, but few were willing to go "on the record" for the talk. As examples he used his own experiences, Eric S. Raymond's CML2, Con Kolivas's CPU scheduler work, and Hans Reiser's Reiser4 filesystem.
Gordon is behind icculus.org, and says that he spends most of his time porting video games to Linux. In the course of that work, Gordon says he discovered that "Linux sucks at a lot of important tasks." He noted that Apple has solved a number of the things that Linux does poorly (though he conceded that Mac OS X also does many things badly), and that Linux developers should be "stealing some stuff" from Apple. Gordon pointed to Time Machine backups and universal binaries, which allow users to install software on PowerPC or Intel-based machines and have it "just work."
Gordon wasn't using universal binaries as an idle example; he attempted to solve that problem himself by creating FatELF, a universal binary format for Linux. He described the process of creating the patch, sending it to the kernel list, and expecting success. Instead, he said, it was a "spectacular failure."
The problem? Gordon says that he misinterpreted a response from Alan Cox as being in favor of the patch, when actually he was against it. According to Gordon, "the worst thing you can do is have a kernel maintainer tell you what they don't like and ignore it. Once Alan Cox was openly hostile, people came out of the woodwork." He then drew an analogy between the kernel community and the movie Mean Girls, likening Cox to one of the popular girls and other community members as following Cox's example. Gordon says that the kernel community has a "herd mentality" and that "you can't move the kernel maintainers."
After recounting his own experience with FatELF, Gordon talked about Raymond and the problems getting CML2 — a replacement for the kernel build system — into the Linux kernel. Gordon highlighted the fact that CML2 going into the kernel seemed a foregone conclusion to Raymond, and that it was originally supposed to go into the 2.5.1 kernel. He also talked at length about Raymond and Linus Torvalds's discussions about CML2 and Torvalds's general agreement that CML2 could go into the kernel.
Gordon said that the kernel community was hostile towards Python and Raymond, but little was mentioned about Raymond's sometimes caustic responses to the kernel developers. Gordon also mentioned a little, but very little, of the technical problems with CML2, such as the 30-second wait to compile its kernel-rules.cml into a binary format for use with the cmlconfigure program. He did discuss the (perhaps not entirely reasonable) objection that CML2 was a major change to a kernel whose maintainers typically prefer a series of small, incremental changes. He wondered: how does one implement a new configuration language as an incremental change?
One might question the wisdom of using Hans Reiser as an example of the kernel development process gone wrong, as Gordon did. Reiser had a contentious relationship with the kernel community to begin with, and the fact that it was necessary to cite correspondence from Reiser's jail cell tends to undermine his credibility. But Gordon is to be credited with, at least, being thorough in interviewing his sources; he showed several letters from Reiser gathered over months of correspondence about the failure to get Reiser4 into the mainline kernel.
He described the problems that Reiser had in attempting to get Reiser4 into the kernel. In an unfortunate analogy, he said that Reiser failed to get a "fair trial" from the kernel community in its consideration of Reiser4. Gordon also hinted that corporate interests may have kept Reiser4 out of the mainline (a longstanding contention of Reiser's) because of conflicting interests on the part of the maintainers, though he cited no evidence for it. Gordon painted the kernel community as abusive towards Reiser, but omitted any mention that Reiser could be equally caustic in return. Certainly, Gordon's point that the FAQ on KernelNewbies is unnecessarily personal is well-taken.
But little was said about any of the real technical problems with Reiser4. Personality issues aside, real technical problems stood (and still stand) in the way of merging Reiser4 into the mainline kernel.
Short on time, Gordon rushed through the discussion of Kolivas's failure to get his CPU scheduler into the kernel. He characterized Kolivas's treatment as "rude," and suggested that it was particularly bad treatment that Kolivas's ideas made it into the kernel (in the form of the Completely Fair Scheduler) but his actual code didn't.
Gordon ended his talk by throwing out a few ideas to improve the kernel development process. He suggested that the audience, and other Linux users, join the kernel mailing list and lobby for features that they want. He also challenged the idea that developers should have a "thick skin" to participate in kernel development, and suggested that the atmosphere of lkml needs to improve.
Despite the focus on the "spectacular failures," Gordon did acknowledge that this was a sporadic problem at worst — not a systemic failure. He also took great pains to say, both during and after the talk, that everyone he spoke with held Andrew Morton in the highest regard. Gordon said that developers should "study Andrew Morton" with great intensity.
The talk was interesting because it provided a different view of the kernel development process than is normally given to general audiences. The kernel development community, and the larger FOSS development community, understand that the development process is imperfect. It is well-known that it can be political and personal, as Torvalds himself pointed out in response to Kolivas's departure. The SELF audience, by and large, was composed of Linux enthusiasts who are far removed from the development process. It will be interesting to see if any of the audience decides to take Gordon's advice and begin lobbying the kernel list.
Kernel development is often held up to larger audiences as a shining example of the open source development process. Gordon's talk presented a narrative that demonstrates the less pleasant side of kernel hacking, and the disappointment that developers feel when their contributions are rejected. It might have been more valuable had it presented a less biased view, but it is a story that should be heard.
There are probably very few system administrators who haven't at least contemplated some kind of retribution against attackers. Some may have envisioned something physical—perhaps involving red hot pokers—but it's likely that the majority considered extracting a payback via the same route they were attacked: the internet. A French company has taken that idea to its logical extreme by presenting thirteen "zero-day" exploits against tools used by attackers at the SyScan security conference, which was recently held in Singapore.
Many attackers use various applications—exploit packs and toolkits—that they install on the web sites they have compromised. These applications launch attacks against the web browsers of site visitors by probing for vulnerabilities, often in plugins like PDF readers or Java, and using those it finds to compromise the visitor's machine. TEHTRI-Security investigated several of these applications and found exploitable vulnerabilities in half a dozen of them. Unlike what it might have done for more benign applications, TEHTRI released the information at the conference with no warning to the projects and, not surprisingly, those who usually clamor for "responsible disclosure" were rather muted.
These exploit toolkits typically have two components: the payload delivery mechanism and an administrative interface. Payload delivery runs on the compromised web site, looks at the browser to try to find vulnerabilities, and then delivers the appropriate exploit. The web-based administrative interface often aggregates information from multiple compromised web sites and allows the attacker to see what browsers were successfully attacked, which vulnerabilities were used, where the user came from, and so on—essentially a web analytics tool for malware purveyors. TEHTRI found vulnerabilities in both of these components, which could lead to administrative interface defacement, attack management database destruction, authentication cookie disclosure, disclosure of attackers' IP addresses, and more.
The kinds of vulnerabilities that were found read like a laundry list of the most common web application flaws: cross-site scripting, SQL injection, cross-site request forgery, remote file disclosure, authentication bypass, and so on. Even those who exploit web application flaws for a "living"—exploit packs typically cost $500-1000 or more—seem to be unable or unwilling to write code that avoids those same flaws. It is rather ironic that the victims of these web attacks can turn around and use the same techniques to attack the attacker.
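The flaw classes on that list are easy to demonstrate. The sketch below is a generic illustration (not code from any of the toolkits TEHTRI examined) of how a classic SQL injection payload defeats a string-built query, and how a parameterized query stops it:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable pattern: attacker-controlled text is pasted into the SQL.
    query = "SELECT id FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver keeps data separate from SQL code.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# A classic payload turns the WHERE clause into a tautology.
payload = "nobody' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # matches every row
safe = find_user_safe(conn, payload)      # matches nothing
```

The unsafe variant hands back the whole table; the parameterized one returns nothing. The other flaws in the list have equally well-known defenses (output escaping for cross-site scripting, tokens for request forgery) that the toolkit authors evidently skipped.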
As TEHTRI and others point out, though, it may well be illegal to turn the tables on the attackers, no matter how satisfying—and reasonable—the idea seems. Self-defense is likely not a defense against computer crime statutes, at least in many jurisdictions. The administrative interfaces typically run on systems under the control of the attacker, but not necessarily a host that is "owned" (in the legal sense) by them. It is probable that an unsuspecting victim's server has been compromised to the extent that the web interface could be installed, which makes an attack against it even riskier.
While some specifics were given at the SyScan talk, TEHTRI is keeping the details of these vulnerabilities (and others that it hints about) to itself for now. There is another SyScan conference in early July (in Hangzhou, China) where TEHTRI's Laurent Oudot is once again presenting on this topic so, in order to keep up the interest in the talk, "it has been decided that we would not disclose the whole content of our findings before this upcoming event", he said. As with much security research, TEHTRI clearly sees these vulnerabilities as a marketing tool, and is, unfortunately in some ways, treating them as such. On the other hand, it's hard to feel much in the way of sympathy for the developers or users of the tools, so disclosure of the flaws, and how to exploit them, is not a particularly high priority.
Given that there aren't enough details, yet, to actually strike back against attackers using these exploit toolkits, there is some time to consider the ramifications of "defensive attacks". Computer crime statutes are typically written rather loosely, such that any access other than what the site owner wants can be considered a violation. As various folks have found out, intent means very little when it comes to computer "crime". In addition, judges and lawyers are not terribly savvy about these technical issues, which makes it that much harder for "white hats" to defend themselves. All of that makes it extremely risky for anyone to use these exploits (or other offensive methods) against attackers.
One way to use the vulnerabilities that TEHTRI has found would be by, or in conjunction with, law enforcement. Exploiting some of those holes could lead to other systems under the control of the attacker, potentially including a host that can be associated with a specific individual or group. That could lead to prosecution, and possibly unravel a larger network of attackers. Unfortunately, except for high-profile attacks, there seem to be few resources available to track down and prosecute these crimes.
In the end, the lasting legacy of these vulnerabilities is likely to be their amusement value. It's probably too risky for "white hats" to use them, and those who could use them without fear of prosecution (e.g. police) don't have enough time, money, or interest to do so. That's sad in many ways, and disappointing to system administrators who would like to extract a small measure of retribution, but it's also hard to see it changing anytime soon.
Package(s): beanstalkd
Created: June 22, 2010    Updated: June 23, 2010
Description: From the Red Hat advisory:
Graham Barr reported that beanstalkd v1.4.5 and earlier improperly sanitized job data sent with the "put" command from a client. A remote attacker providing specially crafted job data in a request could use this flaw to bypass the intended beanstalkd command-dispatch mechanism, leading to unauthorized execution of beanstalkd commands.
Package(s): cups    CVE #(s): CVE-2010-0540 CVE-2010-0542 CVE-2010-1748
Created: June 18, 2010    Updated: March 2, 2011
Description: From the Red Hat advisory:
A missing memory allocation failure check flaw, leading to a NULL pointer dereference, was found in the CUPS "texttops" filter. An attacker could create a malicious text file that would cause "texttops" to crash or, potentially, execute arbitrary code as the "lp" user if the file was printed. (CVE-2010-0542)
A Cross-Site Request Forgery (CSRF) issue was found in the CUPS web interface. If a remote attacker could trick a user, who is logged into the CUPS web interface as an administrator, into visiting a specially-crafted website, the attacker could reconfigure and disable CUPS, and gain access to print jobs and system files. (CVE-2010-0540)
Note: As a result of the fix for CVE-2010-0540, cookies must now be enabled in your web browser to use the CUPS web interface.
An uninitialized memory read issue was found in the CUPS web interface. If an attacker had access to the CUPS web interface, they could use a specially-crafted URL to leverage this flaw to read a limited amount of memory from the cupsd process, possibly obtaining sensitive information. (CVE-2010-1748)
Created: June 22, 2010    Updated: October 14, 2010
Description: From the Drupal advisory:
The Content Construction Kit (CCK) project is a set of modules that allows you to add custom fields to nodes using a web browser.
The CCK "Node Reference" module can be configured to display referenced nodes as hidden, title, teaser or full view. Node access was not checked when displaying these, which could expose view access on controlled nodes to unprivileged users.
In addition, Node Reference provides a backend URL that is used for asynchronous requests by the "autocomplete" widget to locate nodes the user can reference. This was not checking that the user had field level access to the source field, allowing direct queries to the backend URL to return node titles and IDs which the user would otherwise be unable to access. Note that as Drupal 5 CCK does not have any field access control functionality, this issue only applies to the Drupal 6 version.
Created: June 22, 2010    Updated: June 23, 2010
Description: Drupal has reported multiple vulnerabilities in the Views module, including cross-site request forgery and cross-site scripting.
Created: June 22, 2010    Updated: June 23, 2010
Description: From the Ubuntu advisory:
Dan Rosenberg discovered that fastjar incorrectly handled file paths containing ".." when unpacking archives. If a user or an automated system were tricked into unpacking a specially crafted jar file, arbitrary files could be overwritten with user privileges.
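The fastjar flaw is the classic archive path-traversal problem. As an illustration only (fastjar itself is C; this is not its code), here is the kind of check an unpacker needs before writing each archive member:

```python
import os

def is_safe_member(dest_dir, member_name):
    # Reject absolute names, and any name that escapes dest_dir via "..".
    if os.path.isabs(member_name):
        return False
    dest = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest, member_name))
    return target == dest or target.startswith(dest + os.sep)

# The kind of member name the fastjar flaw let through:
assert not is_safe_member("/tmp/unpack", "../../etc/passwd")
# Ordinary members are fine:
assert is_safe_member("/tmp/unpack", "META-INF/MANIFEST.MF")
```

Resolving the joined path and then comparing it against the destination prefix catches both plain "../" names and more devious "a/../../b" combinations.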
Package(s): firefox thunderbird seamonkey    CVE #(s): CVE-2010-1121 CVE-2010-1125 CVE-2010-1196 CVE-2010-1197 CVE-2010-1198 CVE-2010-1199 CVE-2010-1200 CVE-2010-1202 CVE-2010-1203
Created: June 23, 2010    Updated: August 30, 2010
Description: The Firefox 3.6.4 release contains fixes for several new vulnerabilities, some of which may be remotely exploitable.
Package(s): moodle    CVE #(s): CVE-2010-2228 CVE-2010-2229 CVE-2010-2230 CVE-2010-2231
Created: June 23, 2010    Updated: October 11, 2010
Description: The Moodle 1.8.13 and 1.9.9 releases fix four different cross-site scripting vulnerabilities.
Created: June 22, 2010    Updated: July 21, 2011
Description: From the Ubuntu advisory:
Maksymilian Arciemowicz and Adam Zabrocki discovered that OPIE incorrectly handled long usernames. A remote attacker could exploit this with a crafted username and make applications linked against libopie crash, leading to a denial of service.
Created: June 18, 2010    Updated: June 23, 2010
Description: From the Debian advisory:
Dan Rosenberg discovered that pmount, a wrapper around the standard mount program which permits normal users to mount removable devices without a matching /etc/fstab entry, creates files in /var/lock insecurely. A local attacker could overwrite arbitrary files utilising a symlink attack.
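The pmount issue is an instance of a general pattern: creating a file at a predictable path that an attacker may have pre-seeded with a symlink. A defensive sketch (illustrative only; the lock file name is made up) uses O_EXCL so that creation fails instead of following a planted link:

```python
import os
import tempfile

def create_lockfile(path):
    # O_EXCL makes creation fail if anything (including a planted symlink)
    # already exists at the path; O_NOFOLLOW, where available, additionally
    # refuses to follow a symlink as the final path component.
    flags = os.O_CREAT | os.O_EXCL | os.O_WRONLY | getattr(os, "O_NOFOLLOW", 0)
    fd = os.open(path, flags, 0o644)
    try:
        os.write(fd, str(os.getpid()).encode())
    finally:
        os.close(fd)

workdir = tempfile.mkdtemp()
lock = os.path.join(workdir, "pmount_sdb1.lock")  # hypothetical lock name
create_lockfile(lock)

# A second creation attempt (or a pre-planted symlink) fails loudly
# instead of silently clobbering whatever the link points at.
try:
    create_lockfile(lock)
    raced = False
except FileExistsError:
    raced = True
```

The key design choice is to treat "the file already exists" as an error to report, never as a target to reuse, since in a world-writable directory like /var/lock the existing file may be the attacker's.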
Created: June 21, 2010    Updated: June 23, 2010
Description: From the Mandriva advisory:
A vulnerability was reported in the SquirrelMail Mail Fetch plugin, wherein (when the plugin is activated by the administrator) a user is allowed to specify (without restriction) any port number for their external POP account settings. While the intention is to allow users to access POP3 servers using non-standard ports, this also allows malicious users to effectively port-scan any server through their SquirrelMail service (especially note that when a SquirrelMail server resides on a network behind a firewall, it may allow the user to explore the network topography (DNS scan) and services available (port scan) on the inside of (behind) that firewall). As this vulnerability is only exploitable post-authentication, and better more specific port scanning tools are freely available, we consider this vulnerability to be of very low severity.
Package(s): tiff    CVE #(s): CVE-2010-1411 CVE-2010-2065 CVE-2010-2067
Created: June 22, 2010    Updated: March 8, 2011
Description: From the Ubuntu advisory:
Kevin Finisterre discovered that the TIFF library did not correctly handle certain image structures. If a user or automated system were tricked into opening a specially crafted TIFF image, a remote attacker could execute arbitrary code with user privileges, or crash the application, leading to a denial of service. (CVE-2010-1411)
Dan Rosenberg and Sauli Pahlman discovered multiple flaws in the TIFF library. If a user or automated system were tricked into opening a specially crafted TIFF image, a remote attacker could execute arbitrary code with user privileges, or crash the application, leading to a denial of service. (CVE-2010-2065, CVE-2010-2067)
Package(s): znc
Created: June 21, 2010    Updated: June 23, 2010
Description: From the Red Hat advisory:
A Debian bug report noted that ZNC would segfault under certain conditions, such as clicking "traffic" in the webadmin pages or issuing the traffic command on the /znc shell.
Page editor: Jake Edge
There have been no stable updates since those released on June 1.
So I ack this patch - it's the only way to find out.
Currently, that control is exercised through a number of individual system parameters. One controls whether the scheduler tries to coalesce processes onto a subset of the system's CPUs in the hope of letting others sleep. Another knob tells the idle governor which sleep states it is able to use. Yet another controls CPU frequency and voltage response. Simply knowing about all of the available parameters is hard; keeping them all tuned properly can be harder yet.
Len Brown has proposed the addition of an overall control parameter for power management, to be found in /sys/power/policy_preference. This knob would have five settings, ranging from "maximum performance at all times" to "save as much power as possible without actually turning the system off." With a control like this, system administrators could control system power policy without having to learn about all of the individual parameters involved; policy choices would also be applied to any new power-management parameters added in the future.
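To make the proposal concrete, here is a sketch of how an administrative helper might drive such a knob. Note that the knob was only proposed: the path, the five level names, and the helper itself are all assumptions for illustration, exercised here against an ordinary file rather than a real sysfs entry:

```python
import os
import tempfile

# Hypothetical level names; the proposal only described the range from
# "maximum performance" to "maximum power savings".
LEVELS = ["performance", "bias-performance", "balanced",
          "bias-powersave", "powersave"]

def set_policy(knob_path, level):
    # Validate before writing, as an admin tool would for a sysfs knob.
    if level not in LEVELS:
        raise ValueError("unknown policy level: %r" % level)
    with open(knob_path, "w") as f:
        f.write(level + "\n")

def get_policy(knob_path):
    with open(knob_path) as f:
        return f.read().strip()

# Stand-in for the proposed /sys/power/policy_preference file.
knob = os.path.join(tempfile.mkdtemp(), "policy_preference")
set_policy(knob, "powersave")
```

The attraction of the single knob is visible even in this toy: one value expresses intent, and the kernel (rather than the administrator) would map that intent onto every current and future low-level parameter.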
The idea was not universally loved, though. Some commenters asked for more than five settings, but Len argued that anybody needing more complex configurations should just continue to use the individual parameters. Others fear that the single policy might be interpreted differently by different drivers, leading to inconsistent results; they would rather see the continued use of individual parameters which exactly describe the desired behavior. The real discussion, though, cannot happen until some actual code has been posted, if and when that happens.

A separate discussion began with a story from Edward Allcutt about what happens when many processes dump core at the same time:
Edward's response to this non-fun situation was a patch limiting the number of core dumps which can be underway simultaneously; any dumps which would exceed the limit would simply be skipped.
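The policy choice can be illustrated in user space with a counting semaphore: an over-limit dumper either bails out immediately (the skipping behavior) or waits for a slot. This is only a model of the two policies, not the kernel implementation:

```python
import threading

MAX_CONCURRENT_DUMPS = 2  # hypothetical limit
dump_slots = threading.BoundedSemaphore(MAX_CONCURRENT_DUMPS)

def dump_core_skip(write_dump):
    # Skipping policy: an over-limit dump is simply not written.
    if not dump_slots.acquire(blocking=False):
        return False
    try:
        write_dump()
        return True
    finally:
        dump_slots.release()

def dump_core_wait(write_dump):
    # Waiting policy: an over-limit dump blocks until a slot frees up.
    with dump_slots:
        write_dump()
        return True

# Fill both slots, then watch an extra dump get skipped.
held = [dump_slots.acquire(blocking=False) for _ in range(MAX_CONCURRENT_DUMPS)]
skipped = dump_core_skip(lambda: None)   # no slot free
for _ in range(MAX_CONCURRENT_DUMPS):
    dump_slots.release()
done = dump_core_skip(lambda: None)      # slots free again
```

The trade-off is easy to see: skipping loses dumps (and the debugging information in them), while waiting preserves every dump at the cost of keeping the crashing processes around longer.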
It was generally agreed that a better approach would be to limit the I/O bandwidth of offending processes when contention gets too high. But that approach is not entirely straightforward to implement, especially since core dumps are considered to be special and not subject to normal bandwidth control. So what's likely to happen instead is a variant of Edward's patch where processes trying to dump core simply wait if too many others are already doing the same.

Finally, there is this patch by Rafael Wysocki. Rafael is trying to solve the problem of "wakeup events" (events requiring action which would wake a suspended device) being lost if they show up while the system is suspending.
In Rafael's approach, there would be a new sysfs attribute called /sys/power/wakeup_count; it would contain the number of wakeup events seen by the system so far. Any process can read this attribute at any time to obtain this count; privileged processes can also write a count back to the file. There is a twist, though: if the count written to the file does not match the count which would be read from it, the write will fail. A write also triggers a mechanism whereby any subsequent wakeup events will cause an attempted suspend operation to abort.
As with some other scenarios which have been posted, Rafael is assuming the existence of a user-space power management daemon which would decide when to suspend the system. This decision would be made when the daemon knows that no important user-space program has work to do. Without extra help, though, there will be a window between the time that the daemon decides to suspend the system and when that suspend actually happens; a wakeup event which arrives within that window could be lost, or at least improperly delayed until after the system is resumed again. But, with the wakeup_count mechanism, the kernel would notice when this kind of race had happened and abort the suspend process, allowing user space to process the new event.
For this mechanism to work, the kernel must be able to count wakeup events; that, in turn, requires sprinkling calls to a new pm_wakeup_event() function into drivers which can generate such events. So internally it doesn't look all that different from suspend blockers. Some of the comments have suggested that the scheme is quite similar to suspend blockers on the user-space side too, though Rafael believes that it avoids the aspects of that API which generated the most criticism. Regardless, reviews were mixed, and Android developer Arve Hjønnevåg thinks that this approach will not meet that project's needs. So this discussion probably has more rounds to go in the future.
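The handshake is easier to see in miniature. The following is a toy Python model of the /sys/power/wakeup_count semantics described above; it is not kernel code, and the class and method names are invented for illustration:

```python
class WakeupCountModel:
    """Toy model of the /sys/power/wakeup_count handshake."""

    def __init__(self):
        self.events = 0        # wakeup events seen so far
        self.armed = False     # is the abort-suspend mechanism armed?

    def wakeup_event(self):
        """A driver reports a wakeup event (pm_wakeup_event() in the patch)."""
        self.events += 1
        if self.armed:
            self.armed = False   # event in the window: abort the pending suspend

    def read_count(self):
        return self.events

    def write_count(self, count):
        """The write fails unless user space echoes back the current count."""
        if count != self.events:
            raise OSError("wakeup count changed; retry")
        self.armed = True

    def try_suspend(self):
        """Suspend succeeds only if no event arrived since the write."""
        return self.armed
```

A user-space power daemon would loop: read the count, decide the system is idle, write the count back, then initiate suspend; a failed write or an aborted suspend sends it back to the top of the loop to process the new event.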
Kernel development news
Kees Cook is back with another proposal for a kernel change that would, at least in his mind, provide more security, this time by restricting the ptrace() system call. But, like his earlier symbolic link patch, this one is not being particularly well-received on linux-kernel. It has, however, sparked some discussion of a topic that seems to recur with some frequency in that venue: stacking Linux security modules (LSMs).
Cook's patch is fairly straightforward; it creates a sysctl called ptrace_scope that defaults to zero, which chooses the existing behavior. If it is set to one, though, it only allows ptrace() to be called on descendants of the tracing process. The idea is to stop a vulnerability in one program (Pidgin, say) from being used to trace another program (like Firefox or GPG-agent), which would allow extracting credentials or other sensitive information. Like the previous symlink patch, it is based on a patch that has long been in the grsecurity kernel.
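The proposed semantics amount to a small predicate. This toy Python version walks up the process tree looking for the tracer; the function and the parent map are illustrative, and the real kernel check also honors capability and uid rules omitted here:

```python
def may_ptrace(tracer, target, parent_of, ptrace_scope):
    """Return True if tracer may ptrace() target.

    parent_of maps each process to its parent; ptrace_scope of 0 keeps
    the historical behavior, 1 restricts tracing to descendants.
    """
    if ptrace_scope == 0:
        return True              # classic behavior: only the usual uid checks
    # scope == 1: walk up from the target looking for the tracer
    pid = target
    while pid in parent_of:
        pid = parent_of[pid]
        if pid == tracer:
            return True
    return False
```

With a parent map like {"gdb-child": "gdb"}, a debugger can still trace the children it spawned, but a compromised Pidgin gets no access to an unrelated Firefox process.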
As with the previous proposal, Alan Cox was quick to suggest that it be put into an LSM:
But, one problem with that plan is that LSMs do not stack. One can have SELinux, Smack, and TOMOYO enabled in a kernel, but only one—chosen at boot time—can be active. There have been discussions and proposals for LSM stacking (or chaining) along the way, but nothing has ever been merged. So, two "specialized" LSMs cannot do their separate jobs in the kernel and users will have to choose between them.
For "full-featured" solutions, like SELinux, that isn't really a problem, as users can find or create policies to handle their security requirements. In addition, James Morris points out that SELinux has a boolean, allow_ptrace, to do what Cook is trying to do: "You don't need to write any policy, just set it [allow_ptrace] to 0". But, for those that don't want to use SELinux, that's no solution. As Ted Ts'o puts it:
Yet I would really like a number of features such as this ptrace scope idea --- which I think is a useful feature, and it may be that stacking is the only way we can resolve this debate. The SELinux people will never believe that their system is too complicated, and I don't like using things that are impossible for me to understand or configure, and that doesn't seem likely to change anytime in the near future.
Others were also favorably disposed toward combining LSMs, though the consensus seems to be for chaining LSMs in the security core rather than stacking, as was done with SELinux and Linux capabilities (i.e. security/commoncap.c). In the stacking model, each LSM is responsible for calling out to any other secondary LSMs for each security operation, whereas chaining is "just a walk over a list of security_operations" calling each LSM's version from the core, as Eric W. Biederman described. But it's not as easy as it might seem at first glance, as Serge E. Hallyn, who proposed a stacking mechanism in 2004, points out:
It seems that there may be some discussion of LSM stacking/chaining at the Linux security summit, as part of Cook's presentation on "widely used, but out-of-tree" security solutions, but perhaps also in a "beer BOF" that Hallyn is proposing.
The way forward for both of Cook's recent proposals looks to be as an LSM and, to that end, he has posted the Yama LSM, which incorporates the symlink protections and ptrace() limitations that he previously posted. In addition, it adds the ability to restrict hard links such that they cannot be created for files that are either sensitive (e.g. setuid) or those that are not readable and writable by the link creator. Each of these measures can be enabled separately by sysctls in /proc/sys/kernel/yama/.
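The hard-link rule is simple enough to state as a predicate. A toy Python rendering (illustrative only; the real check examines inode modes and the link creator's fsuid):

```python
def may_hardlink(is_setuid, creator_can_read, creator_can_write):
    """Toy version of Yama's hard-link restriction: refuse links to
    sensitive (setuid) files and to files the link creator cannot
    both read and write."""
    if is_setuid:
        return False
    return creator_can_read and creator_can_write
```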
While "Yama" might make one start looking for completions of an acronym ("Yet Another ..."), it is actually named for a deity: "Yama is roughly the 'lord of death/underworld' in Buddhist and Hindu tradition, kind of over-seeing the rehabilitation of impure entities", Cook said. Given the number of NAKs that his recent patch proposals have received, calling Yama the "NAKed Access Control system" shows a bit of a sense of humor about the situation. DAC, MAC, RBAC, and others would now be joined by NAC if Yama gets merged.
So far, discussion of Yama has been fairly light, and without any major complaints. While some are rather skeptical of the protections that Cook has been proposing, they are much less likely to care if they live in an LSM, rather than "junk randomly spewed [all] over the tree", as Cox put it.
Once these simpler security tasks are encapsulated into an LSM, Morris said, the kernel hackers "can evaluate the possible need for some form of stacking or a security library API" to allow these measures to coexist with SELinux, AppArmor, and others. Given the fairly broad support for the LSM approach, it would seem that Yama, or some descendant, could make it into the mainline. Whether that translates to some kind of general mechanism for combining LSMs in interesting ways remains to be seen; it should be worth watching.

MeeGo has chosen Btrfs as its default filesystem. So when a filesystem developer started calling Btrfs "broken by design," people took notice.
Edward Shishkin, perhaps better known for his efforts to keep reiser4 development alive, first posted some concerns on June 3. It seems he ran a simple test: create a new Btrfs filesystem, then create 2048-byte files until space runs out. Others have talked about suboptimal space efficiency in Btrfs before, but Edward was still surprised that he was able to use only 17% of the nominal space in the filesystem before it was reported as being full. Such poor efficiency was, according to Edward, evidence that Btrfs was "broken by design" and should not be used:
Part of the problem comes down to the use of "inline extents" in Btrfs. The core data structure on a Btrfs filesystem is a B-tree which provides access to all of the objects stored in the filesystem. For larger files, the actual file data is stored in extents, which are pointed to from within the tree. Small extents, though, can be stored in the tree itself, hopefully yielding both better space efficiency and better performance. If these extents are sized inconveniently, though, they can cause a lot of wasted space. There's only room for one 2048-byte inline extent in a B-tree node, leaving 1800 bytes or so of unused space. That is a lot of internal fragmentation - a lot of wasted space.
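A bit of arithmetic shows why these extents fit so badly. The leaf size and per-leaf overhead below are assumptions for illustration (actual Btrfs bookkeeping varies with the items involved), but the result lines up with the "1800 bytes or so" figure:

```python
LEAF_SIZE = 4096   # assumed B-tree leaf size, in bytes
OVERHEAD = 250     # assumed per-leaf header and item bookkeeping

def wasted_bytes(extent_size):
    """Space left unused in a leaf holding as many whole inline
    extents of the given size as will fit (extents cannot be split)."""
    usable = LEAF_SIZE - OVERHEAD
    return usable - (usable // extent_size) * extent_size

# With 2048-byte extents only one fits per leaf:
# usable = 3846 bytes, one extent uses 2048, leaving 1798 bytes wasted.
```

Smaller extents pack far better: at 512 bytes per extent the same leaf wastes only the remainder after seven extents, which is why capping the inline size is an effective mitigation.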
As noted in Chris Mason's response, there are two approaches which can be taken to mitigate this kind of problem. One is to turn off inline extents altogether; Btrfs has a max_inline= mount option which can be used for just that purpose. The other approach would be to allow inline extents to be split between tree nodes so that the pieces could be sized to fill those nodes exactly. Btrfs cannot do that, and probably will not be able to anytime soon:
Chris also noted that most of the other variable-size items stored in B-tree nodes - extended attributes, for example - can be split between nodes if need be. So these items should not cause fragmentation problems; it's mainly the inline extents which are at fault there.
But, as Edward pointed out, there's more to the problem than inline extents. In his investigations, he's found numerous places where groups of nearly-empty nodes exist; some were less than 1% utilized. That, in all likelihood, is the real source of the worst space utilization problems. To Edward, this behavior is another sign that the algorithms used in Btrfs are all wrong and in need of a redesign.
Chris sees it a little differently, though:
He has promised to track it down and post a fix. Between the bug fix and turning off inline extents (or, at least, reducing their maximum size), it is hoped that the worst space utilization problems in Btrfs will be no more.
That fix has not been posted as of this writing, so its effectiveness cannot yet be judged. But, chances are, this is not a case of a filesystem needing a fundamental redesign. Instead, all it needs is more extensive testing, some performance tuning, and, inevitably, some bug fixes. The good news is that the process seems to be working as it should: these problems have been found before any sort of wide-scale deployment of this very new filesystem.

Tejun Heo's concurrency-managed workqueues (CMWQ) rework has the potential to be a significant improvement as well, but its path toward merging has not been so smooth. The fifth iteration of the patch set is currently under discussion. While a number of concerns have been addressed, others have come out of the woodwork to replace them.
The CMWQ work is intended to address a number of problems with current kernel workqueues. At the top of the list is the proliferation of kernel threads; current workqueues can, on a large system, run the kernel out of process IDs before user space ever gets a chance to run. Despite all these threads, current workqueues are not particularly good at keeping the system busy; workqueues may contain a backlog of work while the CPU sits idle. Workqueues can also be subject to deadlocks if locking is not handled very carefully. As a result, the kernel has grown a number of workarounds and some competing deferred-work mechanisms.
To resolve these problems, the CMWQ code maintains a set of worker threads on each processor; these threads are shared between workqueues, so the system is not overrun with workqueue-specific threads. The special scheduler class once used by CMWQ is long gone, but the code still has hooks into the scheduler which it can use to track which worker threads are actually executing at any given time. If all workqueue threads on a CPU have blocked waiting on some resource, and if there is queued work to do, the CMWQ code will kick off a new thread to work on it. The CMWQ code can run multiple jobs from the same CPU concurrently - something the current workqueue code will not do. In this way, the CPU is always kept busy as long as there is work to be done.
The first complaint that came back this time was that many developers had long since forgotten what CMWQ was all about, and Tejun had not put that information into the patch series introduction. He made up for that with an overview document explaining the basics of the code. That led quickly to a new complaint: the lack of dedicated worker threads means that it is no longer possible to change the scheduling behavior of specific workqueues.
There were two variants of this complaint. Daniel Walker lamented the loss of the ability to change the priority of workqueue threads from user space. Tejun has firmly denied that this is a useful thing to be able to do, and Daniel has not, yet, shown an example of where it would be desirable. Andrew Morton, instead, worries about being able to change scheduling behavior from within the kernel; that is something that at least one driver does now. He might be willing to let this capability go, but he's not happy about it:
Tejun's reply to this concern takes a couple of forms. One is that workqueues are intended for general-purpose asynchronous work, and that is how almost all callers use it. It would be better, he says, to make special mechanisms for situations where they are really needed. To that end, he has posted a simple kthread_worker API which can be used for the creation of special-purpose worker threads. Essentially, one starts by setting up a kthread_worker structure:
    DEFINE_KTHREAD_WORKER(worker);

    /* ... or ... */

    struct kthread_worker worker;
    init_kthread_worker(&worker);
Then, a kernel thread should be set up using the (existing) kthread_create() or kthread_run() utilities, but passing a pointer to kthread_worker_fn() as the actual function to run:
    struct task_struct *thread;

    thread = kthread_run(kthread_worker_fn, &worker, "name" ...);
Thereafter, it's just a matter of filling in kthread_work structures with actual work to be done and queueing them with:
bool queue_kthread_work(struct kthread_worker *worker, struct kthread_work *work);
So far, there has been no real commentary on this patch.
The other thing which could be done is to associate attributes like priority and CPU affinity with the work to be done instead of with the thread doing the work. That would require expanding the workqueue API to allow this information to be specified; the CMWQ code would then tweak worker threads accordingly when passing jobs to them. At this point, though, it's not clear that there is enough need for this feature to justify the added complexity that it would require.
The CMWQ code certainly adds a bit of complexity already, though it makes up for some of that by replacing the slow work and asynchronous function call mechanisms. Tejun is hoping to drop it into linux-next shortly, and, presumably, to get it merged for 2.6.36. Whether that will happen remains to be seen; core kernel changes can be hard, and this one may not, yet, have cleared its last hurdle.
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Benchmarks and bugs
Page editor: Jonathan Corbet
News and Editorials
Custom Linux distributions that tailor the operating system to the needs of a specific set of users are one of the joys of open source development. Classrooms, audio production and recording studios, and high-performance computing (HPC) clusters — all have individual requirements that diverge from the standard server or office-oriented desktop distribution. Poseidon Linux is a perfect example, a specialty distribution created for scientists. The core maintainers are oceanographers at Brazil's Universidade Federal do Rio Grande, but Poseidon has grown in popularity enough that releases are now made to support English, German, and Spanish in addition to Portuguese.
Poseidon Linux dates back to 2004, and was originally built on top of the Kurumin live CD distribution, a now-defunct Portuguese version of Debian running KDE. With 2008's 3.0 release, however, Poseidon migrated to Ubuntu as its base distribution and GNOME as its desktop environment. The project's site describes the latest version as 3.2, released in May 2010, but so far 3.2 only appears to be available on one of the mirror sites in Germany, although it is not a German localization. In addition, only the 32-bit version of 3.2 has been made publicly available (previous releases have always included both 32-bit and 64-bit builds).
Weighing in at 2.4GB, the ISO image requires DVD media and is provided as a direct HTTP download only. Poseidon can be run as a live DVD or installed on the hard drive. Other than the cosmetic changes (which are a nice touch), Poseidon deviates from its Ubuntu base by stripping down the installed set of general-purpose applications and packaging in a long list of scientific applications, libraries, and development tools.
Some of these applications are available in upstream Ubuntu and Debian, such as the GNU Octave computation system, GRASS geographic information system (GIS), or IBM's data visualization package OpenDX. Others are not, such as the Terraview and SPRING GIS programs. The support tools include Python and C libraries for numerical computation, the G77 FORTRAN compiler, and modules for using GIS data with PostgreSQL. It is also nice that the distribution includes a wide variety of R statistical packages that users would otherwise have to seek out and download individually.
The bulk of the specialty packages involve either mapping and GIS, or statistics and data modeling, which reflects the creators' field of study. There are physics, astronomy, math, chemistry, and biology packages in the default install, too, however. TeX is represented by the GUI editors LyX and Kile, and by the BibTeX bibliographic editor JabRef. The 3-D modeler Blender is included as well, and its placement as a computer-aided design (CAD) tool points to the lack of a high-quality open source 3-D CAD program.
I tested the 32-bit 3.2 release, from the mirror site mentioned above, in live DVD mode, with only a few hiccups along the way (not counting the lack of a BitTorrent release for the ISO image, which is its own practical issue). A few of the packages were (quite puzzlingly) not as up-to-date as the upstream Ubuntu repositories, notably the R-Commander GUI interface to R, which reported a version conflict with the installed R package. In addition, only the German keyboard layout was functional, even after I added the appropriate US keyboard layout. These are minor difficulties that may get ironed out as the work on 3.2 continues — hopefully 64-bit ISOs and other updates are still to come.
Poseidon aims for the full-featured desktop Linux model, not a stripped-down environment with a minimalist window manager. As such, it feels exactly like a standard GNOME or Ubuntu desktop, and the traditional packages that Poseidon omits from the default installation can be installed through Apt as usual. The project provides package updates to the scientific applications through its own repository. Full releases of Poseidon Linux have been irregular; 3.0 and 3.1 both occurred within 2008, but it was more than a year between 3.1 and 3.2. Judging by the change log, however, this appears to have more to do with updates to the core scientific software applications than with any effort to align with Ubuntu's six-month release cycle.
Several of the add-on scientific packages are unlikely to gain official Debian or Ubuntu maintainers, owing to their niche user bases (or, in some cases, outdated toolkit dependencies). MB-System, for example, is a sonar processing and display tool clearly of importance to oceanographers, and perhaps to others who live or work by the ocean, but it requires such domain-specific knowledge that it is unlikely to be packaged by a typical distribution. The tide predictor XTide, on the other hand, is still packaged for Ubuntu, but it is one of the few such applications using the X Athena Widgets toolkit (Xaw). Most other Xaw programs (xclock, xload, xbiff, etc.) have been supplanted by GTK+ and Qt replacements, but there is no alternative for XTide.
Many of the applications are produced by small teams (at least, compared to the organizations that work on Firefox or other widely-deployed programs) scratching their own research itch, often in an academic or institutional setting. That makes them afterthoughts to modern distributions focusing on the desktop, which in turn can mean that they are harder to install and keep up-to-date. In those cases, using a targeted distribution like Poseidon will undoubtedly save time and frustration, particularly if one has to maintain a laboratory's worth of computers.
The same is also true of the computational programming libraries, R packages, and other add-ons. Poseidon ships with Emacs support for Prolog and GRI, two languages with small user bases outside of their particular fields. While an individual user might have no problem checking for updates and installing the Lisp packages by hand, having to keep an entire team up-to-date simultaneously is more difficult.
There are at least a few other scientifically-targeted Linux distributions under active development. Fermilab and CERN both originally maintained their own distributions that were source-compatible with Red Hat Enterprise Linux (RHEL), but combined forces to work on Scientific Linux (SL). Like Poseidon, SL includes scientific applications and libraries, but it also incorporates several important system-level tweaks, such as support for the distributed Andrew File System (AFS). Quantian is a Knoppix-based live DVD distribution that focuses exclusively on numerical and quantitative analysis; it includes, for example, the ability to quickly set up an OpenMosix cluster of nodes running Quantian.
If there is one area in which Poseidon falls short, it is the lack of community tools. The Poseidon site does not maintain a mailing list or discussion forum, although searching on the web indicates that it is used at several other institutions that do oceanographic research, and that it has a following among GIS and mapping enthusiasts as well. The project site ostensibly has an RSS news feed, but it is a full release cycle out of date. There is one contact email address, and a list of requested packages to be considered for future versions, but otherwise development is a black box.
Perhaps this is attributable to Poseidon's creators' full-time jobs in oceanographic research; that is an acceptable excuse, but it would still be nice to see the team open up some form of public discussion outlet, if not a full issue tracker and other large-distribution paraphernalia. They are doing good work in bringing useful applications and libraries to scientific users in a turn-key distribution — reaching out to the community is simply the next step, for the feedback, additional volunteer-hours, and increased exposure overall.
New Releases

The final release is scheduled for July 15, 2010.
Debian GNU/Linux

"We intend to copy Etch to archive.debian.org on the evening (UTC) of Sunday 20th June. Etch will then gradually disappear from the mirrors; the dists tree will be immediately removed and the files in the pool will be removed in groups over the following few days." Here is the first post and here is the second post.
SUSE Linux and openSUSE

The openSUSE project announced the posting of a set of proposals for the distribution's future strategy. There is a "community statement" which is under discussion now; the various strategies ("Base for derivatives," "Home for developers," and "Mobile and cloud-ready distribution") are set to be discussed on subsequent days.
Newsletters and articles of interest
Page editor: Rebecca Sobol
Development

This week, your editor took a look at Sphinx, which is headed toward a 1.0 release one of these days.
Sphinx was originally created for use in generating the Python documentation, but it has grown to the point that quite a few other projects (Bazaar, Blender, Calibre, Django, and Mayavi, among others) are using it as well. It remains heavily oriented toward code documentation, and it has a number of features (such as extracting documentation strings from functions) to support that goal, but it is not limited to that role.
At the core of Sphinx is the reStructuredText markup language. It's a simple language, aimed at minimizing excessive decoration and making the source look as much like the final product as possible. So paragraphs are delineated by blank lines, *italics* becomes italics, bulleted list items begin with asterisks, etc. It's a straightforward, low-ceremony way of getting the text into the system.
What Sphinx has done, though, is to add a long list of features and directives on top of reStructuredText. Much of it is intended to make code documentation easy; for example, a C function would be introduced with:
.. cfunction:: void kthread_bind(struct task_struct *k, unsigned int cpu);
When the documentation is generated, this code will be marked up as desired (Pygments is used for that task) and incorporated into the document. Similar directives exist for the documentation of variables, exceptions, classes, methods, command-line options, and so on.
Beyond that, there are a number of features intended to help the creation of extensive, multi-volume documents. Sphinx can generate tables of contents which span multiple documents. Index entries will automatically be made from function definitions, but there's a mechanism for adding manual index entries as well. Code examples can be included directly from source files - an invaluable feature for anybody trying to keep documentation and code in sync. There is also an extension mechanism allowing projects to add any special directives they may need.
Sphinx can generate output in HTML format or in PDF by way of either LaTeX or rst2pdf. There is a theme mechanism allowing the customization of the output, which generally looks pretty nice, if perhaps just a little on the gaudy side.
The current version is 0.6.7, which was released in early June. The development version is 1.0b2, as of this writing. The headline feature for 1.0 appears to be "domains," which provide collections of directives for specific programming languages. Domains, in other words, are another step in turning Sphinx from a Python-specific tool into something more generally applicable. Another important new feature is a builder which can output documents in the Epub format for mobile reader devices. Some new themes have been added. There's quite a bit more; see the release notes for details. While 1.0 is still in development, a look through the discussions on the project's mailing list suggests that quite a few people are using it.
This article, clearly, is a superficial overview. To truly describe the ups and downs of a tool like Sphinx would require writing a book with it - something your editor was unable to find the time to do by the weekly LWN deadline. There are larger projects currently under consideration, though, and Sphinx is very much on the radar for those. Any tool which makes the creation of high-quality documentation easier can only be a good thing.
Don't be a slave to syntax. Syntax is the Maya of programming, forever blinding many of the weaker souls to the eternal light of symbolic expressions.
Newsletters and articles
Page editor: Jonathan Corbet
Non-Commercial announcements

FSF's operations manager John Sullivan added, "Now that some details of ACTA have been made public, we know that our previous concerns were justified. We are asking the free software community to join us in speaking out against this attack on the public's freedom, and I hope that people will not only sign the statement, but also write and publish their own specific thoughts about the issues. This is a time for people to show -- in as many ways as possible -- that they value the freedoms ACTA threatens. The more signatures and visible support we have, the weaker ACTA will look."
Commercial announcements

The Open Invention Network (OIN) announced the launch of its Associate Member Program and that Canonical has become the first Associate Member. "OIN Associate Members, such as Canonical, demonstrate support and commitment to limiting the effects of patent disputes in Linux. Canonical's activities and those of all companies seeking to adopt and use Linux will be facilitated as OIN works closely with Canonical and other companies that are supporting Linux's growth and expansion. Through the support of its founding members including IBM, NEC, Novell, Philips, Red Hat and Sony, OIN has amassed a broad portfolio of patents, including patents held by nominees on its behalf."
Articles of interest

A French-language article reports that some unnamed investors have injected more money into Mandriva, which will not be sold after all. Business model changes may follow. English text can be had via Google, but don't be confused when it bizarrely translates "les utilisateurs Mandriva" as "Gentoo users." (Via O'Reilly.)

Also worth a read: a discussion with Michael Meeks on the future of OpenOffice.org. "The problem is we're not starting from square one. OO.o has quality issues, maybe fewer than we did have, which is encouraging, but they're certainly there. And in consequence if you slow down changes you may improve quality, but from what base? Interestingly, the Linux kernel quality metric is improving and yet the rate of change is accelerating, so there are other ways to work. Maybe you have to go down hill a bit before you can find a better way up, so its possible if we relax QA just a bit, we make things worse, but if we increase the number of commits very radically we move to a different place in the graph, and bugs get traced and fixed much more quickly."

Another writer suggests a different course for the free software movement. He says that users aren't responding well to being told "No" (or "Gno" as he puts it in a little dig at the FSF), so those who want to push freedom need to provide realistic alternatives to things like iPads, DRM-encrusted media, proprietary "cloud" services, and so on. "The free software movement, though, seems to be shrinking. It still has its adherents, of course. But, when I look around at Linux events I see a sea of Mac OS X. Most contributors I know see no problem with proprietary services like Dropbox and Ubuntu One. With very few exceptions, most companies that work in the community have settled on some mixture of proprietary and open source services to try to find a working revenue model. In short, the free software philosophy seems to have gone out the window for most users and contributors. And I'll freely admit, I've advocated the pragmatic approach — because after more than 10 years of working in the community, it's clear that getting things done with a purist approach isnt working."

He's not done yet. "Now that copyrights have been made an issue, someone capturing Novel [sic] can most probably really do what SCO's lawyers only thought they could do: issue real Linux licenses and make them stick."
New Books

Volume 2, No. 1 has been published. There are several new articles, covering topics like GPL obligations, copyright boundaries, and a piece on "challenges and opportunities for open source legal communities" by Luis Villa.
Contests and Awards

"All artists in the Open Clip Art Library Community are encouraged to submit entries to be judged by an esteemed panel that includes Donna Benjamin and Giorgos Cheliotis. The logo selected to represent the Conference will earn the artist credit and promotion on the event's pages, as well as an invitation to attend the conference, itself. Any Community artist wishing to submit a logo for consideration should be sure to choose "fcrc logo" in the 'submit to contest' selection box, while filling out the standard clip art upload form." The submission deadline is July 9, 2010.
Education and Certification
Meeting Minutes

The Python Software Foundation has appointed its officers. "Guido van Rossum returns as President of the PSF. As President, Guido serves as the principal representative of the Foundation."
|SciPy 2010||Austin, TX, USA|
|Linux Vacation / Eastern Europe||Grodno, Belarus|
|Euromicro Conference on Real-Time Systems||Brussels, Belgium|
|11th Libre Software Meeting / Rencontres Mondiales du Logiciel Libre||Bordeaux, France|
|State Of The Map 2010||Girona, Spain|
|Ottawa Linux Symposium||Ottawa, Canada|
|EuroPython 2010: The European Python Conference||Birmingham, United Kingdom|
|Community Leadership Summit 2010||Portland, OR, USA|
|O'Reilly Open Source Convention||Portland, Oregon, USA|
|11th International Free Software Forum||Porto Alegre, Brazil|
|ArchCon 2010||Toronto, Ontario, Canada|
|Haxo-Green SummerCamp 2010||Dudelange, Luxembourg|
|Gnome Users And Developers European Conference||The Hague, The Netherlands|
|Debian Camp @ DebConf10||New York City, USA|
|PyOhio||Columbus, Ohio, USA|
|DebConf10||New York, NY, USA|
|YAPC::Europe 2010 - The Renaissance of Perl||Pisa, Italy|
|Debian MiniConf in India||Pune, India|
|KVM Forum 2010||Boston, MA, USA|
|August 9||Linux Security Summit 2010||Boston, MA, USA|
|August 13||Debian Day Costa Rica||Desamparados, Costa Rica|
|August 14||Summercamp 2010||Ottawa, Canada|
|Conference for Open Source Coders, Users and Promoters||Taipei, Taiwan|
|Free and Open Source Software Conference||St. Augustin, Germany|
|European DrupalCon||Copenhagen, Denmark|
|August 28||PyTexas 2010||Waco, TX, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds