Building a service around crowd-sourced information is a difficult undertaking — even top-tier projects like OpenStreetMap and Wikipedia have had their share of troubles, particularly when it came to steering the community of volunteer contributors. Now, the owner of the once-dominant travel wiki Wikitravel is squaring off against its former community members in an acrimonious dispute that has already resulted in multiple lawsuits.
At the root of the case is the desire of some former Wikitravel contributors to start a new travel-oriented site and import the original site's data. Wikitravel consists of user-contributed tourism and travel information for sites around the globe; as with many crowd-sourced projects, the data is licensed to permit re-use: Wikitravel uses Creative Commons Attribution-ShareAlike (CC-BY-SA).
The site was founded in 2003, and was sold to its current owner Internet Brands (IB) in 2006. IB owns a number of unrelated web properties, but it initially pledged to the Wikitravel community that it would keep things as it found them, and limit its revenue-seeking efforts to "unobtrusive, targeted, well-identified ads." Like Wikipedia, Wikitravel's content was divided into language-specific subdomains, each run more-or-less independently by a local contributor community. After the sale to IB, the German and Italian Wikitravel communities promptly left and started the rival site Wikivoyage, but other language communities stayed, including the largest, English.
Of course, the unobtrusive-advertising pledge was not a legally-binding contract, and over the years the ads on Wikitravel became more intrusive (including animated Flash ads, which were frequently flagged by users as being against the site's advertising policy), which led to dissatisfaction among the remaining editors. The community also complained that IB did not respond to bug reports, and allowed the version of MediaWiki running the site to languish several years without updates. But the final straw came in a 2011 proposal to integrate sponsored hotel-and-travel booking elements directly into article pages.
Opinion might vary as to how "intrusive" any given monkey-punching Flash ad is, but the booking tool seemed to detract from the site's purpose as an unbiased information source. Volunteer Peter Fitzgerald, like many others, voiced his opposition in the advertising policy discussion.
Nevertheless, IB proceeded with integration of the booking tool in early 2012.
In April, Wikitravel volunteer editors decided that they had had enough, and reached out to Wikimedia (the organization behind Wikipedia and related sites) with a proposal to start a new travel wiki, based on a merger of content from Wikivoyage, Wikitravel, and a few smaller efforts. The proposal became a formal "request for comments" and was subsequently approved by Wikimedia's board following a public vote.
IB, however, did not greet the proposal with similar enthusiasm. According to Wikitravel contributor Jani Patokallio, IB responded first by removing discussion of the migration from Wikitravel's talk pages, blocking some participating users, sending threatening messages to others, and removing the administrator privileges of several. On August 29, IB stepped up its response significantly when it filed a lawsuit against James Heilman and Ryan Holliday, two volunteer Wikitravel contributors who participated in the Wikimedia migration plan. The suit charges Heilman and Holliday with trademark infringement, trade name infringement, unfair competition, and civil conspiracy.
The filing [PDF] is eleven pages of lawyerly prose, but the gist of it is the accusation that Heilman and Holliday offered a "competitive website by trading on Internet Brands’ Wikitravel Trademark" — which seems to mean that they used the name Wikitravel when proposing and discussing the migration to Wikimedia. The suit also repeatedly refers to the rival site as "Wiki Travel Guide," a name that IB claims infringes on its trademark. Notably, no actual site has yet resulted from the migration proposal, and only on October 16 did the new project decide on a name — which will evidently remain Wikivoyage. Patokallio examined the suit in detail in another post, including the trademark infringement claims.
Notably, the lawsuit does not take issue with the right of the departing contributors (or anyone else) to take Wikitravel page content and import it into a rival project, but it does suggest (in claim 31) that the volunteers are misappropriating the CC-BY-SA-licensed content by not properly attributing Wikitravel as its source. This is still a problematic claim in light of the fact that, when the suit was filed, the Wikimedia-run site did not yet exist. But Creative Commons highlighted the licensing dimension of the suit in a post on its blog, where it observed that if the licensor of CC-BY-SA content (in this case, IB) "wants to completely disassociate themselves from particular reuses, they have the right to request that all attribution and mention of them be removed, and those reusing the work must do so to the extent practicable." Consequently, if IB so wished, it could request that Wikitravel not be mentioned on the new site as the source of the imported content.
Such an action would presumably stop the alleged trademark infringement at the new site, were it not that IB concurrently claims that the new site fails to properly credit Wikitravel as a source. To that point, the Creative Commons post notes that even if Wikitravel or another licensor feels that it is not receiving proper attribution for a derived work, the license only requires the creator of the derivative work to provide attribution in "a manner 'reasonable to the medium or means' used by the licensee, and for credit to be provided in a 'reasonable manner.'" The post does not go into detail, but the suggestion is that this is simple to do — perhaps through the wiki engine's existing revision tracking system at import-time.
Wikimedia took an even harsher view of the lawsuit against Heilman and Holliday, calling it a clear attempt to "intimidate other community volunteers from exercising their rights to freely discuss the establishment of a new community" that ultimately seeks to prevent Wikimedia from starting a competing project. The IB filing does mention Wikimedia, though briefly, alleging that it may add Wikimedia and other "co-conspirators" to the list of defendants at a later time. Wikimedia then sued [PDF] IB on September 5 seeking a declaratory judgment that IB has no right to impede or disrupt the creation of a rival travel wiki.
Wikimedia's suit seeks to recover attorney's fees for the defendants, but it primarily seeks relief by asking the court to keep IB from interfering either in the import of data from the Wikitravel site or in communication between Wikitravel contributors — including ex-contributors. Holliday has taken proactive measures, first seeking to transfer the lawsuit from state to federal court, then seeking [PDF] a dismissal of the case under anti-"Strategic Lawsuit Against Public Participation" (SLAPP) legislation. A SLAPP suit is one in which the plaintiff is primarily trying to censor or intimidate the defendant (as opposed to one where it actually believes it will win at trial). IB denied that assertion in an October 16 response [PDF], and argued that the case is a legitimate dispute among business competitors. Holliday has until October 22 to file a reply, and at present the first court date is scheduled for November.
Reading through it, IB's suit does not seem to have much depth. It repeatedly refers to the rival travel wiki site (and in particular to the "confusingly similar" name of that site) in the present tense, despite the fact that the site and its name did not exist when the suit was filed, and it couches its allegations in business terms (such as claiming the defendants are "profiting" from IB's trademark and have subsequently been "unjustly enriched"), despite the projects' not-for-profit nature. The "civil conspiracy" charge is also puzzling, and appears to amount solely to the fact that the listed defendants discussed the project. But then again, one can rarely predict where a lawsuit will head; IB is no stranger to acrimonious litigation — it is currently embroiled in another suit against former employees for writing a rival web discussion forum package similar to IB's vBulletin product.
Alienating one's users and crowd-sourcing contributors is rarely a wise move; actively suing them in addition probably guarantees that Wikitravel (at least under IB's care) is doomed. But that alone does not guarantee success for the new Wikivoyage project. Should the lawsuit be dismissed or turn out in the new project's favor, the new travel wiki project will still have a technical hurdle to overcome. Although IB does not dispute the license under which Wikitravel's content is published, the company has never provided database dumps or other convenient ways to export the data in bulk. That leaves page-scraping and other tedious procedures to extract the thousands of pages and uploaded media, not to mention the extra challenge of removing spam and vandalism (which have been on the rise since the exodus of the main Wikitravel editor community).
A still bigger hurdle might be attracting the eyes of site visitors — the web is littered with failed attempts to fork popular properties, and even a powerful advocate like Wikimedia does not guarantee success. For example, most of us have probably forgotten about Amazon's "Amapedia" project, which attempted to build a crowd-sourced product review wiki. Despite being directly integrated with the web's number one retailer, it flopped.
No doubt Wikimedia is the largest player in the open data and crowdsourced-material market, but that does not mean all of its projects can automatically unseat pre-existing competitors; consider the relative mind-share enjoyed by its ebook library Wikisource compared to Project Gutenberg. The vast majority of the old Wikitravel editors seem to already be on board with the new effort, which is vital — but it could still be a long, rocky road ahead.
On October 26, Microsoft will release Windows 8. Normally, a release of that kind would be largely uninteresting to Linux users, but Windows 8 branding brings with it a hardware requirement that will definitely impact those wanting to use Linux: UEFI secure boot. Distributions such as Fedora, SUSE, and Ubuntu have been working on their plans for supporting secure boot for more than a year now, so it may be something of a surprise that the Linux Foundation (LF) recently announced its own entrant into the secure boot derby. But the LF "pre-bootloader" is mostly meant as a stop-gap measure for those distributions that have yet to decide on their secure boot strategy.
Secure boot is meant to protect operating systems from pre-boot malware by cryptographically protecting the first step of the boot process. Only binaries signed by keys stored in the UEFI firmware will be allowed to run when secure boot is enabled—which must be the default for certified hardware. On Windows and other systems, the first-stage bootloader will check the signatures of the next steps, but that is not strictly required. Since the keys stored in the firmware are under the control of the hardware makers, it is likely that there will only be keys from Microsoft and the manufacturer stored there. That leads to the distasteful, but unavoidable, need to get bootloaders signed by Microsoft in order to boot on secure-boot-enabled systems.
Much of the hue and cry surrounding secure boot has been about the need for Microsoft-signed bootloaders, but there is little alternative. One could imagine some kind of "Linux certificate authority" that could get its key installed on systems, but that ship has sailed at this point—hardware will be available soon with just the keys from Microsoft and the vendor. Any solution for booting Linux on those systems requires a bootloader signed by Microsoft or disabling secure boot altogether. The solutions vary on exactly what that bootloader actually does.
The LF approach is the "smallest piece of code we could think of" that would allow Linux to boot on Windows 8 systems, James Bottomley, member of the LF Technical Advisory Board (TAB), said in email. Bottomley did much of the work on what he calls the "pre-bootloader" that will boot any Linux—signed or unsigned—using a "present user" test. That test ensures that there is a user present at boot time. The pre-bootloader is, of course, signed by Microsoft, but instead of requiring a signature on the next stage of the boot process (typically a full-blown bootloader like GRUB2), it will load and run any code if the user at the keyboard allows it.
Beyond that, if the system is in "setup mode", the pre-bootloader will ask the user if they want the signature of the second-stage bootloader installed into the UEFI secure database. That will allow unattended booting of the system in the future. While it is mandatory under the certification requirements for hardware vendors to provide a way to put systems into setup mode (or to disable secure boot), the user interface to do so is left unspecified. That means there will be several—possibly many—different ways to put systems into setup mode. In order to collect information about these different mechanisms, the pre-bootloader will refer users to an LF web site that will be gathering and disseminating instructions for putting systems into setup mode.
The intent, Bottomley said, is "to ensure that smaller distributions [have] a policy free option they could turn to as an interim solution while they sorted out what their own security policy around secure boot would be". The pre-bootloader binary, once signed by Microsoft, will be available on the LF web site for anyone to use. Distributions that don't want to have their own bootloader signed or, indeed, participate in secure boot at all, will be able to use the pre-bootloader for both live distributions (on CD/DVD or USB devices) and for installations to the hard disk. However, users will either need to get their systems into setup mode and install the second-stage bootloader signature/hash—or be present on every boot.
Requiring users to find and enable setup mode has a minor side benefit, Bottomley said. The information on how to do that will be useful for others, to either permanently disable secure boot or to install their own keys (either in addition to the existing keys or supplanting them). For booting a live distribution, though, testing for user presence should not be that much of a burden.
There is an alternative to requiring systems to be put into setup mode or for users to be at the keyboard when booting: a pre-bootloader being used by (at least) Fedora and SUSE, called "shim". Shim takes a different approach, but still allows distributions to set their own policies. There are two main differences between it and the LF pre-bootloader, though. Shim can contain an internal keyring for keys used to sign the second-stage bootloader (or just the cryptographic hash(es) of authorized bootloaders), which will be checked in addition to the factory-installed keys. In addition, in its present incarnation, shim will not provide a way to circumvent signature/hash checking with a present user test. According to shim developer Matthew Garrett, there is strong consideration of adding that kind of test for removable media in support of live distributions and installation media, though.
Even a shim that has an internal keyring can support booting binaries that are signed by keys that don't appear on that keyring (and are not present in the firmware). It does that by using an idea that SUSE came up with for storing keys and hashes without requiring users to find and enable setup mode: the "Machine Owner Key" (MOK). It turns out that the UEFI specification provides some secure storage locations that can only be accessed (and, importantly, changed) during the boot process. Shim uses that storage as a place to put keys or hashes if the user directs it to, which avoids requiring user presence on subsequent boots.
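For distributions (or machine owners) going the MOK route, the workflow might look something like the sketch below. The openssl steps are standard key generation; the sbsign and mokutil invocations are shown as comments and are illustrative assumptions, since they require an actual UEFI system and the enrollment itself is confirmed interactively by shim's MokManager at the next boot.

```shell
# Hedged sketch: creating a Machine Owner Key and signing a
# second-stage bootloader with it.

# 1. Generate a key pair and certificate; the DER-encoded form is
#    what gets enrolled in the MOK list:
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Example MOK/" -keyout MOK.key -out MOK.crt
openssl x509 -in MOK.crt -outform DER -out MOK.der

# 2. Sign the second-stage bootloader with the new key
#    (requires sbsigntools; illustrative):
# sbsign --key MOK.key --cert MOK.crt \
#        --output grubx64.efi.signed grubx64.efi

# 3. Queue the certificate for enrollment; MokManager prompts for a
#    one-time password at the next boot (illustrative):
# mokutil --import MOK.der
```

Because the MOK store can only be changed during boot, the queued request cannot be silently completed by software running in the installed system — the user must confirm it at the console.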
Both Fedora and SUSE will be releasing shim binaries that contain their distribution keys on the internal keyring and are signed by Microsoft. Because of the MOK storage idea, though, those binaries could be used by other distributions. Even distributions that are not planning to sign their second stage (and their kernel) could use one of the signed shims. A recently added shim feature allows hashes (rather than just keys) to be stored in the MOK, so a distribution that wants to minimize its dealings with secure boot can simply ship one of those shims and instruct its users to store the second-stage hash in the MOK as prompted by shim. Or those distributions can use the LF pre-bootloader.
That last part is a bit of a sticking point, at least for Garrett. Because the pre-bootloader comes with an LF "stamp of approval", it may well be seen as "the Linux solution" for secure boot. But Garrett believes that the pre-bootloader isn't "terribly useful". All of the functionality that it provides is also available in shim, he said, except for the ability to "hit y and it'll run whatever you want". Instead, shim users can just use the interface to add keys or hashes to the MOK and boot unattended forevermore.
There are also some dangers in the LF approach, Garrett said. Because non-technical users are easily fooled into clicking through security warnings, the pre-bootloader could be used as a vector to attack Windows.
While trojaned Windows binaries aren't directly a problem for Linux, they could be a problem for signed first-stage bootloaders. Secure boot has a database of blacklisted keys and hashes that can be used to stop malware from running on the system. If the LF pre-bootloader is used by some form of malware, it could be blacklisted. That's also true of any shim similarly used, but shim's present user "test" is a bit more complicated than that of the pre-bootloader, so malware authors may be more inclined to target the simpler test.
In any case, signed Linux first-stage bootloaders are clearly at some level of risk of being blacklisted down the road. That risk is inherent in the fact that the secure boot requirements are set by Microsoft to further its own ends. One would guess it won't use its power over the contents of the blacklist indiscriminately, but there is no technical obstacle to it doing so.
An alternative that the LF could have taken would be to create a shim with an empty keyring and get that signed, shim developer Peter Jones said in email. A shim built that way would only run code signed by the keys in the firmware. Since it wouldn't have the key or hash for the second-stage bootloader available in the databases it consults, it wouldn't run the second stage, but it would prompt users to add that key or hash to the MOK on first boot. That would allow smaller distributions or those uninterested in signing their binaries to use shim—and avoid the present user test (or setup mode) except for the first boot.
Garrett in particular seems irritated by the LF approach. Because of the uncertainty over how to get systems into setup mode, he believes the pre-bootloader risks making Linux a second-class citizen on secure boot hardware.
Furthermore, he is unhappy that the LF went its own way rather than working with Fedora and SUSE on shim; in his view, the decision to build a separate pre-bootloader came after the TAB had been urged to collaborate on shim.
That's not quite the way Bottomley sees things, however. The LF and TAB were "exclusively concentrating on tools that keep linux booting and installing", with an emphasis on the simplest solution. As he noted, "Shim was originally designed as a solution to take advantage of secure boot and enforce a security policy rather than one that simply permits any linux distribution to boot". As a neutral party, at least with respect to distributions, the LF did not want to take sides on what kinds of secure boot policies distributions should choose.
It may well turn out that the LF pre-bootloader is simply a temporary measure and that shim can handle all of the different use cases—the LF code uses some parts of shim, after all. Or perhaps the simplicity of the pre-bootloader code will be attractive to some distributions. The pre-bootloader requirement to get the system into setup mode might be attractive to some users or distributions that want to ensure their keys are the only ones present in the system, for example. Bottomley and the LF would be fine with any of the possible outcomes.
Booting Linux on new x86 hardware is clearly going to be a bit more difficult than it has been in the past. Due to a lot of hard work from various folks, though, it will be a lot easier than it could have been. In the end, there is room for both solutions, though there is merit to Garrett's concern that the LF solution will be taken as "the Linux solution" for secure boot. At last report, Ubuntu planned to use its own first-stage bootloader, and other options may arise, so the "one true Linux secure boot solution" may never really exist.

The inaugural Korea Linux Forum (KLF) was held in Seoul, South Korea, in mid-October. The stated goal was "to foster a stronger relationship between South Korea and the global Linux development community." In truth, South Korea is already a strong presence in this community; arguably KLF was more of a recognition and celebration of that relationship. In any case, one conclusion was clear: there is a lot going on in this part of the world.
Some years ago, the Open Source Development Laboratories recognized that Japanese companies were increasingly making use of Linux but were not always participating in the development community. To help close the loop, OSDL began a series of events where Japanese developers could hear from — and talk with — developers from the wider community; that practice continued into the Linux Foundation era. Your editor was lucky enough to be able to attend a number of these events, starting in 2007. These conferences cannot claim all of the credit for the marked increase in contributions from Japan over the last several years, but it seems clear that they helped. The Japanese Linux Symposium has since transformed into LinuxCon Japan, a proper development conference in its own right.
KLF is clearly meant to follow the same pattern, but there is a big difference this time around: community participation from Korea is already significant and increasing in a big way. For example, Samsung first appeared in the list of top kernel contributors in the 2.6.35 development cycle over two years ago; it has held its place on that list ever since. Contributions from Korean developers are clearly not in short supply. That made the job of the KLF speakers easy; rather than encouraging Korean developers to participate more, they were able to offer their thanks and talk more about how to get things done in the community.
The first talk (after the inevitable cheerleading session by Linux Foundation head Jim Zemlin) was by Samsung vice president Wonjoo Park; his goal was to make it clear that Linux is an important resource for Samsung, the "host sponsor" for the event as a whole. Software, he said, is the means for product differentiation in today's market; it is the most important part of any product and drives the business as a whole. Samsung, it seems, is a software company.
The company got its start with Linux in 2003, using a distribution from MontaVista. Use of Linux expanded over the years: appliances in 2005, televisions in 2006, and so on. Samsung's first Linux smartphone came out in 2004; it featured a voice-activated phone book. In 2007 Samsung joined LiMo; the first LiMo-based phone came out in 2009. In 2012, products all across the Samsung line, from phones and tablets to home theater systems, cameras and printers, are all based on Linux.
Now, of course, much of the company's efforts are going into furthering the Tizen distribution. He mentioned the recently-posted F2FS filesystem: Samsung could have held onto that code and kept F2FS proprietary, he said, but that would have deterred innovation; sharing it, instead, allows the company to accept changes from others. Samsung has also put together an extensive license compliance process after a "rough start" that forced the company to apologize to the community. One of the results is opensource.samsung.com, one-stop shopping for the source code for Samsung's products.
In summary, he said, Linux has become a "core competitive competence" for Samsung; the company would not be able to do what it does without it.
Korean rockstar hacker Tejun Heo gave a well-received keynote presentation on what it is like to be a community developer. It is hard, he said, but then, working in Korean companies, where the expectations are high, is hard in general. Developers who can succeed in the corporate setting can make it in the community as well. Developing in the community has a lot of rewards, including the fact that credit for the work stays with the developer rather than accruing to the sponsoring company. It is a challenging path, but full of benefits.
KLF was, like the early Japan events, oriented toward information delivery rather than the sort of critical discussion of ongoing work that one finds at a serious development conference. That does not mean that there was no development work on display, though. Arguably the most interesting talk was Kisoo Yu's discussion of the big.LITTLE switcher (originally written by Nicolas Pitre). Big.LITTLE is an ARM-based system-on-chip architecture that combines a number of slow, power-efficient processors with fast, power-hungry processors on the same chip. In this particular case, Kisoo discussed an upcoming Samsung Exynos processor combining four Cortex A7 processors with four Cortex A15's — yes, an eight-core SoC.
Big.LITTLE poses a number of interesting challenges for the kernel: how does one schedule tasks across the system to optimize both throughput and power consumption? Kisoo described two approaches, the first of which involves running Linux under a simple hypervisor that transparently switches the hardware from slow mode (running on all four A7's) to fast mode (all four A15's) without the kernel's participation or awareness. The alternative approach has the kernel itself explicitly managing the SoC as a four-processor system, switching each one independently between the fast and slow cores as if it were simply adjusting the CPU's clock frequency. Either way, a number of heuristics have been developed to try to determine the best time to make the switch from one to the other. This SoC offers a notable hardware feature that can quickly transfer relevant L2 cache entries from one core to another to speed the switching process, which can be done in 30µs or so.
Perhaps the most interesting takeaway from the talk is that we still don't really have a good idea for how to manage these systems. This SoC is a true eight-core processor; it would seem that an optimal approach would manage it as such rather than as a four-core system with a big "turbo" button. The fact that we are, thus far, unable to do so is not an indictment of the developers working on the task in any way; it is clearly a hard problem without much in the way of useful solutions in the literature. As is the case with many other hard operating system problems, the work being done now will get us closer to an understanding of the issues and the eventual development of better solutions.
One thing that became clear at the inaugural KLF is that Korea is supplying a growing number of sharp minds ready to work on problems like this, and that this trend looks set to continue. Energy abounds, as does, seemingly, a good sense of fun. Your editor would like to thank our hosts in Korea for putting on an engaging event, treating us so well, and even for inflicting "Gangnam style" K-pop music on us at the conference dinner. And, of course, thanks are due to the Linux Foundation for supporting your editor's travel to the event.
At times it can seem like protecting one's online privacy is a Sisyphean struggle. Even when the software industry listens to the concerns of privacy advocates, the site owners and secretive data-collectors who profit from pillaging private information are quick to find every loophole and work-around in existence to regain their access to profitable data. Such seems to be the case with the Do Not Track HTTP header (DNT), which has garnered support from browser vendors — plus a steady stream of assaults aimed at undermining it, courtesy of advertisers.
Although "opt out" mechanisms for web tracking have been discussed for years, the DNT HTTP header approach was first proposed by Mozilla's Mike Shaver. It has subsequently been developed under the stewardship of the World Wide Web Consortium's (W3C) Tracking Protection Working Group. According to the latest draft of the specification, DNT is an optional HTTP header field that can take either 0 or 1 as a value, where 1 indicates that the user prefers not to be tracked, and 0 indicates that the user prefers to allow tracking. The key issue, however, is that the header is intended to represent a user preference — which most interpret to mean a conscious choice on the user's part — and it must not be sent at all if the user has not expressed such a preference to the browser.
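The header mechanics are simple enough to sketch in a few lines. The example below is hypothetical code using Python's standard library (not any browser's actual implementation); it illustrates the key behavioral point: the header carries 1 or 0 only when the user has expressed a preference, and is omitted entirely otherwise.

```python
# Sketch of DNT header semantics (illustrative, not any browser's code).
import urllib.request

def build_request(url, dnt_preference=None):
    """Build an HTTP request that sends the DNT header only when the
    user has actually expressed a preference: 1 = do not track,
    0 = allow tracking, None = no preference (header omitted)."""
    req = urllib.request.Request(url)
    if dnt_preference is not None:
        req.add_header("DNT", str(dnt_preference))
    return req

# A user who opted out of tracking:
opt_out = build_request("http://example.com/", dnt_preference=1)
# urllib normalizes header names to "Dnt" internally
print(opt_out.get_header("Dnt"))      # prints: 1

# A user who never touched the setting sends no header at all:
undecided = build_request("http://example.com/")
print(undecided.has_header("Dnt"))    # prints: False
```

A compliant server branches on both the presence and the value of the header; the absence of the header is itself meaningful, signaling that no preference was expressed.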
Initially Mozilla was the only browser vendor behind DNT, but Opera added support in July in Opera 12, as did Apple a few weeks later in Safari 6. Google added support in Chromium on September 13. In all four browsers, the DNT setting must be manually enabled in the application preferences. Mozilla contended from quite early on that this is a critical facet of making DNT a workable solution. If DNT were enabled automatically or by default, it would no longer represent "a choice made by the person behind the keyboard," but one made by the browser vendor.
The decision was controversial — after all, reasoned critics, who in their right mind wants to be tracked? But Mozilla stood firm, and eventually the other browser makers followed suit. Until June 2012, that is, when Microsoft announced that Internet Explorer (IE) 10 (which is scheduled to ship with Windows 8) would present the DNT option as a check-box shown to the user during installation, with the do-not-track option selected by default.
But enabling DNT by default violates the specification, opponents argued, and strips it of its meaning. And if the DNT header does not reflect an actual user's decision, the argument goes, advertisers will be justified in ignoring it. Apache's Roy Fielding objected strongly enough that he committed a change that causes the web server to un-set the DNT header when it is sent by IE 10. Fielding is a member of the W3C Tracking Protection Working Group, and his log message for the commit said that "Apache does not tolerate deliberate abuse of open standards." He elaborated on that interpretation in the inevitable argument that followed on GitHub, calling Microsoft's decision broken because it violates the specification's requirement that the DNT header default to "unset." Apache, he said, "has no particular interest in what goes in the open standard -- only in that the protocol means what the WG says it means when the extra eight bytes are sent on the wire."
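Fielding's change amounted to only a couple of directives. A sketch of the approach, reconstructed from the description above (the exact user-agent pattern in the committed configuration may differ), using Apache's mod_setenvif and mod_headers:

```apache
# Flag requests whose User-Agent identifies IE 10, then strip the
# DNT header from those requests before anything downstream sees it.
BrowserMatch "MSIE 10.0;" bad_DNT
RequestHeader unset DNT env=bad_DNT
```

With this in place, the header from IE 10 simply never reaches applications or logs; requests from other browsers pass through untouched.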
Conspiracy theorists might suspect that Microsoft's decision is a subtle ploy to undermine DNT entirely to curry favor with advertisers and other user-tracking firms. If so, the advertising world is doing an excellent job of maintaining a cover story; the Association of National Advertisers (ANA) publicly criticized the decision in an open letter to Microsoft management.
Regardless of what happens on the browser and server fronts, DNT still relies on voluntary compliance on the part of site administrators and service providers — and, by extension, on compliance that matches what the user intends. The meaning of DNT might seem straightforward, but the people who make their money tracking users cannot be forced to agree. In September, Ed Bott at ZDNet reported that the Interactive Advertising Bureau (IAB) and the Digital Advertising Alliance (DAA) "devised their own interpretation" of DNT, under which they would continue to collect information, but would refrain from using that information to deliver targeted ads to the browser. Presumably that restraint lasts only for the duration of the browsing session in which DNT is sent.
Lest anyone propose a "Do Not Target Ads" HTTP header that IAB and DAA might conversely interpret as a reason to stop collecting tracking information, remember that nothing obligates advertisers or other information brokers to react to the header at all. Grant Gross at IDG said at least one site, a "tech-focused think tank" called the Information Technology and Innovation Foundation (ITIF), has unilaterally decided it will simply ignore the DNT header, and its site will report that fact to visitors.
Other members of the advertising business have embarked on their own campaigns to nip DNT in the bud. In June, the US Senate held hearings about tracking and DNT in particular. As the Electronic Frontier Foundation (EFF) observed, ANA representative Bob Liodice testified at the hearings that DNT would undermine cybersecurity, including "issues such as online sexual predators and identity theft." The Senate did not seem to buy Liodice's argument (Senator Jay Rockefeller, chairman of the Committee on Commerce, Science, and Transportation, declared the cybersecurity argument "a total red herring"), although the EFF noted that online tracking does raise important law enforcement questions in addition to its advertising angle.
Most recently, DNT critics gathered at the W3C Tracking Protection Working Group meeting in Amsterdam, where the Direct Marketing Association (DMA) proposed that an exception be added to the DNT specification for "marketing." The EFF blog entry about the meeting quotes the DMA representative as saying:
Such an "exception" would seem to cover the precise tracking scenario for which DNT is designed, and indeed other members of the working group fought back. Fielding accused DMA of "raising issues that you know quite well will not be adopted." The EFF views DMA's participation in the meeting as an attempt to undermine or derail the specification-writing process. That is a bit of a judgment call, but it is clear from the latest traffic on the working group's mailing list that DMA, DAA, and other advertising groups are not meshing well with the software industry representatives who typically account for the bulk of W3C participation. In recent weeks there have been multiple threads about redefining basic terms like "service provider" and "user agent" that indicate (at the very least) a culture clash.
On the plus side, there have been sites and web services that have voluntarily announced their intention to comply with DNT; Twitter is the highest-profile. But the specification is far from completion, and as recent events show, voluntary compliance will only take care of a subset of the data-collecting entities on the web today. In the GitHub comment linked to above, Fielding speculated that the long-term ploy of DNT advocates was to get widespread adoption, then to push for mandatory compliance through legislation. Whether that will happen is anyone's guess; the US Federal Trade Commission (FTC) has endorsed DNT, which in addition to the US Senate hearings might provide enough evidence to make the advertising industry wary.
Implementing a campaign of "good enough for most" self-regulation would be one path to avoiding such government oversight, and derailing or gutting the specification could be effective, too. At the moment, the advertising business seems to be pursuing both tactics. It is up to the W3C and privacy advocates to respond, but at least for the time being the only guaranteed way for users to safeguard their privacy remains the do-it-yourself approach: Tor, NoScript, Adblock Plus, and so on. A world where user-tracking is simply not an issue sounds nice — it just doesn't sound likely in the near-term.
Mozilla has now released version 16.0.1 of Firefox, fixing the security hole discovered October 10 in Firefox 16, as well as a few other incidental issues. The H has a brief recap of the situation, including availability of the corresponding update for other Mozilla products.
Package(s): cxf
CVE #(s): CVE-2012-2379 CVE-2012-2378 CVE-2012-3451
Created: October 12, 2012
Updated: October 17, 2012
From the Fedora advisory:
A flaw was found in the way Apache CXF verifies that XML elements were signed or encrypted by a particular Supporting Token. CXF checks to ensure these elements are signed or encrypted by a Supporting Token, but not whether the correct token is used. A remote attacker could use this flaw to transmit confidential information without the appropriate security, and potentially to circumvent access controls on web services exposed via CXF. (CVE-2012-2379)
A flaw was found in the way Apache CXF enforced child policies of WS-SecurityPolicy 1.1 on the client side. In certain circumstances, this could lead a client failing to sign or encrypt certain elements as directed by the security policy, leading to information disclosure and insecure transmission of information. (CVE-2012-2378)
Apache CXF is vulnerable to SOAPAction spoofing attacks under certain conditions. If web services are exposed via Apache CXF that use a unique SOAPAction for each service operation, then a remote attacker could perform SOAPAction spoofing to call a forbidden operation if it accepts the same parameters as an allowed operation. WS-Policy validation is performed against the operation being invoked, and an attack must pass validation to be successful. (CVE-2012-3451)
Created: October 15, 2012
Updated: December 9, 2013
Description: From the CVE entry:
dracut.sh in dracut, as used in Red Hat Enterprise Linux 6, Fedora 16 and 17, and possibly other products, creates initramfs images with world-readable permissions, which might allow local users to obtain sensitive information.
Created: October 16, 2012
Updated: April 8, 2013
Description: From the Mageia advisory:
Directory traversal vulnerability in html2ps before 1.0b7 allows remote attackers to read arbitrary files via directory traversal sequences in an SSI directive.
Package(s): java
CVE #(s): CVE-2012-3216 CVE-2012-4416 CVE-2012-5068 CVE-2012-5069 CVE-2012-5071 CVE-2012-5072 CVE-2012-5073 CVE-2012-5075 CVE-2012-5077 CVE-2012-5079 CVE-2012-5081 CVE-2012-5084 CVE-2012-5085 CVE-2012-5086 CVE-2012-5089
Created: October 17, 2012
Updated: December 3, 2012
Description: From the Red Hat advisory:
Multiple improper permission check issues were discovered in the Beans, Swing, and JMX components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. (CVE-2012-5086, CVE-2012-5084, CVE-2012-5089)
Multiple improper permission check issues were discovered in the Scripting, JMX, Concurrency, Libraries, and Security components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. (CVE-2012-5068, CVE-2012-5071, CVE-2012-5069, CVE-2012-5073, CVE-2012-5072)
It was discovered that java.util.ServiceLoader could create an instance of an incompatible class while performing provider lookup. An untrusted Java application or applet could use this flaw to bypass certain Java sandbox restrictions. (CVE-2012-5079)
It was discovered that the Java Secure Socket Extension (JSSE) SSL/TLS implementation did not properly handle handshake records containing an overly large data length value. An unauthenticated, remote attacker could possibly use this flaw to cause an SSL/TLS server to terminate with an exception. (CVE-2012-5081)
It was discovered that the JMX component in OpenJDK could perform certain actions in an insecure manner. An untrusted Java application or applet could possibly use this flaw to disclose sensitive information. (CVE-2012-5075)
A bug in the Java HotSpot Virtual Machine optimization code could cause it to not perform array initialization in certain cases. An untrusted Java application or applet could use this flaw to disclose portions of the virtual machine's memory. (CVE-2012-4416)
It was discovered that the SecureRandom class did not properly protect against the creation of multiple seeders. An untrusted Java application or applet could possibly use this flaw to disclose sensitive information. (CVE-2012-5077)
It was discovered that the java.io.FilePermission class exposed the hash code of the canonicalized path name. An untrusted Java application or applet could possibly use this flaw to determine certain system paths, such as the current working directory. (CVE-2012-3216)
This update disables Gopher protocol support in the java.net package by default. Gopher support can be enabled by setting the newly introduced property, "jdk.net.registerGopherProtocol", to true. (CVE-2012-5085)
Note: If the web browser plug-in provided by the icedtea-web package was installed, the issues exposed via Java applets could have been exploited without user interaction if a user visited a malicious website.
Package(s): java
CVE #(s): CVE-2012-5070 CVE-2012-5074 CVE-2012-5076 CVE-2012-5087 CVE-2012-5088
Created: October 17, 2012
Updated: November 21, 2012
Description: From the Red Hat advisory:
It was discovered that the JMX component in OpenJDK could perform certain actions in an insecure manner. An untrusted Java application or applet could possibly use these flaws to disclose sensitive information. (CVE-2012-5070, CVE-2012-5075)
The default Java security properties configuration did not restrict access to certain com.sun.org.glassfish packages. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. This update lists those packages as restricted. (CVE-2012-5076, CVE-2012-5074)
Multiple improper permission check issues were discovered in the Beans, Libraries, Swing, and JMX components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. (CVE-2012-5086, CVE-2012-5087, CVE-2012-5088, CVE-2012-5084, CVE-2012-5089)
Created: October 11, 2012
Updated: November 20, 2012
From the Red Hat advisory:
A flaw was found in libvirtd's RPC call handling. An attacker able to establish a read-only connection to libvirtd could use this flaw to crash libvirtd by sending an RPC message that has an event as the RPC number, or an RPC number that falls into a gap in the RPC dispatch table. (CVE-2012-4423)
Package(s): firefox, thunderbird, seamonkey, xulrunner
CVE #(s): CVE-2012-4193
Created: October 15, 2012
Updated: October 17, 2012
Description: From the Red Hat advisory:
A flaw was found in the way XULRunner handled security wrappers. A web page containing malicious content could possibly cause an application linked against XULRunner (such as Mozilla Firefox) to execute arbitrary code with the privileges of the user running the application.
Package(s): firefox, thunderbird, seamonkey
CVE #(s): CVE-2012-4191 CVE-2012-4192
Created: October 15, 2012
Updated: October 17, 2012
Description: From the CVE entries:
The mozilla::net::FailDelayManager::Lookup function in the WebSockets implementation in Mozilla Firefox before 16.0.1, Thunderbird before 16.0.1, and SeaMonkey before 2.13.1 allows remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unspecified vectors. (CVE-2012-4191)
Mozilla Firefox 16.0, Thunderbird 16.0, and SeaMonkey 2.13 allow remote attackers to bypass the Same Origin Policy and read the properties of a Location object via a crafted web site, a related issue to CVE-2012-4193. (CVE-2012-4192)
Package(s): firefox
CVE #(s): CVE-2012-3977 CVE-2012-3987
Created: October 17, 2012
Updated: October 17, 2012
Description: From the SUSE advisory:
CVE-2012-3977: Security researchers Thai Duong and Juliano Rizzo reported that SPDY's request header compression leads to information leakage, which can allow the extraction of private data such as session cookies, even over an encrypted SSL connection. (This does not affect Firefox 10 as it does not feature the SPDY extension. It was silently fixed for Firefox 15.)
CVE-2012-3987: Security researcher Warren He reported that when a page is transitioned into Reader Mode in Firefox for Android, the resulting page has chrome privileges and its content is not thoroughly sanitized. A successful attack requires user enabling of reader mode for a malicious page, which could then perform an attack similar to cross-site scripting (XSS) to gain the privileges allowed to Firefox on an Android device. This has been fixed by changing the Reader Mode page into an unprivileged page.
Created: October 11, 2012
Updated: April 8, 2014
From the SUSE Bugzilla entry:
A vulnerability has been reported in OptiPNG, which can be exploited by malicious people to potentially compromise a user's system.
The vulnerability is caused due to a use-after-free error related to the palette reduction functionality. No further information is currently available.
Successful exploitation may allow execution of arbitrary code.
Created: October 15, 2012
Updated: October 22, 2012
Description: From the CVE entry:
Cross-site scripting (XSS) vulnerability in the HTML-Template-Pro module before 0.9507 for Perl allows remote attackers to inject arbitrary web script or HTML via template parameters, related to improper handling of > (greater than) and < (less than) characters.
Created: October 16, 2012
Updated: October 29, 2012
Description: From the phpMyAdmin advisories:
Multiple XSS due to unescaped HTML output in Trigger, Procedure and Event pages. When creating/modifying a trigger, event or procedure with a crafted name, it is possible to trigger an XSS.
Created: October 15, 2012
Updated: October 17, 2012
Description: From the qt advisory:
A security vulnerability has been discovered in the SSL/TLS protocol, which affects connections using compression.
All versions of TLS are believed to be affected. To address this, Qt will disable TLS compression by default.
If the attacker can insert data into the SSL connection, then by looking at the length of the compressed data it is possible to determine if the inserted data matches secret data or not.
Created: October 11, 2012
Updated: October 17, 2012
From the Mageia advisory:
Cross-site scripting (XSS) vulnerability in Roundcube Webmail 0.8.1 and earlier allows remote attackers to inject arbitrary web script or HTML via the signature in an email (CVE-2012-4668).
Created: October 11, 2012
Updated: March 8, 2013
From the Ubuntu advisory:
Shugo Maeda and Vit Ondruch discovered that Ruby incorrectly allowed untainted strings to be modified in protective safe levels. An attacker could use this flaw to bypass intended access restrictions. (CVE-2012-4466, CVE-2012-4481)
Package(s): ruby1.9.1
CVE #(s): CVE-2012-4464 CVE-2012-4466
Created: October 11, 2012
Updated: November 5, 2012
From the Ubuntu advisory:
Tyler Hicks and Shugo Maeda discovered that Ruby incorrectly allowed untainted strings to be modified in protective safe levels. An attacker could use this flaw to bypass intended access restrictions. (CVE-2012-4464, CVE-2012-4466)
Created: October 11, 2012
Updated: December 3, 2012
From the SUSE Bugzilla entry:
The HSRP dissector could go into an infinite loop. wnpa-sec-2012-26 CVE-2012-5237
The PPP dissector could abort. wnpa-sec-2012-27 CVE-2012-5238
Martin Wilck discovered an infinite loop in the DRDA dissector. wnpa-sec-2012-28 CVE-2012-5239 CVE-2012-3548 (see bnc#778000)
Laurent Butti discovered a buffer overflow in the LDP dissector. wnpa-sec-2012-29 CVE-2012-5240
Page editor: Michael Kerrisk
Brief items

3.7-rc1 was released on October 14. See the separate article, below, for a summary of the final items added during the 3.7 merge window.
Two solutions were contemplated for disabling the module: maintaining a kind of global status in the crypto API that makes it non-responsive in case of an integrity/self-test error, or simply terminating the entire kernel. As the former would also lead to a kernel failure eventually (many parts of the kernel depend on the crypto API), the latter option was chosen for the implementation.
Should I just hope the sender realizes their foolishness on their own and give them N hours to rescind the statement and fix up their insane patch and resend it, thereby giving them a grace period? If so, what is the proper value for N?
Or is it fair game to let loose and channel up the Torvalds-like daemons within my keyboard, with the hope that it would actually do some good and they would learn from their mistakes?
Kernel development news
Interestingly, Linus expressed some skepticism about some of this cycle's work in the 3.7-rc1 announcement. For example, the discussion on the 64-bit ARM patch set concluded some time ago, but Linus came in with a late opinion of his own:
He also expressed some grumpiness about the user-space API header file split — an enormous set of patches that is only partially merged for 3.7. Header file cleanups, he says, are just too much pain for the benefit that results, so he will not consider any more of them in the future.
Grumbles notwithstanding, he pulled all of this work — and much more — for 3.7. The user-visible changes merged since last week's summary include:
Changes visible to kernel developers include:
At this point it is time to perform the final stabilization work on all these changes. If things go according to the usual schedule, that should result in the final 3.7 release sometime in early December.
Other than the merging of the server-side component of TCP Fast Open, one of the few user-space API changes to go into the just-closed 3.7 merge window is the addition of a new EPOLL_CTL_DISABLE operation for the epoll_ctl() system call. This operation is interesting in two ways. First, it illustrates the sometimes unforeseen complexities of dealing with multithreaded applications; that examination is the subject of this article. Second, its addition highlights some common problems in the design of the APIs that the kernel presents to user space. (To be clear: EPOLL_CTL_DISABLE is the fix to a past design problem, not a design problem itself.) Those design problems will be the subject of a follow-on article next week.
Understanding the need for EPOLL_CTL_DISABLE requires an understanding of several features of the epoll API. For those who are unfamiliar with epoll, we begin with a high-level picture of how the API works. We then look at the problem that EPOLL_CTL_DISABLE is designed to solve, and how it solves that problem.
The (Linux-specific) epoll API allows an application to monitor multiple file descriptors in order to determine which of the descriptors are ready to perform I/O. The API was designed as a more efficient replacement for the traditional select() and poll() system calls. Roughly speaking, the performance of those older APIs scales linearly with the number of file descriptors being monitored. That behavior makes select() and poll() poorly suited for modern network applications that may handle thousands of file descriptors simultaneously.
The poor performance of select() and poll() is an inescapable consequence of their design. For each monitoring operation, both system calls require the application to give the kernel a complete list of all of the file descriptors that are of interest. And on each call, the kernel must re-examine the state of all of those descriptors and then pass a data structure back to the application that describes the readiness of the descriptors.
The underlying problem of the older APIs is that they don't allow an application to inform the kernel about its ongoing interest in a (typically unchanging) set of file descriptors. If the kernel had that information, then, as each file descriptor became ready, it could record the fact in preparation for the next request by the application for the set of ready file descriptors. The epoll API allows exactly that approach, by splitting the monitoring API up across three system calls:

- epoll_create() creates a new epoll instance, which includes an initially empty interest list, and returns a file descriptor referring to that instance;
- epoll_ctl() adds file descriptors to (and modifies and removes them from) the interest list of an epoll instance;
- epoll_wait() waits until one or more of the file descriptors in the interest list becomes ready, and returns information about those descriptors.
Schematically, the epoll API operates as shown in the following diagram:
Because the kernel is able to maintain internal state about the set of file descriptors in which the application is interested, epoll_wait() is much more efficient than select() and poll(). Roughly speaking, its performance scales according to the number of ready file descriptors, rather than the total number of file descriptors being monitored.
The author of the patch that implements EPOLL_CTL_DISABLE, Paton Lewis, is not a regular kernel hacker. Rather, he's a developer with a particular user-space itch, and it would seem that a kernel change is the only way of scratching that itch. In the description accompanying the first iteration of his patch, Paton began with the following observation:
It is not currently possible to reliably delete epoll items when using the same epoll set from multiple threads. After calling epoll_ctl with EPOLL_CTL_DEL, another thread might still be executing code related to an event for that epoll item (in response to epoll_wait). Therefore the deleting thread does not know when it is safe to delete resources pertaining to the associated epoll item because another thread might be using those resources.
The deleting thread could wait an arbitrary amount of time after calling epoll_ctl with EPOLL_CTL_DEL and before deleting the item, but this is inefficient and could result in the destruction of resources before another thread is done handling an event returned by epoll_wait.
The fact that the kernel records internal state is the source of a complication for multithreaded applications. The complication arises from the fact that applications may also want to maintain state information about file descriptors. One possible reason for doing this is to prevent file descriptor starvation, the phenomenon that can occur when, for example, an application determines that a file descriptor has data available for reading and then attempts to read all of the available data. It could happen that there is a very large amount of data available (for example, another application may be continuously writing data on the other end of a socket connection). Consequently, the reading application would be tied up for a long period; meanwhile, it does not service I/O events on the other file descriptors—those descriptors are starved of service by the application.
The solution to file descriptor starvation is for the application to maintain a user-space data structure that caches the readiness of each of the file descriptors that it is monitoring. Whenever epoll_wait() informs the application that a file descriptor is ready, then, instead of performing as much I/O as possible on the descriptor, the application makes a record in its cache that the file descriptor is ready. The application logic then takes the form of a loop that (a) periodically calls epoll_wait() and (b) performs a limited amount of I/O on the file descriptors that are marked as ready in the user-space cache. (When the application finds that I/O is no longer possible on one of the file descriptors, then it can mark that descriptor as not ready in the cache.)
Thus, we have a scenario where both the kernel and a user-space application are maintaining state information about the same resources. This can potentially lead to race conditions when competing threads in a multithreaded application want to update state information in both places. The most fundamental piece of state information maintained in both places is "existence".
For example, suppose that an application thread determines that it is no longer necessary to monitor a file descriptor. The thread would first check to see whether the file descriptor is marked as ready in the user-space cache (i.e., there may still be some outstanding I/O to perform), and then, if the file descriptor is not ready, the thread would delete the file descriptor from the user-space cache and from the kernel's epoll interest list using the epoll_ctl(EPOLL_CTL_DEL) operation. However, these steps could fall afoul of races in scenarios such as the following, involving two threads operating on file descriptor 9:
1. Thread 1: determines from the user-space cache that descriptor 9 is not ready.
2. Thread 2: calls epoll_wait(); the call indicates descriptor 9 as ready.
3. Thread 2: records descriptor 9 as being ready inside the user-space cache so that I/O can later be performed.
4. Thread 1: deletes descriptor 9 from the user-space cache.
5. Thread 1: deletes descriptor 9 from the kernel's epoll interest list using epoll_ctl(EPOLL_CTL_DEL).
Following the above scenario, some data will be lost. Other scenarios could lead to a corrupted cache or an application crash.
No use of (per-file-descriptor) mutexes can eliminate the sorts of races described here, short of protecting the calls to epoll_wait() with a (global) mutex, which has the effect of destroying concurrency. (If one thread is blocked in an epoll_wait() call, then any other thread that tries to acquire the corresponding mutex will also block.)
Paton's solution to this problem is to extend the epoll API with a new operation that atomically prevents other threads from receiving further indications that a file descriptor is ready, while at the same time informing the caller whether another thread has "recently" been told the file descriptor is ready. The new operation relies on some of the inner workings of the epoll API.
When adding (EPOLL_CTL_ADD) or modifying (EPOLL_CTL_MOD) a file descriptor in the interest list, the application specifies a mask of I/O events that are of interest for the descriptor. For example, the mask might include both EPOLLIN and EPOLLOUT, if the application wants to know when the file descriptor becomes either readable or writable. In addition, the kernel implicitly adds two further flags to the events mask in the interest list: EPOLLERR, which requests monitoring for error conditions, and EPOLLHUP, which requests monitoring for a "hangup" (e.g., we are monitoring the read end of a pipe, and the write end is closed). When a file descriptor becomes ready, epoll_wait() returns a mask that contains all of the requested events for which the file descriptor is ready. For example, if an application requests monitoring of the read end of a pipe using EPOLLIN and the write end of the pipe is closed, then epoll_wait() will return an events mask that includes both EPOLLIN and EPOLLHUP.
As well as the flags that can be used to monitor file descriptors for various I/O events, there are a few "operational flags"—flags that modify the semantics of the monitoring operation itself. One of these is EPOLLONESHOT. If this flag is specified in the events mask for a file descriptor, then, once the file descriptor becomes ready and is returned by a call to epoll_wait(), it is disabled from further monitoring (but remains in the interest list). If the application is interested in monitoring the file descriptor once more, then it must re-enable the file descriptor using the epoll_ctl(EPOLL_CTL_MOD) operation.
Per-descriptor events mask recorded in an epoll interest list:

    Operational flags:  EPOLLONESHOT, EPOLLET, ...
    I/O event flags:    EPOLLIN, EPOLLOUT, EPOLLHUP, EPOLLERR, ...
The implementation of EPOLLONESHOT relies on a trick. If this flag is set, then, when the file descriptor is reported as being ready via epoll_wait(), the kernel clears all of the "non-operational flags" (i.e., the I/O event flags) in the events mask for that file descriptor. This serves as a later cue to the kernel that it should not track I/O events for this file descriptor.
By now, we finally have enough details to understand Paton's extension to the epoll API—the epoll_ctl(EPOLL_CTL_DISABLE) operation—that allows multithreaded applications to avoid the kind of races described above. To successfully use this extension requires the following:
In addition, calls to epoll_ctl(EPOLL_CTL_DISABLE) and accesses to the user-space cache must be suitably protected with per-file-descriptor mutexes. We won't go into details here, but the second version of Paton's patch adds a sample application to the kernel source tree (under tools/testing/selftests/epoll/test_epoll.c) that demonstrates the principles.
The new epoll operation is employed via the following call:
    epoll_ctl(epfd, EPOLL_CTL_DISABLE, fd, NULL);

epfd is a file descriptor referring to an epoll instance. fd is the file descriptor in the interest list that is to be disabled. The semantics of this operation handle two cases:
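Our reading of the deletion protocol, sketched in C-flavored pseudocode (lock(), unlock(), and delete_cache_entry() are hypothetical per-descriptor helpers, and the operation itself exists only in 3.7-rc kernels, so this is illustrative rather than runnable):

```c
lock(fd);                            /* per-descriptor mutex */
if (epoll_ctl(epfd, EPOLL_CTL_DISABLE, fd, NULL) == 0) {
    /* The descriptor was still enabled, so no other thread has recently
     * been told it is ready; this thread may safely delete it. */
    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
    delete_cache_entry(fd);
} else {
    /* errno == EBUSY: another thread was recently informed via
     * epoll_wait() that fd is ready; that thread handles the deletion. */
}
unlock(fd);
```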
Thus, we see that with a moderate amount of effort, and a little help from a new kernel interface, a race can be avoided when deleting file descriptors in multithreaded applications that wish to avoid file descriptor starvation.
There was relatively little comment on the first iteration of Paton's patch. The only substantive comments came from Christof Meerwald; in response to these, Paton created the second version of his patch. That version received no comments, and was incorporated into 3.7-rc1. It would be nice to think that the relative paucity of comments reflects silent agreement that Paton's approach is correct. However, one is left with the nagging feeling that in fact few people have reviewed the patch, which leaves open the question: is this the best solution to the problem?
Although EPOLL_CTL_DISABLE solves the problem, the solution is neither intuitive nor easy to use. The main reason for this is that EPOLL_CTL_DISABLE is a bolt-on hack to the epoll API that satisfies the requirement (often repeated by Linus Torvalds) that existing user-space applications must not be broken by making a kernel ABI change. Within that constraint, EPOLL_CTL_DISABLE may be the best solution to the problem. However, it seems likely that a better solution would have been possible if it had been incorporated during the original design of the epoll API. Next week's follow-on article will consider whether a better initial solution could have been found, and also why it might not be possible to find a better solution within the constraints of the current API.
Finally, it's worth noting that the EPOLL_CTL_DISABLE feature is not yet cast in stone, although it will become so in about two months, when Linux 3.7 is released. In the meantime, if someone comes up with a better idea to solve the problem, then the existing approach could be modified or replaced.
In the announcement for the 3.6.1-rt1 patch set, Thomas Gleixner described software interrupts this way:
The softirq mechanism is meant to handle processing that is almost — but not quite — as important as the handling of hardware interrupts. Softirqs run at a high priority (though with an interesting exception, described below), but with hardware interrupts enabled. They thus will normally preempt any work except the response to a "real" hardware interrupt.
Once upon a time, there were 32 hardwired software interrupt vectors, one assigned to each device driver or related task. Drivers have, for the most part, been detached from software interrupts for a long time — they still use softirqs, but that access has been laundered through intermediate APIs like tasklets and timers. In current kernels there are ten softirq vectors defined: two for tasklet processing, two for networking, two for the block layer, two for timers, and one each for the scheduler and read-copy-update processing. The kernel maintains a per-CPU bitmask indicating which softirqs need processing at any given time. So, for example, when a kernel subsystem calls tasklet_schedule(), the TASKLET_SOFTIRQ bit is set on the corresponding CPU and, when softirqs are processed, the tasklet will be run.
There are two places where software interrupts can "fire" and preempt the current thread. One of them is at the end of the processing for a hardware interrupt; it is common for interrupt handlers to raise softirqs, so it makes sense (for latency and optimal cache use) to process them as soon as hardware interrupts can be re-enabled. The other possibility is anytime that kernel code re-enables softirq processing (via a call to functions like local_bh_enable() or spin_unlock_bh()). The end result is that the accumulated softirq work (which can be substantial) is executed in the context of whichever process happens to be running at the wrong time; that is the "randomly chosen victim" aspect that Thomas was talking about.
Readers who have looked at the process mix on their systems may be wondering where the ksoftirqd processes fit into the picture. These processes exist to offload softirq processing when the load gets too heavy. If the regular, inline softirq processing code loops ten times and still finds more softirqs to process (because they continue to be raised), it will wake the appropriate ksoftirqd process (there is one per CPU) and exit; that process will eventually be scheduled and pick up running the softirq handlers. Ksoftirqd will also be poked if a softirq is raised outside of (hardware or software) interrupt context; that is necessary because, otherwise, an arbitrary amount of time might pass before softirqs are processed again. In older kernels, the ksoftirqd processes ran at the lowest possible priority, meaning that softirq processing was, depending on where it was run, either the highest-priority or the lowest-priority work on the system. Since 2.6.23, ksoftirqd runs at normal user-level priority by default.
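The "loop ten times, then give up" logic can be sketched as follows. Again, this is an illustration of the structure described above, not the kernel's actual code; the variable names are invented, and the "handler" is modeled as simply re-raising its own softirq some number of times:

```c
/* Bail-out threshold, matching the behavior described in the text. */
#define MAX_SOFTIRQ_RESTART 10

static unsigned int pending;       /* stand-in for the per-CPU bitmask  */
static int handler_reraises;       /* how often the "handler" re-raises */
static int ksoftirqd_wakeups;      /* stand-in for waking ksoftirqd     */

static void do_softirq_inline(void)
{
    int restart = MAX_SOFTIRQ_RESTART;

    while (pending) {
        pending = 0;
        /* "Run" the handlers; a busy handler may raise its softirq
         * again, creating more work. */
        if (handler_reraises > 0) {
            handler_reraises--;
            pending = 1;
        }
        /* Still work left after the allowed passes? Defer the rest
         * to the per-CPU ksoftirqd thread and return. */
        if (pending && --restart == 0) {
            ksoftirqd_wakeups++;
            return;
        }
    }
}
```

Under light load the loop drains everything inline; under sustained load it gives up after ten passes, keeping softirq storms from monopolizing whatever context they happen to run in.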
On normal systems, the softirq mechanism works well enough that there has not been much motivation to change it, though, as described in "The new visibility of RCU processing," read-copy-update work has been moved into its own helper threads for the 3.7 kernel. In the realtime world, though, the concept of forcing arbitrary processes to do random work tends to be unpopular, so the realtime patches have traditionally pushed all softirq processing into separate threads, each with its own priority. That allowed, for example, the priority of network softirq handling to be raised on systems where networking needed realtime response; conversely, it could be lowered on systems where response to network events was less critical.
Starting with the 3.0 realtime patch set, though, that capability went away. It worked less well with the new approach to per-CPU data adopted then, and, as Thomas noted, the per-softirq threads posed configuration problems of their own.
So, in 3.0, softirq handling looked very similar to how things are done in the mainline kernel. That improved the code and increased performance on untuned systems (by eliminating the context switch to the softirq thread), but took away the ability to finely tweak things for those who were inclined to do so. And realtime developers tend to be highly inclined to do just that. The result, naturally, is that some users complained about the changes.
In response, in 3.6.1-rt1, the handling of softirqs has changed again. Now, when a thread raises a softirq, the specific interrupt in question (network receive processing, say) is remembered by the kernel. As soon as the thread exits the context where software interrupts are disabled, that one softirq (and no others) will be run. That has the effect of minimizing softirq latency (since softirqs are run as soon as possible); just as importantly, it also ties processing of softirqs to the processes that generate them. A process raising networking softirqs will not be bogged down processing some other process's timers. That keeps the work local, avoids nondeterministic behavior caused by running another process's softirqs, and causes softirq processing to naturally run with the priority of the process creating the work in the first place.
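The new scheme can be sketched like this. The names (raise_softirq_rt(), local_bh_enable_rt()) are invented for illustration; the point is the structure: the raising thread records its own softirqs and services exactly those, and no others, when it leaves the softirq-disabled region:

```c
#define NR_SOFTIRQS 10

static unsigned int raised_by_this_thread;   /* per thread, not per CPU */
static int vector_ran[NR_SOFTIRQS];

/* Called with softirqs disabled: just remember which vector was raised. */
static void raise_softirq_rt(int nr)
{
    raised_by_this_thread |= 1U << nr;
}

/* On leaving the softirq-disabled section, service only this thread's
 * own raised vectors, running at this thread's priority. */
static void local_bh_enable_rt(void)
{
    unsigned int pending = raised_by_this_thread;

    raised_by_this_thread = 0;
    for (int nr = 0; nr < NR_SOFTIRQS; nr++)
        if (pending & (1U << nr))
            vector_ran[nr] = 1;   /* "run" the handler */
}
```

Because the pending set is tracked per thread rather than per CPU, a process raising network softirqs never ends up running some other process's timer work.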
There is an exception, of course: softirqs raised in hardware interrupt context cannot be handled in this way. There is no general way to associate a hardware interrupt with a specific thread, so it is not possible to force the responsible thread to do the necessary processing. The answer in this case is to just hand those softirqs to the ksoftirqd process and be done with it.
A logical next step, hinted at by Thomas, is to move from an environment where all softirqs are disabled to one where only specific softirqs are. Most code that disables softirq handling is only concerned with one specific handler; all the others could be allowed to run as usual. Going further, he adds: "the nicest solution would be to get rid of them completely." The elimination of the softirq mechanism has been on the "todo" list for a long time, but nobody has, yet, felt the pain strongly enough to actually do that work.
The nature of the realtime patch set has often been that its users feel the pain of mainline kernel shortcomings before the rest of us do. That has caused a great many mainline fixes and improvements to come from the realtime community. Perhaps that will eventually happen again for softirqs. For the time being, though, realtime users have an improved softirq mechanism that should give the desired results without the need for difficult low-level tuning. Naturally, Thomas is looking for people to test this change and report back on how well it works with their workloads.
Page editor: Jonathan Corbet
Creating a distribution for anonymity on the internet has its challenges. But it's important, especially for those living under repressive regimes. Getting the details right is clearly an overriding concern, which is why distributions of this kind tend to turn to Tor to provide that anonymity. But, Tor alone does not necessarily insulate users from disclosing personally identifiable information.
We looked at The Amnesic Incognito Live System (Tails)—a Tor-based live distribution—back in April 2011. But, regular applications or malware on a Tails system can potentially leak some information (e.g. IP address) that might be used to make a link between the user and their internet activity. The new Whonix distribution, which released an alpha version on October 9, uses virtualization to isolate the Tor gateway from the rest of the system, in part to eliminate those kinds of leaks.
Whonix is based on Debian and VirtualBox. It creates two separate virtual machines (VMs), one that runs all of the applications, and another that acts as a Tor gateway. All of the network traffic from the application VM (which is called the Whonix-Workstation) is routed through the Whonix-Gateway VM. That means the only network access available to applications is anonymized by Tor.
That setup has a number of benefits. For one, malware running on the Whonix-Workstation has no visibility into the actual configuration of the underlying system, so things like IP address, MAC address, hardware serial numbers, and the like, are all hidden. In addition, Whonix can be used in a physically isolated way, where the Workstation and Gateway run on two separate machines. It isn't only Linux that can be protected with Whonix, either, as Windows or other operating systems can be installed as the Whonix-Workstation.
The iptables rules on the workstation redirect all traffic to the gateway and disallow any local network connections. In addition, the firewall on the gateway fails "closed", disallowing any connections if Tor fails. Whonix also configures the system and various applications to reduce or eliminate information leaks. That includes using UTC for the time zone, having the same desktop resolution, color depth, and installed fonts on all installations, and setting the same virtual MAC address on all workstations. The user on Whonix is "user", and applications like GPG are configured to not leak operating system version information.
As envisioned, Whonix is a framework that is "agnostic about everything", including using alternatives for the anonymized network (e.g. JonDo, freenet), virtualization mechanism (e.g. KVM, Xen, VMWare), and host and guest operating systems (e.g. Windows, *BSD). Any of those pieces can be swapped out "with some development effort", but the developers are concentrating on the Debian/VirtualBox/Tor combination, at least currently.
Isolating applications in a single VM does not protect against all anonymity-piercing attacks. Malware can (and does) send the contents of files to remote hosts, which can, obviously, provide personally identifiable information. The Whonix documentation suggests using multiple workstation VMs, one for each type of activity. That idea is, in some ways, similar to the concept behind Qubes, another virtualization-based security-oriented operating system.
The security of Whonix obviously depends on its constituent parts, including the Linux kernel, VirtualBox, and Tor itself, but it also depends on how the system has been put together. It is perhaps not a surprise that the developer behind Whonix is pseudonymous, "adrelanos", but he or she seems keenly aware that vetting of Whonix is required before users can potentially put their lives at risk by using it. The release announcement says: "I hope skilled people look into the concept and implementation and fail to find anonymity related bugs." As with most (all?) projects, Whonix is also looking for more developers to work on it.
The project does come with an extensive Security document that covers the technology behind Whonix, its advantages and disadvantages, threat model, best practices, and so on. It also has an in-depth comparison of Whonix with Tails and the Tor Browser Bundle, which is a browser configured to use Tor and to avoid leaking identifiable information. Whonix is an ambitious project that overlaps with Tails to some extent (though there is an extensive justification for having separate projects), but the projects do collaborate, which bodes well for both.
openSUSE: "However, the community Evergreen team plans to provide ongoing maintenance for openSUSE 11.4. More details on this will be published when they are known."
Newsletters and articles of interest
Page editor: Rebecca Sobol
During the first few years of the 21st century, there was a great deal of discussion concerning the state of readiness of GNU/Linux for the mainstream desktop and how it could be furthered. An article, published in LWN almost a decade ago, is typical of the period. Today, with Linux happily ticking away on many end-user desktops and in many schools and libraries, one can no longer doubt that, though world domination may yet be a long way off, presence certainly has been achieved.
This achievement might well in turn lead to even greater recognition and could conceivably take the open desktop from the average user's computer to deployment in large, non-technically oriented corporations and governmental institutions. However, such a possibility brings up a question which may very likely have eluded many key players in the various free desktop communities: Are the various environments on offer as accessible as they are appealing and functional?
In the computing world, accessibility generally refers to the concept of allowing as wide a range of users to interact with a system as possible, either through initial design or through hardware or software palliatives, generally referred to as assistive technology. The special needs of users can vary greatly, but, in general, can be categorized as physical, perceptual, and cognitive. It follows that, in order for a system to be accessible, it must be capable of adapting and catering to such needs, which might imply as simple a feature as the ability to customize the blinking rate of a cursor in order to avoid triggering epileptic seizures or as complex as offering a fully voice-operated system to provide a working environment to a blind person also lacking the use of her hands.
Creating a system capable of accommodating even a subset of users requiring accessibility features is therefore a vast undertaking. This, however, may well be a task which the free software community needs to address seriously, not merely for the good of its users, but to ensure its credibility as a viable alternative to mainstream, proprietary platforms as well.
In the early 1990s, many governments introduced legislation seeking to protect the rights of people living with disabilities; the "Americans with Disabilities Act" (ADA) in the US and the "Disability Discrimination Act" (DDA, since replaced by the "Equality Act 2010") in the UK are typical outcomes of these efforts. One of the important issues these laws were trying to address was that of discrimination in the workplace and the right to equal employment opportunities for disabled people: the wording of Titles I and IV of the ADA, for instance, reflects such an attempt.
These efforts were a vast step forward, but they also came at a time when the workplace was about to be drastically transformed by the rise of the internet and the desktop computer. The laws enacted still implicitly required employers to provide accommodations to their workers in the fulfillment of their duties, regardless of whether those duties necessitated the use of a computer or not. Sadly, this was not made very clear to either employer or employee and could depend upon one's interpretation of the text: see, for example, this out-law.com article discussing the relevance of the DDA to networking and computing in the British workplace and some of the areas left undefined.
Clearly then, legislation needed to catch up with this new situation and specifically address the requirement for accessible computing and information in a working environment. Perhaps the first to react to that conclusion was the US Congress, which responded with its adoption of the Section 508 amendment to the "Rehabilitation Act" in 1998; it essentially requires governmental agencies to provide accessible electronic environments to their employees and offers guidelines to that effect. This amendment, directly or coincidentally, seems to have set the tone for similar policies and laws to be enacted or amended in the 21st century.
Take, for instance, the Canadian federal government's "Policy on the Provision of Accommodation for Employees with Disabilities", which seems to be an echo of Section 508, or Germany's far more extensive "Behindertengleichstellungsgesetz", which makes "barrier-free information" one of its key concerns. More recently, in the Canadian province of Ontario, the government adopted the "Accessibility for Ontarians with Disabilities Act". This is an interesting piece of legislation insofar as it devotes a great deal of attention to equal access to information and communications in private and public employment as well as education, and refers specifically to software and self-service kiosks. Such laws, whether in North America or Europe, whether they apply solely to the public sector or to all organizations, do form a clear trend, and it is to be expected that more and more legislatures will follow suit, either at the local or national level.
The legal obligation for an employer to provide an accessible platform affects all software, free or not, of course; unfortunately, free software platforms like GNU/Linux face an inherent disadvantage with regard to accessibility: the lack of third-party proprietary solutions. None of the mainstream commercial platforms had much stock accessibility in the beginning (some still do not), but all of them almost immediately benefited from third-party offerings to bridge these gaps.
As we now know from experience, proprietary software on open desktops is scarce; commercial developers are difficult to entice and their reception by the community, should they take the plunge, can be rather mixed. The practical upshot of this is that it is highly unlikely that any assistive technology software vendor will step forward to fill the accessibility gaps on the open desktop. That leaves the responsibility to the community and associated commercial interests. Failure to provide adequate and easily integrated accessibility, however, could very well one day lead to a disaster scenario. An early convert to the free desktop could be fined or forced to provide a more accessible, commercial platform, thereby seriously undermining the credibility of free software as a worthwhile alternative in the workplace.
The next logical question is, "How are we doing and how far do we still need to go to achieve standards-compliant accessibility on the open desktop?" In some areas, the progress has been very positive, whereas others seem to be experiencing difficulty coalescing into a meaningful movement.
Visual accessibility on the Linux console has now been adequate for over a decade, with long-standing projects, such as Speakup, Emacspeak, and Brltty, providing advanced screen reviewing functionalities through braille or speech. Reasonably good text-to-speech (TTS) processing has also been available for some time through such free software synthesizers as Festival and Espeak. This means that, when the time came to develop the Orca screen-reader for the GNOME desktop, well-tested output mechanisms already existed and could easily be integrated, allowing developers to focus on interface-related accessibility.
Orca itself has been gaining in stability and functionality steadily over the last few years, making critical applications, like Firefox and the LibreOffice suite, functionally available to the blind and visually impaired. Recently, as part of an accessibility push in GNOME, many bugs and shortcomings of Orca have received some attention, and the underlying accessibility framework and libraries it employs, ATK/AT-SPI, have become fully integrated in GNOME 3.6 as formal dependencies. This is very positive news, according to Joanmarie Diggs, the main Orca developer.
GNOME is not the only desktop environment accessible through Orca; there have been some efforts in other quarters, with the inclusion of preliminary accessibility support through AT-SPI in version 4.10 of Xfce4 and the early development of a Qt AT-SPI bridge for KDE.
This is all very good news for visual accessibility, but weak areas remain. There is no accessible PDF reader for the open desktop, for instance, and the accuracy of optical character recognition (OCR) software is improving at a very sedate pace. Yet these would be crucial applications to a visually impaired person in virtually any modern working environment.
There also have been recent examples of decisions by distributors which can affect out-of-the-box accessibility in a negative manner, such as the likely decision by Debian to make Xfce4 its default desktop environment in Wheezy, or the announcement by the Ubuntu team that the historically more accessible Unity 2D desktop will no longer be distributed as of release 12.10. The bulk of the recent accessibility improvements to Xfce4 were introduced in version 4.10; however, Wheezy will be shipping the older and virtually inaccessible 4.8.1 release. As for Unity 3D, Luke Yelavich, an Ubuntu accessibility developer, made it clear that he does not expect it to be as accessible as Unity 2D until the next LTS release. While a more accessible environment can usually be installed with reasonable ease, such decisions could result in a poor first impression for an inexperienced user and a wrong assessment of the level of accessibility available on the platform.
Such complaints are minor, however, when comparing the state of visual accessibility with that of physical palliatives; here, the results of GNOME's accessibility efforts seem to be rather mixed. Components key to accessibility for physically disabled people, such as the Dasher predictive text input engine or the Gnome-voice-control application do not seem to have undergone significant development, or indeed a release, in over a year and appear to be stalled just at the brink of basic usability.
This stall leads to a difficult situation, because the very people needed to test the software and provide feedback and bug reports cannot quite use it without significant help, thus placing a barrier on further development. If that barrier can be overcome, physical accessibility efforts will hopefully pull together and achieve the same kind of momentum seen with GNOME, Orca, and various related projects.
In the meantime, many interesting projects are still being developed and sponsorship can help nudge them in the right direction. The Sphinx speech recognition project was part of the Google Summer of Code this year, for example, while the Opengazer gaze tracking project received some support from AEGIS. F123.org also sponsored accessibility improvements in WebKitGtk+. Such support not only benefits projects directly, but also serves to give them visibility, which can help attract potential users and contributors, thus building a stronger community.
The matter of accessibility is by no means the only stumbling block on the road to a wider adoption of the open desktop; however, with ever more stringent laws regarding accessibility in the workplace and an aging population likely to require an increasing level of accessibility from public services, it certainly is not an issue likely to fade away of its own accord. It may well be that solid, out-of-the-box universal accessibility is not something which can be achieved by one FOSS project, but requires a greater level of collaboration and concerted vision across all the projects and sub-communities which make up the open desktop as we know it.
For more information, see this post from Aaron Seigo: "Unlike traditional file managers, Files doesn't directly expose the file system. We see that as an implementation detail like 'which kernel drivers are loaded.' Yes, it's needed for the device to function, but the person using the device shouldn't have to care. Instead, Files promotes meaning and content. On starting Files, you select what you wish to view such as documents, images, music, videos, etc."

On the Wayland 1.0 front: "We now have responsible error handling, we have a well-defined atomic update mechanism and event dispatching is thread safe. I've been very happy to see the effort and the amount of patches on the list recently, and without that we couldn't have wrapped up the protocol and client API changes in time." The 1.0 release is expected before the end of the month.
Version 3.1.3 of gnutls is available. Improvements include support for DANE, a protocol used to verify certificate validity with DNSSEC, and the OCSP Certificate Status extension.
Newsletters and articles
Wired features an editorial from Linux Foundation Executive Director Jim Zemlin, who writes about the emerging competition in the automotive software platform market. "As automakers get into the computing business, the biggest hurdle they have to overcome isn’t each other – it’s consumer expectations driven by the rise of ubiquitous mobile computing. This is where I’d argue the battle between open and closed is going to play out the hardest in coming years … the next OS wars." As one would expect, Zemlin highlights the benefits of openness, and Linux in particular.

A separate blog post looks at why we don't have good video calling yet and what is being done to get there. "In addition to the nitty gritty of protocols and codecs there are other pieces that has been lacking to give users a really good experience. The most critical one is good echo cancellation. This is required in order to avoid having an ugly echo effect when trying to use your laptop built-in speakers and microphone for a call. So people have been forced to use a headset to make things work reasonably well."
On his blog, Tiago Vignatti compares the soon-to-reach-1.0 Wayland API against the existing X.org API, which is about 15 times larger. "Although X and Wayland’s intention are both to sit between the applications and the kernel graphics layers, a direct comparison of those two systems is not fair on most of the cases; while X encompasses Wayland in numerous of features, Wayland has a few other advantages" — chief among them, of course, is a much simpler API.
Page editor: Nathan Willis
Brief items

"FSF member Chris Webber started the GNU MediaGoblin project. He's leading a community team to write a next-generation social web system where users will share their experiences through photos, videos and audio, all without running proprietary software or centralizing personal data in the hands of a corporation. Right now MediaGoblin is partially developed, but the team needs financial support so that they can quit their day jobs for a year and perfect MediaGoblin's features to a professional level."

"PersonalWeb's software patent suit against Github and others threatens the freedom of the Web. In order to make sure that the Web can remain a free and accessible space for everyone, we need to rid ourselves of all the patents that threaten its viability."

"Since 2008, RFB has been subject to federal regulations that require the product of software development contracts to be published in the Brazilian Public [Free] Software Portal, licensed under the GNU GPL. Their contract with SERPRO (the Federal Data Processing Service), to develop several programs that RFB publishes on its web site for taxpayers to fill in and submit tax returns and other forms, should comply with the obligations established in this regulation, but RFB prefers to pretend the regulation “does not apply to these programs, because they do not meet the requirements to be published in the Portal,” as if their refusal to meet the requirements excused the non-compliance with the obligations."
Articles of interest

An open source success story comes from the administration of Vieira do Minho, a municipality in the north of Portugal. "The municipality has been using open source on its servers for years. Its database management systems is Postgres, on top of which the IT department built many Geographic Information Systems. Web, email, file and print services are all provided using the Debian open source distribution. And for telephony the municipality relies on Asterisk. In March this year, the council decided it was time to use open source not only for its servers but also for its desktop computers." (Thanks to Paul Wise)
Calls for Presentations

"We look forward to your submissions on the use of Python for Scientific Computing and Education. This includes pedagogy, exploration, modeling and analysis from both applied and developmental perspectives. We welcome contributions from academia as well as industry."
Upcoming Events

Video interviews with the speakers have been appearing; this announcement includes the last of the interviews and the news that live video streaming will be available during the conference.

The conference sponsors have been announced. "No surprises here, SUSE, as the main sponsor of the openSUSE Project, is supporting the conference."

"The Columbus Python Workshop for women and their friends is a free hands-on introduction to computer programming that's fun, accessible, and practical even to those who've never programmed at all before. We empower women of all ages and backgrounds to learn programming in a beginner-friendly environment."
|Linux Driver Verification Workshop||Amirandes, Heraklion, Crete|
|OpenStack Summit||San Diego, CA, USA|
|LibreOffice Conference||Berlin, Germany|
|MonkeySpace||Boston, MA, USA|
|14th Real Time Linux Workshop||Chapel Hill, NC, USA|
|Gentoo miniconf||Prague, Czech Republic|
|PyCon Ukraine 2012||Kyiv, Ukraine|
|PyCarolinas 2012||Chapel Hill, NC, USA|
|LinuxDays||Prague, Czech Republic|
|openSUSE Conference 2012||Prague, Czech Republic|
|PyCon Finland 2012||Espoo, Finland|
|PostgreSQL Conference Europe||Prague, Czech Republic|
|Droidcon London||London, UK|
|Firebird Conference 2012||Luxembourg, Luxembourg|
|PyData NYC 2012||New York City, NY, USA|
|October 27||Central PA Open Source Conference||Harrisburg, PA, USA|
|October 27||pyArkansas 2012||Conway, AR, USA|
|October 27||Linux Day 2012||Hundreds of cities, Italy|
|Technical Dutch Open Source Event||Eindhoven, Netherlands|
|Linaro Connect||Copenhagen, Denmark|
|Ubuntu Developer Summit - R||Copenhagen, Denmark|
|PyCon DE 2012||Leipzig, Germany|
|October 30||Ubuntu Enterprise Summit||Copenhagen, Denmark|
|OpenFest 2012||Sofia, Bulgaria|
|MeetBSD California 2012||Sunnyvale, California, USA|
|ApacheCon Europe 2012||Sinsheim, Germany|
|Embedded Linux Conference Europe||Barcelona, Spain|
|LinuxCon Europe||Barcelona, Spain|
|Apache OpenOffice Conference-Within-a-Conference||Sinsheim, Germany|
|LLVM Developers' Meeting||San Jose, CA, USA|
|KVM Forum and oVirt Workshop Europe 2012||Barcelona, Spain|
|November 8||NLUUG Fall Conference 2012||ReeHorst in Ede, Netherlands|
|Free Society Conference and Nordic Summit||Göteborg, Sweden|
|Mozilla Festival||London, England|
|Python Conference - Canada||Toronto, ON, Canada|
|SC12||Salt Lake City, UT, USA|
|19th Annual Tcl/Tk Conference||Chicago, IL, USA|
|Qt Developers Days||Berlin, Germany|
|PyCon Argentina 2012||Buenos Aires, Argentina|
|Linux Color Management Hackfest 2012||Brno, Czech Republic|
|November 16||PyHPC 2012||Salt Lake City, UT, USA|
|8th Brazilian Python Conference||Rio de Janeiro, Brazil|
|Mini Debian Conference in Paris||Paris, France|
|November 24||London Perl Workshop 2012||London, UK|
|Computer Art Congress 3||Paris, France|
|Lua Workshop 2012||Reston, VA, USA|
|CloudStack Collaboration Conference||Las Vegas, NV, USA|
|Open Hard- and Software Workshop 2012||Garching bei München, Germany|
|Konferensi BlankOn #4||Bogor, Indonesia|
|December 2||Foswiki Association General Assembly||online and Dublin, Ireland|
|Qt Developers Days 2012 North America||Santa Clara, CA, USA|
|Open Source Developers Conference Sydney 2012||Sydney, Australia|
|December 5||4th UK Manycore Computing Conference||Bristol, UK|
|CISSE 12||Everywhere, Internet|
|26th Large Installation System Administration Conference||San Diego, CA, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds