Weekly Edition for October 18, 2012

Wikitravel and Wikimedia on a collision course

By Nathan Willis
October 17, 2012

Building a service around crowd-sourced information is a difficult undertaking — even top-tier projects like OpenStreetMap and Wikipedia have had their share of troubles, particularly when it came to steering the community of volunteer contributors. Now, the owner of the once-dominant travel wiki Wikitravel is squaring off against its former community members in an acrimonious dispute that has already resulted in multiple lawsuits.

At the root of the case is the desire of some former Wikitravel contributors to start a new travel-oriented site and import the original site's data. Wikitravel consists of user-contributed tourism and travel information for sites around the globe; as with many crowd-sourced projects, the data is licensed to permit re-use: Wikitravel uses Creative Commons Attribution-ShareAlike (CC-BY-SA).

The site was founded in 2003, and was sold to its current owner Internet Brands (IB) in 2006. IB owns a number of unrelated web properties, but it initially pledged to the Wikitravel community that it would keep things as it found them, and limit its revenue-seeking efforts to "unobtrusive, targeted, well-identified ads." Like Wikipedia, Wikitravel's content was divided into language-specific subdomains, each run more-or-less independently by a local contributor community. After the sale to IB, the German and Italian Wikitravel communities promptly left and started the rival site Wikivoyage, but other language communities stayed, including the largest, English.

Of course, the unobtrusive-advertising pledge was not a legally-binding contract, and over the years the ads on Wikitravel became more intrusive (including animated Flash ads, which were frequently flagged by users as being against the site's advertising policy), which led to dissatisfaction among the remaining editors. The community also complained that IB did not respond to bug reports, and allowed the version of MediaWiki running the site to languish several years without updates. But the final straw came in a 2011 proposal to integrate sponsored hotel-and-travel booking elements directly into article pages.

Opinion might vary as to how "intrusive" any given monkey-punching Flash ad is, but the booking tool seemed to detract from the site's purpose as an unbiased information source. Volunteer Peter Fitzgerald, like many others, voiced his opposition, saying in the advertising policy discussion:

The more our site looks like a cynical tool for revenue, the less people will be enthusiastic about volunteering their own time and effort towards improving the site, and the less savvy readers will be inclined to trust our information as impartial and sincere.

Nevertheless, IB proceeded with integration of the booking tool in early 2012.

In April, Wikitravel volunteer editors decided that they had had enough, and reached out to Wikimedia (the organization behind Wikipedia and related sites) with a proposal to start a new travel wiki, based on a merger of content from Wikivoyage, Wikitravel, and a few smaller efforts. The proposal became a formal "request for comments" and was subsequently approved by Wikimedia's board following a public vote.

IB, however, did not greet the proposal with similar enthusiasm. According to Wikitravel contributor Jani Patokallio, IB responded first by removing discussion of the migration from Wikitravel's talk pages, blocking some participating users, sending threatening messages to others, and removing the administrator privileges of several. On August 29, IB stepped up its response significantly when it filed a lawsuit against James Heilman and Ryan Holliday, two volunteer Wikitravel contributors who participated in the Wikimedia migration plan. The suit charges Heilman and Holliday with trademark infringement, trade name infringement, unfair competition, and civil conspiracy.

The filing [PDF] is eleven pages of lawyerly prose, but the gist of it is the accusation that Heilman and Holliday offered a "competitive website by trading on Internet Brands’ Wikitravel Trademark" — which seems to mean that they used the name Wikitravel when proposing and discussing the migration to Wikimedia. The suit also repeatedly refers to the rival site as "Wiki Travel Guide," a name that IB claims infringes on its trademark. Nevertheless, no actual site has yet resulted from the migration proposal, and only on October 16 did the new project decide on a name — which will evidently remain Wikivoyage. Patokallio examined the suit in detail in another post, including the trademark infringement claims.

Claims and countersuits

Notably, the lawsuit does not take issue with the right of the departing contributors (or anyone else) to take Wikitravel page content and import it into a rival project, but it does suggest (in claim 31) that the volunteers are misappropriating the CC-BY-SA-licensed content by not properly attributing Wikitravel as its source. This is still a problematic claim in light of the fact that, when the suit was filed, the Wikimedia-run site did not yet exist. But Creative Commons highlighted the licensing dimension of the suit in a post on its blog, where it observed that if the licensor of CC-BY-SA content (in this case, IB) "wants to completely disassociate themselves from particular reuses, they have the right to request that all attribution and mention of them be removed, and those reusing the work must do so to the extent practicable." Consequently, if IB so wished, it could request that Wikitravel not be mentioned on the new site as the source of the imported content.

Such an action would presumably stop the alleged trademark infringement at the new site, were it not that IB concurrently claims that the new site fails to properly credit Wikitravel as a source. To that, the Creative Commons post also notes that even if Wikitravel or another licensor feels that it is not receiving proper attribution for a derived work, the license only requires the creator of the derivative work to provide attribution in "a manner 'reasonable to the medium or means' used by the licensee, and for credit to be provided in a 'reasonable manner.'" The post does not go into detail, but the suggestion is that this is simple to do — perhaps through the wiki engine's existing revision tracking system at import-time.

Wikimedia took an even harsher view of the lawsuit against Heilman and Holliday, calling it a clear attempt to "intimidate other community volunteers from exercising their rights to freely discuss the establishment of a new community" that ultimately seeks to prevent Wikimedia from starting a competing project. The IB filing does mention Wikimedia, though briefly, alleging that it may add Wikimedia and other "co-conspirators" to the list of defendants at a later time. Wikimedia then sued [PDF] IB on September 5 seeking a declaratory judgment that IB has no right to impede or disrupt the creation of a rival travel wiki.

Wikimedia's suit seeks to recover attorney's fees for the defendants, but it primarily seeks relief by asking the court to keep IB from interfering either in the import of data from the Wikitravel site or in communication between Wikitravel contributors — including ex-contributors. Holliday has taken proactive measures, first seeking to transfer the lawsuit from state to federal court, then seeking [PDF] a dismissal of the case under anti-"Strategic Lawsuit Against Public Participation" (SLAPP) legislation. A SLAPP suit is one in which the plaintiff is primarily trying to censor or intimidate the defendant (as opposed to one where it actually believes it will win at trial). IB denied that assertion in an October 16 response [PDF], and argued that the case is a legitimate dispute among business competitors. Holliday has until October 22 to file a reply to this latest filing, and at present the first court date is scheduled for November.

Where to now?

Reading through it, IB's suit does not seem to have much depth. It repeatedly refers to the rival travel wiki site (and in particular to the "confusingly similar" name of that site) in the present tense, despite the fact that the site and its name did not exist when the suit was filed, and it couches its allegations in business terms (such as claiming the defendants are "profiting" from IB's trademark and have subsequently been "unjustly enriched"), despite the projects' not-for-profit nature. The "civil conspiracy" charge is also puzzling, and appears to amount solely to the fact that the listed defendants discussed the project. But then again, one can rarely predict where a lawsuit will head; IB is no stranger to acrimonious litigation — it is currently embroiled in another suit against former employees for writing a rival web discussion forum package similar to IB's vBulletin product.

Alienating one's users and crowd-sourcing contributors is rarely a wise move; actively suing them in addition probably guarantees that Wikitravel (at least under IB's care) is doomed. But that alone does not guarantee success for the new Wikivoyage project. Should the lawsuit be dismissed or turn out in the new project's favor, the new travel wiki project will still have a technical hurdle to overcome. Although IB does not dispute the license under which Wikitravel's content is published, the company has never provided database dumps or other convenient ways to export the data in bulk. That leaves page-scraping and other tedious procedures to extract the thousands of pages and uploaded media, not to mention the extra challenge of removing spam and vandalism (which have been on the rise since the exodus of the main Wikitravel editor community).

A still bigger hurdle might be attracting the eyes of site visitors — the web is littered with failed attempts to fork popular properties, and even a powerful advocate like Wikimedia does not guarantee success. For example, most of us have probably forgotten about Amazon's "Amapedia" project, which attempted to build a crowd-sourced product review wiki. Despite being directly integrated with the web's number one retailer, it flopped.

No doubt Wikimedia is the largest player in the open data and crowdsourced-material market, but that does not mean all of its projects can automatically unseat pre-existing competitors; consider the relative mind-share enjoyed by its ebook library Wikisource compared to Project Gutenberg. The vast majority of the old Wikitravel editors seem to already be on board with the new effort, which is vital — but it could still be a long, rocky road ahead.

Comments (7 posted)

Another approach to UEFI secure boot

By Jake Edge
October 17, 2012

On October 26, Microsoft will release Windows 8. Normally, a release of that kind would be largely uninteresting to Linux users, but Windows 8 branding brings with it a hardware requirement that will definitely impact those wanting to use Linux: UEFI secure boot. Distributions such as Fedora, SUSE, and Ubuntu have been working on their plans for supporting secure boot for more than a year now, so it may be something of a surprise that the Linux Foundation (LF) recently announced its own entrant into the secure boot derby. But the LF "pre-bootloader" is mostly meant as a stop-gap measure for those distributions that have yet to decide on their secure boot strategy.

Secure boot is meant to protect operating systems from pre-boot malware by cryptographically protecting the first step of the boot process. Only binaries signed by keys stored in the UEFI firmware will be allowed to run when secure boot is enabled—which must be the default for certified hardware. On Windows and other systems, the first-stage bootloader will check the signatures of the next steps, but that is not strictly required. Since the keys stored in the firmware are under the control of the hardware makers, it is likely that there will only be keys from Microsoft and the manufacturer stored there. That leads to the distasteful, but unavoidable, need to get bootloaders signed by Microsoft in order to boot on secure-boot-enabled systems.
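The core of that firmware check can be sketched as follows. This is an illustrative model in Python, not actual UEFI code: real firmware verifies RSA signatures against keys in the "db" variable, but here both keys and hashes are modeled as a set of allowed SHA-256 digests, with "dbx" standing in for the blacklist database mentioned later in this article.

```python
import hashlib

def allowed_to_run(binary: bytes, db: set, dbx: set) -> bool:
    """Simplified model of the UEFI secure boot check: a binary
    runs only if it matches the allowed database (db) and does
    not match the blacklist database (dbx)."""
    digest = hashlib.sha256(binary).hexdigest()
    if digest in dbx:        # blacklisted binaries never run
        return False
    return digest in db      # otherwise it must appear in db

# A binary "signed" (here: hashed) into db boots; anything else does not.
bootloader = b"\x7fELF...first-stage bootloader image"
db = {hashlib.sha256(bootloader).hexdigest()}
print(allowed_to_run(bootloader, db, dbx=set()))   # True
```

The important property is visible even in this toy version: the decision rests entirely on what the hardware maker shipped in db, which is why a Microsoft signature ends up being the only practical way in.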

Much of the hue and cry surrounding secure boot has been about the need for Microsoft-signed bootloaders, but there is little alternative. One could imagine some kind of "Linux certificate authority" that could get its key installed on systems, but that ship has sailed at this point—hardware will be available soon with just the keys from Microsoft and the vendor. Any solution for booting Linux on those systems requires a bootloader signed by Microsoft or disabling secure boot altogether. The solutions vary on exactly what that bootloader actually does.

The "pre-bootloader"

The LF approach is the "smallest piece of code we could think of" that would allow Linux to boot on Windows 8 systems, James Bottomley, member of the LF Technical Advisory Board (TAB), said in email. Bottomley did much of the work on what he calls the "pre-bootloader" that will boot any Linux—signed or unsigned—using a "present user" test. That test ensures that there is a user present at boot time. The pre-bootloader is, of course, signed by Microsoft, but instead of requiring a signature on the next stage of the boot process (typically a full-blown bootloader like GRUB2), it will load and run any code if the user at the keyboard allows it.

Beyond that, if the system is in "setup mode", the pre-bootloader will ask the user if they want the signature of the second-stage bootloader installed into the UEFI secure database. That will allow unattended booting of the system in the future. While it is mandatory under the certification requirements for hardware vendors to provide a way to put systems into setup mode (or to disable secure boot), the user interface to do so is left unspecified. That means there will be several—possibly many—different ways to put systems into setup mode. In order to collect information about these different mechanisms, the pre-bootloader will refer users to an LF web site that will be gathering and disseminating instructions for putting systems into setup mode.
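Put together, the pre-bootloader's decision flow amounts to something like the following sketch. It is purely illustrative (the real code is UEFI C, and the function and parameter names here are invented), but it captures the two paths described above: a present-user test for untrusted second stages, plus optional hash enrollment when the system is in setup mode.

```python
def pre_bootloader(next_stage_trusted: bool, setup_mode: bool,
                   user_says_yes, enroll_hash) -> bool:
    """Model of the LF pre-bootloader's boot decision.

    user_says_yes: callable implementing the present-user prompt
    enroll_hash:   callable that stores the second-stage hash in
                   the UEFI secure database (setup mode only)
    """
    if next_stage_trusted:          # hash already in the database:
        return True                 # unattended boot
    if not user_says_yes():         # present-user test failed
        return False
    if setup_mode:                  # offer to enroll the hash so
        if user_says_yes():         # future boots are unattended
            enroll_hash()
    return True                     # run the second stage
```

Once `enroll_hash()` has run, subsequent boots take the first branch and no user needs to be present.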

The intent, Bottomley said, is "to ensure that smaller distributions [have] a policy free option they could turn to as an interim solution while they sorted out what their own security policy around secure boot would be". The pre-bootloader binary, once signed by Microsoft, will be available on the LF web site for anyone to use. Distributions that don't want to have their own bootloader signed or, indeed, participate in secure boot at all, will be able to use the pre-bootloader for both live distributions (on CD/DVD or USB devices) and for installations to the hard disk. However, users will either need to get their systems into setup mode and install the second-stage bootloader signature/hash—or be present on every boot.

Requiring users to find and enable setup mode has a minor side benefit, Bottomley said. The information on how to do that will be useful for others, to either permanently disable secure boot or to install their own keys (either in addition to the existing keys or supplanting them). For booting a live distribution, though, testing for user presence should not be that much of a burden.

Shim
There is an alternative to requiring systems to be put into setup mode or for users to be at the keyboard when booting: a pre-bootloader being used by (at least) Fedora and SUSE, called "shim". Shim takes a different approach, but still allows distributions to set their own policies. There are two main differences between it and the LF pre-bootloader, though. Shim can contain an internal keyring for keys used to sign the second-stage bootloader (or just the cryptographic hash(es) of authorized bootloaders), which will be checked in addition to the factory-installed keys. In addition, in its present incarnation, shim will not provide a way to circumvent signature/hash checking with a present user test. According to shim developer Matthew Garrett, there is strong consideration of adding that kind of test for removable media in support of live distributions and installation media, though.

Even a shim that has an internal keyring can support booting binaries that are signed by keys that don't appear on that keyring (and are not present in the firmware). It does that by using an idea that SUSE came up with for storing keys and hashes without requiring users to find and enable setup mode: the "Machine Owner Key" (MOK). It turns out that the UEFI specification provides some secure storage locations that can only be accessed (and, importantly, changed) during the boot process. Shim uses that storage as a place to put keys or hashes if the user directs it to, which avoids requiring user presence on subsequent boots.

Both Fedora and SUSE will be releasing shim binaries that contain their distribution keys on the internal keyring and are signed by Microsoft. Because of the MOK storage idea, though, those binaries could be used by other distributions. Even distributions that are not planning to sign their second stage (and their kernel) could use one of the signed shims. A recently added shim feature will add hashes (rather than just signatures) to the MOK. So a distribution that wants to minimize its dealings with secure boot can simply ship one of those shims and instruct its users to store the second-stage hash in the MOK as prompted by shim. Or those distributions can use the LF pre-bootloader.

Pre-bootloader vs. shim

That last part is a bit of a sticking point, at least for Garrett. Because the pre-bootloader comes with an LF "stamp of approval", it may well be seen as "the Linux solution" for secure boot. But Garrett believes that the pre-bootloader isn't "terribly useful". All of the functionality that it provides is also available in shim, he said, except for the ability to "hit y and it'll run whatever you want". Instead, shim users can just use the interface to add keys or hashes to the MOK and boot unattended forevermore.

There are also some dangers in the LF approach, Garrett said. Because non-technical users are easily fooled into clicking through security warnings, the pre-bootloader could be used to attack Windows:

Users are trained to click through pretty much any security prompt that they see, and if an attacker replaces a legitimate bootloader with one that asks them to press "y" to make their computer work, they'll press "y". If that bootloader then launches a trojaned Windows bootloader that launches a trojaned Windows kernel, that's kind of a problem.

While trojaned Windows binaries aren't directly a problem for Linux, they could be a problem for signed first-stage bootloaders. Secure boot has a database of blacklisted keys and hashes that can be used to stop malware from running on the system. If the LF pre-bootloader is used by some form of malware, it could be blacklisted. That's also true of any shim similarly used, but shim's present user "test" is a bit more complicated than that of the pre-bootloader, so malware authors may be more inclined to target the simpler test.

In any case, signed Linux first-stage bootloaders are clearly at some level of risk of being blacklisted down the road. That risk is inherent in the fact that the secure boot requirements are set by Microsoft to further its own ends. One would guess it won't use its power over the contents of the blacklist indiscriminately, but there is no technical obstacle to it doing so.

An alternative that the LF could have taken would be to create a shim with an empty keyring and get that signed, shim developer Peter Jones said in email. A shim built that way would only run code signed by the keys in the firmware. Since a shim built that way wouldn't have the key or hash for the second-stage bootloader available in the databases it consults, it wouldn't run the second stage, but it would prompt users to add that key or hash to the MOK on first boot. That would allow smaller distributions or those uninterested in signing their binaries to use shim—and avoid the present user test (or setup mode) except for the first boot.

Garrett in particular seems irritated by the LF approach. Because of the uncertainty on how to get systems into setup mode, he believes the pre-bootloader risks making Linux a second-class citizen. In the comment linked above, he warned:

Linux becomes the OS that can't reboot itself. It's the OS that pops up an ugly text entry box every time you turn your computer on. It's the OS that asks you if you're sure you want to run potentially insecure code. 10 years of progress in making Linux accessible to users, gone.

Furthermore, he is unhappy that the LF went its own way, rather than working with Fedora and SUSE on shim. In another comment, Garrett is not particularly pleased with the decision to build a separate pre-bootloader after the TAB had been urged to work together on shim:

We encouraged them to, but they felt that it was too complicated and violated their understanding of the trust model. I obviously disagree, and I'm obviously not impressed by the Linux Foundation picking an approach that's at odds with 100% of the member companies who've voiced opinions on the topic, but the Technical Advisory Board is an autonomous group with no community oversight so there's little opportunity to influence them.

That's not quite the way Bottomley sees things, however. The LF and TAB were "exclusively concentrating on tools that keep linux booting and installing", with an emphasis on the simplest solution. As he noted, "Shim was originally designed as a solution to take advantage of secure boot and enforce a security policy rather than one that simply permits any linux distribution to boot". As a neutral party, at least with respect to distributions, the LF did not want to take sides on what kinds of secure boot policies distributions should choose:

Originally we weren't sure that the distributions would be ready (or at least that the non-major distributions would be ready) for secure boot. Later there were disagreements between distributions over what security policies should be enforced.

It may well turn out that the LF pre-bootloader is simply a temporary measure and that shim can handle all of the different use cases—the LF code uses some parts of shim, after all. Or perhaps the simplicity of the pre-bootloader code will be attractive to some distributions. The pre-bootloader requirement to get the system into setup mode might be attractive to some users or distributions that want to ensure their keys are the only ones present in the system, for example. Bottomley and the LF would be fine with any of the possible outcomes:

The only LF concern is that as an organisation enabling the Linux ecosystem, we don't want to be seen as mandating policy for taking advantage of secure boot. If all distributions decide to adopt shim + MOK, we'd be completely happy, but if some decide not to, that's fine with us too.

Booting Linux on new x86 hardware is clearly going to be a bit more difficult than it has been in the past. Due to a lot of hard work from various folks, though, it will be a lot easier than it could have been. In the end, there is room for both solutions, though there is merit to Garrett's concern that the LF solution will be taken as "the Linux solution" for secure boot. At last report, Ubuntu planned to use its own first-stage bootloader, and other options may arise, so the "one true Linux secure boot solution" may never really exist.

Comments (22 posted)

A report from the first Korea Linux Forum

By Jonathan Corbet
October 16, 2012
The Linux Foundation held its first ever Korea Linux Forum (KLF) in Seoul, South Korea, in mid-October. The stated goal was "to foster a stronger relationship between South Korea and the global Linux development community." In truth, South Korea is already a strong presence in this community; arguably KLF was more of a recognition and celebration of that relationship. In any case, one conclusion was clear: there is a lot going on in this part of the world.

Some years ago, the Open Source Development Laboratories recognized that Japanese companies were increasingly making use of Linux but were not always participating in the development community. To help close the loop, OSDL began a series of events where Japanese developers could hear from — and talk with — developers from the wider community; that practice continued into the Linux Foundation era. Your editor was lucky enough to be able to attend a number of these events, starting in 2007. These conferences cannot claim all of the credit for the marked increase in contributions from Japan over the last several years, but it seems clear that they helped. The Japanese Linux Symposium has since transformed into LinuxCon Japan, a proper development conference in its own right.

KLF is clearly meant to follow the same pattern, but there is a big difference this time around: community participation from Korea is already significant and increasing in a big way. For example, Samsung first appeared in the list of top kernel contributors in the 2.6.35 development cycle over two years ago; it has held its place on that list ever since. Contributions from Korean developers are clearly not in short supply. That made the job of the KLF speakers easy; rather than encouraging Korean developers to participate more, they were able to offer their thanks and talk more about how to get things done in the community.

The first talk (after the inevitable cheerleading session by Linux Foundation head Jim Zemlin) was by Samsung vice president Wonjoo Park; his goal was to make it clear that Linux is an important resource for Samsung, the "host sponsor" for the event as a whole. Software, he said, is the means for product differentiation in today's market; it is the most important part of any product and drives the business as a whole. Samsung, it seems, is a software company.

The company got its start with Linux in 2003, using a distribution from MontaVista. Use of Linux expanded over the years: appliances in 2005, televisions in 2006, and so on. Samsung's first Linux smartphone came out in 2004; it featured a voice-activated phone book. In 2007 Samsung joined LiMo; the first LiMo-based phone came out in 2009. In 2012, products all across the Samsung line, from phones and tablets to home theater systems, cameras and printers, are all based on Linux.

Now, of course, much of the company's efforts are going into furthering the Tizen distribution. He mentioned the recently-posted F2FS filesystem: Samsung could have held onto that code and kept F2FS proprietary, he said, but that would have deterred innovation; sharing it, instead, allows the company to accept changes from others. Samsung has also put together an extensive license compliance process after a "rough start" that forced the company to apologize to the community. One of the results is a site providing one-stop shopping for the source code for Samsung's products.

In summary, he said, Linux has become a "core competitive competence" for Samsung; the company would not be able to do what it does without it.

Korean rockstar hacker Tejun Heo gave a well-received keynote presentation on what it is like to be a community developer. It is hard, he said, but then, working in Korean companies, where the expectations are high, is hard in general. Developers who can succeed in the corporate setting can make it in the community as well. Developing in the community has a lot of rewards, including the fact that credit for the work stays with the developer rather than accruing to the sponsoring company. It is a challenging path, but full of benefits.

KLF was, like the early Japan events, oriented toward information delivery rather than the sort of critical discussion of ongoing work that one finds at a serious development conference. That does not mean that there was no development work on display, though. Arguably the most interesting talk was Kisoo Yu's discussion of the big.LITTLE switcher (originally written by Nicolas Pitre). Big.LITTLE is an ARM-based system-on-chip architecture that combines a number of slow, power-efficient processors with fast, power-hungry processors on the same chip. In this particular case, Kisoo discussed an upcoming Samsung Exynos processor combining four Cortex A7 processors with four Cortex A15's — yes, an eight-core SoC.

Big.LITTLE poses a number of interesting challenges for the kernel: how does one schedule tasks across the system to optimize both throughput and power consumption? Kisoo described two approaches, the first of which involves running Linux under a simple hypervisor that transparently switches the hardware from slow mode (running on all four A7's) to fast mode (all four A15's) without the kernel's participation or awareness. The alternative approach has the kernel itself explicitly managing the SoC as a four-processor system, switching each one independently between the fast and slow cores as if it were simply adjusting the CPU's clock frequency. Either way, a number of heuristics have been developed to try to determine the best time to make the switch from one to the other. This SoC also offers a hardware feature that can quickly transfer relevant L2 cache entries from one core to another to speed the switching process; a switch can be completed in 30µs or so.
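As a rough illustration of the second, per-CPU approach, the switching decision might resemble the sketch below. The thresholds and the hysteresis band are invented for the example and bear no relation to the actual heuristics in the Exynos or big.LITTLE switcher code.

```python
FAST_UP = 0.8     # load above which to switch to the A15 (assumed value)
SLOW_DOWN = 0.3   # load below which to fall back to the A7 (assumed value)

def next_core(current: str, load: float) -> str:
    """Pick the core type for one CPU, treating the A7/A15 pair
    like two operating points of a single logical processor."""
    if current == "A7" and load > FAST_UP:
        return "A15"   # demand is high: migrate to the fast core
    if current == "A15" and load < SLOW_DOWN:
        return "A7"    # demand is low: save power on the slow core
    return current     # hysteresis band: otherwise stay put

print(next_core("A7", 0.9))   # A15
print(next_core("A15", 0.5))  # A15
print(next_core("A15", 0.1))  # A7
```

The gap between the two thresholds is the point of the sketch: without it, a load hovering near a single cutoff would bounce the CPU between clusters, and each bounce costs a state transfer (even a fast 30µs one).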

Perhaps the most interesting takeaway from the talk is that we still don't really have a good idea for how to manage these systems. This SoC is a true eight-core processor; it would seem that an optimal approach would manage it as such rather than as a four-core system with a big "turbo" button. The fact that we are, thus far, unable to do so is not an indictment of the developers working on the task in any way; it is clearly a hard problem without much in the way of useful solutions in the literature. As is the case with many other hard operating system problems, the work being done now will get us closer to an understanding of the issues and the eventual development of better solutions.

One thing that became clear at the inaugural KLF is that Korea is increasingly supplying a lot of sharp minds ready to work on problems like this, and that this trend looks set to continue indefinitely. Energy abounds, as does, seemingly, a good sense of fun. Your editor would like to thank our hosts in Korea for an engaging event, for treating us so well, and even for inflicting "Gangnam style" K-pop music on us at the conference dinner. And, of course, thanks are due to the Linux Foundation for supporting your editor's travel to the event.

Comments (4 posted)

Page editor: Jonathan Corbet


Do Not Track Does Not Conquer

By Nathan Willis
October 17, 2012

At times it can seem like protecting one's online privacy is a Sisyphean struggle. Even when the software industry listens to the concerns of privacy advocates, the site owners and secretive data-collectors who profit from pillaging private information are quick to find every loophole and work-around in existence to regain their access to profitable data. Such seems to be the case with the Do Not Track HTTP header (DNT), which has garnered support from browser vendors — plus a steady stream of assaults aimed at undermining it, courtesy of advertisers.

Preferences, browsers, and intent

Although "opt out" mechanisms for web tracking have been discussed for years, the DNT HTTP header approach was first proposed by Mozilla's Mike Shaver. It has subsequently been developed under the stewardship of the World Wide Web Consortium's (W3C) Tracking Protection Working Group. According to the latest draft of the specification, DNT is an optional HTTP header field that can take either 0 or 1 as a value, where 1 indicates that the user prefers not to be tracked, and 0 indicates that the user prefers to allow tracking. The key issue, however, is that the header is intended to represent a user preference — which most interpret to mean a conscious choice on the user's part — and it must not be sent at all if the user has not expressed such a preference to the browser.
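Server-side, interpreting the header under those rules is simple; here is a minimal sketch in Python (the function name is ours, and the three-way result simply makes the spec's distinction between "0", "1", and an absent header explicit):

```python
def dnt_preference(headers: dict) -> str:
    """Interpret the DNT header per the W3C draft: '1' means the
    user prefers not to be tracked, '0' means the user permits
    tracking, and an absent header means the user has expressed
    no preference at all."""
    value = headers.get("DNT")
    if value == "1":
        return "do-not-track"
    if value == "0":
        return "allow-tracking"
    return "no-preference"    # header absent (or malformed)

print(dnt_preference({"DNT": "1"}))   # do-not-track
print(dnt_preference({}))             # no-preference
```

The third branch is the one the controversy turns on: a browser that has not been told anything by its user is supposed to land there, not in the first branch.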

Initially Mozilla was the only browser vendor behind DNT, but Opera added support in July in Opera 12, as did Apple a few weeks later in Safari 6. Google added support in Chromium on September 13. In all four browsers, the DNT setting must be manually enabled in the application preferences. Mozilla contended from quite early on that this is a critical facet of making DNT a workable solution. If DNT were enabled automatically or by default, it would no longer represent "a choice made by the person behind the keyboard," but one made by the browser vendor.

The decision was controversial — after all, reasoned critics, who in their right mind wants to be tracked? But Mozilla stood firm, and eventually the other browser makers followed suit. Until June 2012, that is, when Microsoft announced that Internet Explorer (IE) 10 (which is scheduled to ship with Windows 8) would present the DNT option as a check-box shown to the user during installation, with the do-not-track option selected by default.

But enabling DNT by default violates the specification, opponents argued, and strips it of its meaning. And if the DNT header does not reflect an actual user's decision, the argument goes, advertisers will be justified in ignoring it. Apache's Roy Fielding objected strongly enough that he committed a change that causes the web server to un-set the DNT header when it is sent by IE 10. Fielding is a member of the W3C Tracking Protection Working Group, and his log message for the commit said that "Apache does not tolerate deliberate abuse of open standards." He elaborated on that interpretation in the inevitable argument that followed on GitHub, calling Microsoft's decision broken because it violates the specification's requirement that the DNT header default to "unset." Apache, he said, "has no particular interest in what goes in the open standard -- only in that the protocol means what the WG says it means when the extra eight bytes are sent on the wire."
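The change itself is small; it amounts to a pair of stock httpd directives along these lines (a sketch using mod_setenvif and mod_headers, matching the IE 10 User-Agent string):

```apache
# Flag requests from IE 10, whose DNT header is enabled by default and
# therefore (per the specification) need not reflect an explicit user choice.
BrowserMatch "MSIE 10.0;" bad_DNT
# Strip the DNT header from those requests before applications see it.
RequestHeader unset DNT env=bad_DNT
```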

Conspiracy theorists might suspect that Microsoft's decision is a subtle ploy to undermine DNT entirely to curry favor with advertisers and other user-tracking firms. If so, the advertising world is doing an excellent job of maintaining a cover story; the Association of National Advertisers (ANA) publicly criticized the decision in an open letter to Microsoft management.

Step right up

Regardless of what happens on the browser and server fronts, DNT still relies on voluntary compliance on the part of site administrators and service providers — and, by extension, compliance that matches what the user intends. The meaning of DNT might seem straightforward, but the people who make their money tracking users cannot be forced to agree. In September, Ed Bott at ZDNet reported that the Interactive Advertising Bureau (IAB) and the Digital Advertising Alliance (DAA) "devised their own interpretation" of DNT, under which they would continue to collect information, but would refrain from using that information to deliver targeted ads to the browser. Presumably that restraint lasts only for the duration of the browsing session in which DNT is sent.

Lest anyone propose a "Do Not Target Ads" HTTP header that IAB and DAA might conversely interpret as a reason to stop collecting tracking information, remember that nothing obligates advertisers or other information brokers to react to the header at all. Grant Gross at IDG said at least one site, a "tech-focused think tank" called the Information Technology and Innovation Foundation (ITIF), has unilaterally decided it will simply ignore the DNT header, and its site will report that fact to visitors.

Other members of the advertising business have embarked on their own campaigns to nip DNT in the bud. In June, the US Senate held hearings about tracking and DNT in particular. As the Electronic Frontier Foundation (EFF) observed, ANA representative Bob Liodice testified at the hearings that DNT would undermine cybersecurity, including "issues such as online sexual predators and identity theft." The Senate did not seem to buy Liodice's argument (Senator Jay Rockefeller, chairman of the Committee on Commerce, Science, and Transportation, declared the cybersecurity argument "a total red herring"), although the EFF noted that online tracking does raise important law enforcement questions in addition to its advertising angle.

Most recently, DNT critics gathered at the W3C Tracking Protection Working Group meeting in Amsterdam, where the Direct Marketing Association (DMA) proposed that an exception be added to the DNT specification for "marketing." The EFF blog entry about the meeting quotes the DMA representative as saying:

Marketing fuels the world. It is as American as apple pie and delivers relevant advertising to consumers about products they will be interested at a time they are interested. DNT should permit it as one of the most important values of civil society.

Such an "exception" would seem to cover the precise tracking scenario for which DNT is designed, and indeed other members of the working group fought back. Fielding accused DMA of "raising issues that you know quite well will not be adopted." The EFF views DMA's participation in the meeting as an attempt to undermine or derail the specification-writing process. That is a bit of a judgment call, but it is clear from the latest traffic on the working group's mailing list that DMA, DAA, and other advertising groups are not meshing well with the software industry representatives who typically account for the bulk of W3C participation. In recent weeks there have been multiple threads about redefining basic terms like "service provider" and "user agent" that indicate (at the very least) a culture clash.

On the plus side, there have been sites and web services that have voluntarily announced their intention to comply with DNT; Twitter is the highest-profile. But the specification is far from completion, and as recent events show, voluntary compliance will only take care of a subset of the data-collecting entities on the web today. In the GitHub comment linked to above, Fielding speculated that the long-term ploy of DNT advocates was to get widespread adoption, then to push for mandatory compliance through legislation. Whether that will happen is anyone's guess; the US Federal Trade Commission (FTC) has endorsed DNT, which in addition to the US Senate hearings might provide enough evidence to make the advertising industry wary.

Implementing a campaign of "good enough for most" self-regulation would be one path to avoiding such government oversight, and derailing or gutting the specification could be effective, too. At the moment, the advertising business seems to be pursuing both tactics. It is up to the W3C and privacy advocates to respond, but at least for the time being the only guaranteed way for users to safeguard their privacy remains the do-it-yourself approach: Tor, NoScript, Adblock Plus, and so on. A world where user-tracking is simply not an issue sounds nice — it just doesn't sound likely in the near-term.

Comments (27 posted)

Brief items

Security quotes of the week

But at least it's patented by a notorious patent troll, which means that other jackasses who try to implement this stupid idea will find themselves tied up in absurd, wasteful lawsuits. It's mutually assured dipshits.
-- Cory Doctorow on a patent by Intellectual Ventures for 3D printing DRM

Use of the card, accepted by every major Bay Area public transit system, is soaring with 689,000 transactions a day and more than 1 million active Clipper cards. Many cardholders might not realize that data tracking their every move on public transit is stored on computers and available to anyone with a search warrant or subpoena. Personal data can be stored for seven years after a Clipper account is closed, according to the commission's policy. In addition, a new smartphone app, called FareBot, allows anyone to scan a Clipper card and find out where the owner has been.
-- NBC Bay Area notes that San Francisco "Clipper Cards" reveal users' movements to authorities

Comments (8 posted)

Attack code for Firefox 16 privacy vulnerability now available online (ars technica)

Firefox 16, which was released on October 9, has subsequently been withdrawn due to a privacy leak. Ars technica looks at code that can exploit the flaw, which is not present in Firefox 15. "In short order, he was able to take advantage of his discovery to fashion proof-of-concept code that forced Firefox 16 to identify a visitor's Twitter handle whenever the user was logged in to the site. The eight-line code sample takes about 10 seconds to reveal the username, and it wouldn't be hard for developers to expand on that code to create attacks that extract personal information contained in URLs from other websites."

Comments (6 posted)

Firefox 16 re-released fixing multiple vulnerabilities (The H)

Mozilla has now released version 16.0.1 of Firefox, fixing the security hole discovered October 10 in Firefox 16, as well as a few other incidental issues. The H has a brief recap of the situation, including availability of the corresponding update for other Mozilla products.

Comments (10 posted)

New vulnerabilities

cxf: multiple vulnerabilities

Package(s): cxf  CVE #(s): CVE-2012-2379 CVE-2012-2378 CVE-2012-3451
Created: October 12, 2012  Updated: October 17, 2012

From the Fedora advisory:

A flaw was found in the way Apache CXF verifies that XML elements were signed or encrypted by a particular Supporting Token. CXF checks to ensure these elements are signed or encrypted by a Supporting Token, but not whether the correct token is used. A remote attacker could use this flaw to transmit confidential information without the appropriate security, and potentially to circumvent access controls on web services exposed via CXF. (CVE-2012-2379)

A flaw was found in the way Apache CXF enforced child policies of WS-SecurityPolicy 1.1 on the client side. In certain circumstances, this could lead a client failing to sign or encrypt certain elements as directed by the security policy, leading to information disclosure and insecure transmission of information. (CVE-2012-2378)

Apache CXF is vulnerable to SOAPAction spoofing attacks under certain conditions. If web services are exposed via Apache CXF that use a unique SOAPAction for each service operation, then a remote attacker could perform SOAPAction spoofing to call a forbidden operation if it accepts the same parameters as an allowed operation. WS-Policy validation is performed against the operation being invoked, and an attack must pass validation to be successful. (CVE-2012-3451)

Fedora FEDORA-2012-15329 cxf 2012-10-12

Comments (none posted)

dracut: information disclosure

Package(s): dracut  CVE #(s): CVE-2012-4453
Created: October 15, 2012  Updated: December 9, 2013
Description: From the CVE entry: dracut, as used in Red Hat Enterprise Linux 6, Fedora 16 and 17, and possibly other products, creates initramfs images with world-readable permissions, which might allow local users to obtain sensitive information.
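As a concrete illustration of the class of bug, here is a small Python helper (hypothetical, not part of dracut) that checks the condition at issue: whether a file grants read access to "other" users. Running it against an initramfs image created by an unpatched dracut would be expected to return True.

```python
import os
import stat

def is_world_readable(path):
    """Return True if `path` grants read permission to 'other' users."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH)  # the o+r bit, e.g. mode 0644
```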

Scientific Linux SLSA-2013:1674-2 dracut 2013-12-09
Red Hat RHSA-2013:1674-02 dracut 2013-11-21
Oracle ELSA-2013-1674 dracut 2013-11-26
Mageia MGASA-2012-0303 dracut 2012-10-20
Fedora FEDORA-2012-14959 dracut 2012-10-13
Fedora FEDORA-2012-14953 dracut 2012-10-13

Comments (none posted)

html2ps: directory traversal

Package(s): html2ps  CVE #(s): CVE-2009-5067
Created: October 16, 2012  Updated: April 8, 2013
Description: From the Mageia advisory:

Directory traversal vulnerability in html2ps before 1.0b7 allows remote attackers to read arbitrary files via directory traversal sequences in SSI directive.

Mandriva MDVSA-2013:041 html2ps 2013-04-05
Mageia MGASA-2012-0297 html2ps 2012-10-16

Comments (none posted)

java: multiple vulnerabilities

Package(s): java  CVE #(s): CVE-2012-3216 CVE-2012-4416 CVE-2012-5068 CVE-2012-5069 CVE-2012-5071 CVE-2012-5072 CVE-2012-5073 CVE-2012-5075 CVE-2012-5077 CVE-2012-5079 CVE-2012-5081 CVE-2012-5084 CVE-2012-5085 CVE-2012-5086 CVE-2012-5089
Created: October 17, 2012  Updated: December 3, 2012
Description: From the Red Hat advisory:

Multiple improper permission check issues were discovered in the Beans, Swing, and JMX components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. (CVE-2012-5086, CVE-2012-5084, CVE-2012-5089)

Multiple improper permission check issues were discovered in the Scripting, JMX, Concurrency, Libraries, and Security components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. (CVE-2012-5068, CVE-2012-5071, CVE-2012-5069, CVE-2012-5073, CVE-2012-5072)

It was discovered that java.util.ServiceLoader could create an instance of an incompatible class while performing provider lookup. An untrusted Java application or applet could use this flaw to bypass certain Java sandbox restrictions. (CVE-2012-5079)

It was discovered that the Java Secure Socket Extension (JSSE) SSL/TLS implementation did not properly handle handshake records containing an overly large data length value. An unauthenticated, remote attacker could possibly use this flaw to cause an SSL/TLS server to terminate with an exception. (CVE-2012-5081)

It was discovered that the JMX component in OpenJDK could perform certain actions in an insecure manner. An untrusted Java application or applet could possibly use this flaw to disclose sensitive information. (CVE-2012-5075)

A bug in the Java HotSpot Virtual Machine optimization code could cause it to not perform array initialization in certain cases. An untrusted Java application or applet could use this flaw to disclose portions of the virtual machine's memory. (CVE-2012-4416)

It was discovered that the SecureRandom class did not properly protect against the creation of multiple seeders. An untrusted Java application or applet could possibly use this flaw to disclose sensitive information. (CVE-2012-5077)

It was discovered that the class exposed the hash code of the canonicalized path name. An untrusted Java application or applet could possibly use this flaw to determine certain system paths, such as the current working directory. (CVE-2012-3216)

This update disables Gopher protocol support in the package by default. Gopher support can be enabled by setting the newly introduced property, "", to true. (CVE-2012-5085)

Note: If the web browser plug-in provided by the icedtea-web package was installed, the issues exposed via Java applets could have been exploited without user interaction if a user visited a malicious website.

Gentoo 201406-32 icedtea-bin 2014-06-29
Gentoo 201401-30 oracle-jdk-bin 2014-01-26
Mageia MGASA-2012-0306 java-1.7.0-openjdk 2012-10-29
Scientific Linux SL-java-20121019 java-1.7.0-openjdk 2012-10-19
Red Hat RHSA-2012:1391-01 java-1.7.0-oracle 2012-10-18
Red Hat RHSA-2012:1392-01 java-1.6.0-sun 2012-10-18
Oracle ELSA-2012-1386 java-1.7.0-openjdk 2012-10-18
Oracle ELSA-2012-1384 java-1.6.0-openjdk 2012-10-18
Oracle ELSA-2012-1385 java-1.6.0-openjdk 2012-10-18
Scientific Linux SL-java-20121017 java-1.6.0-openjdk 2012-10-17
CentOS CESA-2012:1386 java-1.7.0-openjdk 2012-10-17
CentOS CESA-2012:1384 java-1.6.0-openjdk 2012-10-17
CentOS CESA-2012:1385 java-1.6.0-openjdk 2012-10-17
Red Hat RHSA-2012:1386-01 java-1.7.0-openjdk 2012-10-17
Red Hat RHSA-2012:1385-01 java-1.6.0-openjdk 2012-10-17
Red Hat RHSA-2012:1384-01 java-1.6.0-openjdk 2012-10-17
Red Hat RHSA-2012:1466-01 java-1.6.0-ibm 2012-11-15
openSUSE openSUSE-SU-2012:1424-1 java-1_6_0-openjdk 2012-10-31
openSUSE openSUSE-SU-2012:1423-1 java-1_6_0-openjdk 2012-10-31
Mandriva MDVSA-2012:169 java-1.6.0-openjdk 2012-11-01
SUSE SUSE-SU-2012:1398-1 OpenJDK 2012-10-24
SUSE SUSE-SU-2012:1595-1 IBM Java 1.6.0 2012-11-30
SUSE SUSE-SU-2012:1588-1 IBM Java 1.6.0 2012-11-28
SUSE SUSE-SU-2012:1489-2 IBM Java 1.7.0 2012-11-21
SUSE SUSE-SU-2012:1490-1 IBM Java 1.4.2 2012-11-16
Red Hat RHSA-2012:1465-01 java-1.5.0-ibm 2012-11-15
Red Hat RHSA-2012:1485-01 java-1.4.2-ibm 2012-11-22
SUSE SUSE-SU-2012:1489-1 IBM Java 1.5.0 2012-11-16
Red Hat RHSA-2012:1467-01 java-1.7.0-ibm 2012-11-15
openSUSE openSUSE-SU-2012:1419-1 java-1_7_0-openjdk 2012-10-31
Scientific Linux SL-java-20121030 java-1.6.0-sun 2012-10-30
Mageia MGASA-2012-0307 java-1.6.0-openjdk 2012-10-29
Mageia MGASA-2012-0308 java-1.6.0-openjdk 2012-10-29

Comments (none posted)

java: multiple vulnerabilities

Package(s): java  CVE #(s): CVE-2012-5070 CVE-2012-5074 CVE-2012-5076 CVE-2012-5087 CVE-2012-5088
Created: October 17, 2012  Updated: November 21, 2012
Description: From the Red Hat advisory:

It was discovered that the JMX component in OpenJDK could perform certain actions in an insecure manner. An untrusted Java application or applet could possibly use these flaws to disclose sensitive information. (CVE-2012-5070, CVE-2012-5075)

The default Java security properties configuration did not restrict access to certain packages. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. This update lists those packages as restricted. (CVE-2012-5076, CVE-2012-5074)

Multiple improper permission check issues were discovered in the Beans, Libraries, Swing, and JMX components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. (CVE-2012-5086, CVE-2012-5087, CVE-2012-5088, CVE-2012-5084, CVE-2012-5089)

Gentoo 201406-32 icedtea-bin 2014-06-29
Gentoo 201401-30 oracle-jdk-bin 2014-01-26
Mageia MGASA-2012-0306 java-1.7.0-openjdk 2012-10-29
Scientific Linux SL-java-20121019 java-1.7.0-openjdk 2012-10-19
Red Hat RHSA-2012:1391-01 java-1.7.0-oracle 2012-10-18
Oracle ELSA-2012-1386 java-1.7.0-openjdk 2012-10-18
CentOS CESA-2012:1386 java-1.7.0-openjdk 2012-10-17
Red Hat RHSA-2012:1386-01 java-1.7.0-openjdk 2012-10-17
SUSE SUSE-SU-2012:1398-1 OpenJDK 2012-10-24
SUSE SUSE-SU-2012:1489-2 IBM Java 1.7.0 2012-11-21
Red Hat RHSA-2012:1467-01 java-1.7.0-ibm 2012-11-15
openSUSE openSUSE-SU-2012:1419-1 java-1_7_0-openjdk 2012-10-31

Comments (none posted)

libvirt: denial of service

Package(s): libvirt  CVE #(s): CVE-2012-4423
Created: October 11, 2012  Updated: November 20, 2012

From the Red Hat advisory:

A flaw was found in libvirtd's RPC call handling. An attacker able to establish a read-only connection to libvirtd could use this flaw to crash libvirtd by sending an RPC message that has an event as the RPC number, or an RPC number that falls into a gap in the RPC dispatch table. (CVE-2012-4423)

openSUSE openSUSE-SU-2013:0274-1 libvirt 2013-02-12
Ubuntu USN-1708-1 libvirt 2013-01-29
Fedora FEDORA-2012-15640 libvirt 2012-10-17
Fedora FEDORA-2012-15634 libvirt 2012-10-15
Oracle ELSA-2012-1359 libvirt 2012-10-11
CentOS CESA-2012:1359 libvirt 2012-10-11
Scientific Linux SL-libv-20121011 libvirt 2012-10-11
Red Hat RHSA-2012:1359-01 libvirt 2012-10-11
SUSE SUSE-SU-2012:1503-1 libvirt 2012-11-19

Comments (none posted)

mozilla: code execution

Package(s): firefox, thunderbird, seamonkey, xulrunner  CVE #(s): CVE-2012-4193
Created: October 15, 2012  Updated: October 17, 2012
Description: From the Red Hat advisory:

A flaw was found in the way XULRunner handled security wrappers. A web page containing malicious content could possibly cause an application linked against XULRunner (such as Mozilla Firefox) to execute arbitrary code with the privileges of the user running the application.

openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0353 iceape 2012-12-07
SUSE SUSE-SU-2012:1351-1 Mozilla Firefox 2012-10-16
Mageia MGASA-2012-0296 thunderbird 2012-10-16
Mageia MGASA-2012-0295 firefox 2012-10-16
Scientific Linux SL-thun-20121015 thunderbird 2012-10-15
Scientific Linux SL-xulr-20121015 xulrunner 2012-10-15
openSUSE openSUSE-SU-2012:1345-1 MozillaFirefox 2012-10-15
Mandriva MDVSA-2012:167 firefox 2012-10-13
Oracle ELSA-2012-1362 thunderbird 2012-10-12
Oracle ELSA-2012-1361 xulrunner 2012-10-13
Oracle ELSA-2012-1361 xulrunner 2012-10-12
CentOS CESA-2012:1361 xulrunner 2012-10-13
CentOS CESA-2012:1362 thunderbird 2012-10-13
CentOS CESA-2012:1361 xulrunner 2012-10-12
Ubuntu USN-1611-1 thunderbird 2012-10-12
Red Hat RHSA-2012:1362-01 thunderbird 2012-10-12
Red Hat RHSA-2012:1361-01 xulrunner 2012-10-12

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s): firefox, thunderbird, seamonkey  CVE #(s): CVE-2012-4191 CVE-2012-4192
Created: October 15, 2012  Updated: October 17, 2012
Description: From the CVE entries:

The mozilla::net::FailDelayManager::Lookup function in the WebSockets implementation in Mozilla Firefox before 16.0.1, Thunderbird before 16.0.1, and SeaMonkey before 2.13.1 allows remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unspecified vectors. (CVE-2012-4191)

Mozilla Firefox 16.0, Thunderbird 16.0, and SeaMonkey 2.13 allow remote attackers to bypass the Same Origin Policy and read the properties of a Location object via a crafted web site, a related issue to CVE-2012-4193. (CVE-2012-4192)

openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0353 iceape 2012-12-07
Fedora FEDORA-2012-15985 xulrunner 2012-10-12
Fedora FEDORA-2012-15986 xulrunner 2012-10-12
Fedora FEDORA-2012-15986 firefox 2012-10-12
Fedora FEDORA-2012-15985 firefox 2012-10-12
Slackware SSA:2012-285-01 mozilla 2012-10-11
Slackware SSA:2012-285-02 mozilla 2012-10-11
Ubuntu USN-1608-1 firefox 2012-10-11
SUSE SUSE-SU-2012:1351-1 Mozilla Firefox 2012-10-16
openSUSE openSUSE-SU-2012:1345-1 MozillaFirefox 2012-10-15
Ubuntu USN-1611-1 thunderbird 2012-10-12

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s): firefox  CVE #(s): CVE-2012-3977 CVE-2012-3987
Created: October 17, 2012  Updated: October 17, 2012
Description: From the SUSE advisory:

CVE-2012-3977: Security researchers Thai Duong and Juliano Rizzo reported that SPDY's request header compression leads to information leakage, which can allow the extraction of private data such as session cookies, even over an encrypted SSL connection. (This does not affect Firefox 10 as it does not feature the SPDY extension. It was silently fixed for Firefox 15.)

CVE-2012-3987: Security researcher Warren He reported that when a page is transitioned into Reader Mode in Firefox for Android, the resulting page has chrome privileges and its content is not thoroughly sanitized. A successful attack requires user enabling of reader mode for a malicious page, which could then perform an attack similar to cross-site scripting (XSS) to gain the privileges allowed to Firefox on an Android device. This has been fixed by changing the Reader Mode page into an unprivileged page.

Gentoo 201301-01 firefox 2013-01-07
SUSE SUSE-SU-2012:1351-1 Mozilla Firefox 2012-10-16

Comments (none posted)

optipng: code execution

Package(s): optipng  CVE #(s): CVE-2012-4432
Created: October 11, 2012  Updated: April 8, 2014

From the SUSE Bugzilla entry:

A vulnerability has been reported in OptiPNG, which can be exploited by malicious people to potentially compromise a user's system.

The vulnerability is caused due to a use-after-free error related to the palette reduction functionality. No further information is currently available.

Successful exploitation may allow execution of arbitrary code.

Gentoo 201404-03 optipng 2014-04-08
openSUSE openSUSE-SU-2012:1329-1 optipng 2012-10-11

Comments (none posted)

perl-HTML-Template-Pro: cross-site scripting

Package(s): perl-HTML-Template-Pro  CVE #(s): CVE-2011-4616
Created: October 15, 2012  Updated: October 22, 2012
Description: From the CVE entry:

Cross-site scripting (XSS) vulnerability in the HTML-Template-Pro module before 0.9507 for Perl allows remote attackers to inject arbitrary web script or HTML via template parameters, related to improper handling of > (greater than) and < (less than) characters.
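The fix amounts to converting those characters into HTML entities before interpolating parameter values into a template. A Python equivalent of the missing step (illustrative only; HTML-Template-Pro is a Perl module and this is not its actual code):

```python
import html

def render_param(value):
    # Convert &, <, > (and quotes) to entities so a parameter such as
    # "<script>alert(1)</script>" renders as inert text, not markup.
    return html.escape(value)
```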

Mageia MGASA-2012-0302 perl-HTML-Template-Pro 2012-10-20
Fedora FEDORA-2012-15482 perl-HTML-Template-Pro 2012-10-14
Fedora FEDORA-2012-15490 perl-HTML-Template-Pro 2012-10-14

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s): phpmyadmin  CVE #(s):
Created: October 16, 2012  Updated: October 29, 2012
Description: From the phpMyAdmin advisories [1], [2]:

[1] Multiple XSS due to unescaped HTML output in Trigger, Procedure and Event pages. When creating/modifying a trigger, event or procedure with a crafted name, it is possible to trigger an XSS.

[2] Fetching the version information from a non-SSL site is vulnerable to a MITM attack. To display information about the current phpMyAdmin version on the main page, a piece of JavaScript is fetched from the website in non-SSL mode. A man-in-the-middle could modify this script on the wire to cause mischief.

Fedora FEDORA-2012-15725 phpMyAdmin 2012-10-28
Fedora FEDORA-2012-15754 phpMyAdmin 2012-10-28
Mageia MGASA-2012-0298 phpmyadmin 2012-10-16

Comments (none posted)

qt: CRIME attacks

Package(s): qt  CVE #(s):
Created: October 15, 2012  Updated: October 17, 2012
Description: From the qt advisory:

A security vulnerability has been discovered in the SSL/TLS protocol, which affects connections using compression.

All versions of TLS are believed to be affected. To address this, Qt will disable TLS compression by default.

If the attacker can insert data into the SSL connection, then by looking at the length of the compressed data it is possible to determine if the inserted data matches secret data or not.
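This length oracle (the basis of the CRIME attack) is easy to demonstrate with ordinary DEFLATE. A toy Python sketch, with an invented record layout and cookie value; a real attack recovers the secret byte-by-byte rather than guessing it whole:

```python
import zlib

SECRET = b"Cookie: sessionid=7f3a92c4d1"  # hypothetical secret header

def compressed_len(attacker_guess):
    # Model one request in which attacker-controlled data shares a
    # compression context with the secret, as under TLS/SPDY compression.
    record = SECRET + b"\r\nbody=" + attacker_guess
    return len(zlib.compress(record))

# A correct guess repeats the secret, so DEFLATE replaces the repeat
# with a short back-reference and the record compresses smaller.
right = compressed_len(b"sessionid=7f3a92c4d1")
wrong = compressed_len(b"sessionid=x9k2mq81zv")
```

Disabling compression, as Qt now does by default, removes the correlation between plaintext content and ciphertext length that the attack depends on.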

Fedora FEDORA-2012-15203 qt 2012-10-17
Fedora FEDORA-2012-15194 qt 2012-10-13

Comments (none posted)

roundcubemail: cross-site scripting

Package(s): roundcubemail  CVE #(s): CVE-2012-4668
Created: October 11, 2012  Updated: October 17, 2012

From the Mageia advisory:

Cross-site scripting (XSS) vulnerability in Roundcube Webmail 0.8.1 and earlier allows remote attackers to inject arbitrary web script or HTML via the signature in an email (CVE-2012-4668).

Mandriva MDVSA-2013:148 roundcubemail 2013-04-22
Mageia MGASA-2012-0292 roundcubemail 2012-10-11

Comments (none posted)

ruby: access restriction bypass

Package(s): ruby1.8  CVE #(s): CVE-2012-4481
Created: October 11, 2012  Updated: March 8, 2013

From the Ubuntu advisory:

Shugo Maeda and Vit Ondruch discovered that Ruby incorrectly allowed untainted strings to be modified in protective safe levels. An attacker could use this flaw to bypass intended access restrictions. (CVE-2012-4466, CVE-2012-4481)

Gentoo 201412-27 ruby 2014-12-13
Mandriva MDVSA-2013:124 ruby 2013-04-10
CentOS CESA-2013:0612 ruby 2013-03-09
Oracle ELSA-2013-0612 ruby 2013-03-08
Scientific Linux SL-ruby-20130307 ruby 2013-03-07
Red Hat RHSA-2013:0612-01 ruby 2013-03-07
CentOS CESA-2013:0129 ruby 2013-01-09
Oracle ELSA-2013-0129 ruby 2013-01-12
Ubuntu USN-1603-2 ruby1.8 2012-10-22
Mageia MGASA-2012-0294 ruby 2012-10-14
Ubuntu USN-1603-1 ruby1.8 2012-10-10
Scientific Linux SL-ruby-20130116 ruby 2013-01-16

Comments (none posted)

ruby: two access restriction bypass flaws

Package(s): ruby1.9.1  CVE #(s): CVE-2012-4464 CVE-2012-4466
Created: October 11, 2012  Updated: November 5, 2012

From the Ubuntu advisory:

Tyler Hicks and Shugo Maeda discovered that Ruby incorrectly allowed untainted strings to be modified in protective safe levels. An attacker could use this flaw to bypass intended access restrictions. (CVE-2012-4464, CVE-2012-4466)

Mandriva MDVSA-2013:124 ruby 2013-04-10
openSUSE openSUSE-SU-2013:0376-1 ruby19 2013-03-01
Red Hat RHSA-2013:0582-01 openshift 2013-02-28
Ubuntu USN-1603-2 ruby1.8 2012-10-22
Ubuntu USN-1614-1 ruby1.9.1 2012-10-22
Mageia MGASA-2012-0294 ruby 2012-10-14
Fedora FEDORA-2012-15507 ruby 2012-10-14
Fedora FEDORA-2012-15395 ruby 2012-10-14
Ubuntu USN-1603-1 ruby1.8 2012-10-10
Ubuntu USN-1602-1 ruby1.9.1 2012-10-10
openSUSE openSUSE-SU-2012:1443-1 ruby 2012-11-05

Comments (none posted)

wireshark: multiple vulnerabilities

Package(s): wireshark  CVE #(s):
Created: October 11, 2012  Updated: December 3, 2012

From the SUSE Bugzilla entry:

The HSRP dissector could go into an infinite loop. wnpa-sec-2012-26 CVE-2012-5237

The PPP dissector could abort. wnpa-sec-2012-27 CVE-2012-5238

Martin Wilck discovered an infinite loop in the DRDA dissector. wnpa-sec-2012-28 CVE-2012-5239 CVE-2012-3548 (see bnc#778000)

Laurent Butti discovered a buffer overflow in the LDP dissector. wnpa-sec-2012-29 CVE-2012-5240

Mageia MGASA-2012-0348 wireshark 2012-11-30
openSUSE openSUSE-SU-2012:1328-1 wireshark 2012-10-11

Comments (none posted)

Page editor: Michael Kerrisk

Kernel development

Brief items

Kernel release status

The current development kernel is 3.7-rc1, released on October 14. See the separate article, below, for a summary of the final items added during the 3.7 merge window.

Stable updates: 3.0.46, 3.4.14, 3.5.7 and 3.6.2 were released on October 12. 3.5.7 is the end of the line for updates in the 3.5 series.

Comments (none posted)

Quotes of the week

Apparently it is a bad idea to compose and send a patch while in a C++ standards committee meeting where people are arguing about async futures...
— Paul McKenney

I believe the answer is that recent vulnerabilities have led us to abandon the idea that we can trust anything in userspace, and retreat to the kernel. The concept that the kernel is more secure because we didn't include lots of crap seems to be a heretical thought.
— Rusty Russell

Long experience with file systems shows us that they are like fine wine; they take time to mature. Whether you're talking about ext2/3/4, btrfs, Sun's ZFS, Digital's ADVFS, IBM's JFS or GPFS etc., and whether you're talking about file systems developed using open source or more traditional corporate development processes, it takes a minimum of 3-5 years and 50-200 PY's of effort to create a fully production-ready file system from scratch.
— Ted Ts'o

I went to prepare a patch to fix this, and ended up finding no such problem to fix - which fits with how no such problem has been reported.
— No-such-signoff-by: Hugh Dickins

The requirement for a FIPS 140-2 module is to disable the entire module if any component of its self test or integrity test failed....

There are two solutions that were contemplated for disabling the module: having a kind of global status of the crypto API that makes it non-responsive in case of an integrity/self-test error. The other solution is to simply terminate the entire kernel. As the former one also will lead to a kernel failure eventually as many parts of the kernel depend on the crypto API, the implementation of the latter option was chosen.

— Stephan Mueller; don't try to load an unsigned module in FIPS mode

What is the proper amount of time to wait upon receiving an email containing obviously incorrect statements about Linux kernel code before sending a "you have got to be kidding" email response.

Should I just hope the sender realizes their foolishness on their own and give them N hours to rescind the statement and fix up their insane patch and resend it, thereby giving them a grace period? If so, what is the proper value for N?

Or is it fair game to let loose and channel up the Torvalds-like daemons within my keyboard, with the hope that it would actually do some good and they would learn from their mistakes?

— Greg Kroah-Hartman

Comments (10 posted)

Al Viro's new execve/kernel_thread design

Al Viro has been busily reworking the creation of kernel threads, making the code cleaner and less architecture-specific. As part of that exercise, he has posted a lengthy document on how kernel thread creation works now and the changes that he is making. Worth a read for those interested in how this part of the core kernel works. "Old implementation of kernel_thread() had been rather convoluted. In the best case, it filled struct pt_regs according to its arguments and passed them to do_fork(). The goal was to fool the code doing return from syscall into *not* leaving the kernel mode, so that newborn would have (after emptying its kernel stack) end up in a helper function with the right values in registers. Running in kernel mode."

Full Story (comments: none)

Kernel development news

3.7 merge window: conclusion and summary

By Jonathan Corbet
October 17, 2012
Linus pulled a total of 10,409 non-merge changesets into the mainline before closing the merge window for the 3.7 development cycle. That makes 3.7 one of the most active development cycles in recent history; only 3.2, with 10,214 changesets in the merge window, comes close. Clearly, there is a lot going on in the kernel development community.

Interestingly, Linus expressed some skepticism about some of this cycle's work in the 3.7-rc1 announcement. For example, the discussion on the 64-bit ARM patch set concluded some time ago, but Linus came in with a late opinion of his own:

[L]et's see how many years we'll need before the arm people do what every single other 64-bit arch has ever done: merge back with the 32-bit code. As usual, people claimed that there were tons of reasons why *this* time was different, and as usual it's almost certainly going to be BS in the end, and a few years from now we'll have big patches trying to merge it all back. But maybe it really *was* different this time. Snicker.

He also expressed some grumpiness about the user-space API header file split — an enormous set of patches that is only partially merged for 3.7. Header file cleanups, he says, are just too much pain for the benefit that results, so he will not consider any more of them in the future.

Grumbles notwithstanding, he pulled all of this work — and much more — for 3.7. The user-visible changes merged since last week's summary include:

  • Support for signed kernel modules has been merged. With this feature turned on, the kernel will refuse to load modules that have not been signed with a recognized key. Among other users, full support of UEFI secure boot requires this capability. There is also a mode where unsigned modules will still be loaded, but the kernel will be tainted in the process.

  • NFS 4.1 support is no longer considered experimental.

  • The MD RAID layer now supports TRIM ("discard") operations.

  • New hardware support includes TI LM355x and LM3642 LED controllers, Atmel AT91 two-wire interface controllers (replaced driver), and Renesas R-Car I2C controllers.

Changes visible to kernel developers include:

  • The "UAPI disintegration" patch sets have been pulled into quite a few subsystem trees, causing a lot of header file (and related) churn. A fair amount of this work was deferred to 3.8 as well, though, so this job is not yet done.

  • The kerneldoc subsystem can now output documents in the HTML5 format.

  • The kernel now has a generic cooling subsystem based on cpufreq; see Documentation/thermal/cpu-cooling-api.txt for (a few) details.

  • It's worth noting that some kernel developers have expressed grumpiness about the increase in build time caused by the addition of the signed module feature. Anybody whose work involves doing lots of fast kernel builds will probably want to turn that feature off.

At this point it is time to perform the final stabilization work on all these changes. If things go according to the usual schedule, that should result in the final 3.7 release sometime in early December.

Comments (none posted)

EPOLL_CTL_DISABLE and multithreaded applications

By Michael Kerrisk
October 17, 2012

Aside from the merging of the server-side component of TCP Fast Open, one of the few user-space API changes to go into the just-closed 3.7 merge window is the addition of a new EPOLL_CTL_DISABLE operation for the epoll_ctl() system call. This operation is an interesting illustration of the sometimes unforeseen complexities of dealing with multithreaded applications; that examination is the subject of this article. The addition of EPOLL_CTL_DISABLE also highlights some common problems in the design of the APIs that the kernel presents to user space. (To be clear: EPOLL_CTL_DISABLE is the fix to a past design problem, not a design problem itself.) Those design problems will be the subject of a follow-on article next week.

Understanding the need for EPOLL_CTL_DISABLE requires an understanding of several features of the epoll API. For those who are unfamiliar with epoll, we begin with a high-level picture of how the API works. We then look at the problem that EPOLL_CTL_DISABLE is designed to solve, and how it solves that problem.

An overview of the epoll API

The (Linux-specific) epoll API allows an application to monitor multiple file descriptors in order to determine which of the descriptors are ready to perform I/O. The API was designed as a more efficient replacement for the traditional select() and poll() system calls. Roughly speaking, the performance of those older APIs scales linearly with the number of file descriptors being monitored. That behavior makes select() and poll() poorly suited for modern network applications that may handle thousands of file descriptors simultaneously.

The poor performance of select() and poll() is an inescapable consequence of their design. For each monitoring operation, both system calls require the application to give the kernel a complete list of all of the file descriptors that are of interest. And on each call, the kernel must re-examine the state of all of those descriptors and then pass a data structure back to the application that describes the readiness of the descriptors.

The underlying problem of the older APIs is that they don't allow an application to inform the kernel about its ongoing interest in a (typically unchanging) set of file descriptors. If the kernel had that information, then, as each file descriptor became ready, it could record the fact in preparation for the next request by the application for the set of ready file descriptors. The epoll API allows exactly that approach, by splitting the monitoring API up across three system calls:

  • epoll_create() creates an internal kernel data structure ("an epoll instance") that is used to record the set of file descriptors that the application is interested in monitoring. The call returns a file descriptor that is used in the remaining epoll APIs.

  • epoll_ctl() allows the application to inform the kernel about the set of file descriptors it would like to monitor by adding (EPOLL_CTL_ADD) and removing (EPOLL_CTL_DEL) file descriptors from the interest list of the epoll instance. epoll_ctl() can also modify (EPOLL_CTL_MOD) the set of events that are to be monitored for a file descriptor that is already in the interest list. Once a file descriptor has been recorded in the interest list, the kernel tracks I/O events for the file descriptor (e.g., the arrival of new input); if the event causes the file descriptor to become ready, the kernel places the descriptor on the ready list of the epoll instance, in preparation for the next call to epoll_wait().

  • epoll_wait() requests the kernel to return one or more ready file descriptors. The kernel satisfies this request by simply fetching items from the ready list (the call can block if there are no descriptors that are yet ready). The application uses epoll_wait() each time it wants to check for changes in the readiness of file descriptors. What is notable about epoll_wait() is that the application does not need to pass in a list of file descriptors on each call: the kernel already has that information via preceding calls to epoll_ctl(). In addition, there is no need to rescan the complete set of file descriptors to see which are ready; the kernel has already been recording that information on an ongoing basis because it knows which file descriptors the application is interested in.

Schematically, the epoll API operates as shown in the following diagram:

[Diagram: overview of the epoll API]

Because the kernel is able to maintain internal state about the set of file descriptors in which the application is interested, epoll_wait() is much more efficient than select() and poll(). Roughly speaking, its performance scales according to the number of ready file descriptors, rather than the total number of file descriptors being monitored.

Epoll and multithreaded applications: the problem

The author of the patch that implements EPOLL_CTL_DISABLE, Paton Lewis, is not a regular kernel hacker. Rather, he's a developer with a particular user-space itch, and it would seem that a kernel change is the only way of scratching that itch. In the description accompanying the first iteration of his patch, Paton began with the following observation:

It is not currently possible to reliably delete epoll items when using the same epoll set from multiple threads. After calling epoll_ctl with EPOLL_CTL_DEL, another thread might still be executing code related to an event for that epoll item (in response to epoll_wait). Therefore the deleting thread does not know when it is safe to delete resources pertaining to the associated epoll item because another thread might be using those resources.

The deleting thread could wait an arbitrary amount of time after calling epoll_ctl with EPOLL_CTL_DEL and before deleting the item, but this is inefficient and could result in the destruction of resources before another thread is done handling an event returned by epoll_wait.

The fact that the kernel records internal state is the source of a complication for multithreaded applications. The complication arises from the fact that applications may also want to maintain state information about file descriptors. One possible reason for doing this is to prevent file descriptor starvation, the phenomenon that can occur when, for example, an application determines that a file descriptor has data available for reading and then attempts to read all of the available data. It could happen that there is a very large amount of data available (for example, another application may be continuously writing data on the other end of a socket connection). Consequently, the reading application would be tied up for a long period; meanwhile, it does not service I/O events on the other file descriptors—those descriptors are starved of service by the application.

The solution to file descriptor starvation is for the application to maintain a user-space data structure that caches the readiness of each of the file descriptors that it is monitoring. Whenever epoll_wait() informs the application that a file descriptor is ready, then, instead of performing as much I/O as possible on the descriptor, the application makes a record in its cache that the file descriptor is ready. The application logic then takes the form of a loop that (a) periodically calls epoll_wait() and (b) performs a limited amount of I/O on the file descriptors that are marked as ready in the user-space cache. (When the application finds that I/O is no longer possible on one of the file descriptors, then it can mark that descriptor as not ready in the cache.)

Thus, we have a scenario where both the kernel and a user-space application are maintaining state information about the same resources. This can potentially lead to race conditions when competing threads in a multithreaded application want to update state information in both places. The most fundamental piece of state information maintained in both places is "existence".

For example, suppose that an application thread determines that it is no longer necessary to monitor a file descriptor. The thread would first check to see whether the file descriptor is marked as ready in the user-space cache (i.e., there may still be some outstanding I/O to perform), and then, if the file descriptor is not ready, the thread would delete the file descriptor from the user-space cache and from the kernel's epoll interest list using the epoll_ctl(EPOLL_CTL_DEL) operation. However, these steps could fall afoul of scenarios such as the following, in which two threads operate on file descriptor 9:

  1. Thread 1 determines from the user-space cache that descriptor 9 is not ready.

  2. Thread 2 calls epoll_wait(); the call indicates descriptor 9 as ready.

  3. Thread 2 records descriptor 9 as being ready inside the user-space cache, so that I/O can later be performed.

  4. Thread 1 deletes descriptor 9 from the user-space cache.

  5. Thread 1 deletes descriptor 9 from the kernel's epoll interest list using epoll_ctl(EPOLL_CTL_DEL).

Following the above scenario, some data will be lost. Other scenarios could lead to a corrupted cache or an application crash.

No use of (per-file-descriptor) mutexes can eliminate the sorts of races described here, short of protecting the calls to epoll_wait() with a (global) mutex, which has the effect of destroying concurrency. (If one thread is blocked in an epoll_wait() call, then any other thread that tries to acquire the corresponding mutex will also block.)

Epoll and multithreaded applications: the solution

Paton's solution to this problem is to extend the epoll API with a new operation that atomically prevents other threads from receiving further indications that a file descriptor is ready, while at the same time informing the caller whether another thread has "recently" been told the file descriptor is ready. The new operation relies on some of the inner workings of the epoll API.

When adding (EPOLL_CTL_ADD) or modifying (EPOLL_CTL_MOD) a file descriptor in the interest list, the application specifies a mask of I/O events that are of interest for the descriptor. For example, the mask might include both EPOLLIN and EPOLLOUT, if the application wants to know when the file descriptor becomes either readable or writable. In addition, the kernel implicitly adds two further flags to the events mask in the interest list: EPOLLERR, which requests monitoring for error conditions, and EPOLLHUP, which requests monitoring for a "hangup" (e.g., we are monitoring the read end of a pipe, and the write end is closed). When a file descriptor becomes ready, epoll_wait() returns a mask that contains all of the requested events for which the file descriptor is ready. For example, if an application requests monitoring of the read end of a pipe using EPOLLIN and the write end of the pipe is closed, then epoll_wait() will return an events mask that includes both EPOLLIN and EPOLLHUP.

As well as the flags that can be used to monitor file descriptors for various I/O events, there are a few "operational flags"—flags that modify the semantics of the monitoring operation itself. One of these is EPOLLONESHOT. If this flag is specified in the events mask for a file descriptor, then, once the file descriptor becomes ready and is returned by a call to epoll_wait(), it is disabled from further monitoring (but remains in the interest list). If the application is interested in monitoring the file descriptor once more, then it must re-enable the file descriptor using the epoll_ctl(EPOLL_CTL_MOD) operation.

[Diagram: the per-descriptor events mask recorded in an epoll interest list, divided into operational flags and I/O event flags]

The implementation of EPOLLONESHOT relies on a trick. If this flag is set, then, when the file descriptor is returned as being ready via epoll_wait(), the kernel clears all of the "non-operational flags" (i.e., the I/O event flags) in the events mask for that file descriptor. The empty set of I/O event flags then serves as a cue to the kernel that it should no longer report this file descriptor as ready.

By now, we finally have enough details to understand Paton's extension to the epoll API—the epoll_ctl(EPOLL_CTL_DISABLE) operation—that allows multithreaded applications to avoid the kind of races described above. To successfully use this extension requires the following:

  1. The user-space cache that describes file descriptors should also include a per-descriptor "delete-when-done" flag that defaults to false but can be set true when one thread wants to inform another thread that a particular file descriptor should be deleted.

  2. All epoll_ctl() calls that add or modify file descriptors in the interest list must specify the EPOLLONESHOT flag.

  3. The epoll_ctl(EPOLL_CTL_DISABLE) operation should be used as described in a moment.

In addition, calls to epoll_ctl(EPOLL_CTL_DISABLE) and accesses to the user-space cache must be suitably protected with per-file-descriptor mutexes. We won't go into details here, but the second version of Paton's patch adds a sample application to the kernel source tree (under tools/testing/selftests/epoll/test_epoll.c) that demonstrates the principles.

The new epoll operation is employed via the following call:

    epoll_ctl(epfd, EPOLL_CTL_DISABLE, fd, NULL);

epfd is a file descriptor referring to an epoll instance. fd is the file descriptor in the interest list that is to be disabled. The semantics of this operation handle two cases:

  • One or more of the I/O event flags is set in the interest list entry for fd. This means that, since the last epoll_ctl() operation that added or modified this interest list entry, no other thread has executed an epoll_wait() call that indicated this file descriptor as being ready. In this case, the kernel clears the I/O event flags in the interest list entry, which prevents subsequent epoll_wait() calls from returning the file descriptor as being ready. The epoll_ctl(EPOLL_CTL_DISABLE) call then returns zero to the caller. At this point, the caller knows that no other thread is operating on the file descriptor, and it can thus safely delete the descriptor from the user-space cache and from the kernel interest list.

  • No I/O event flag is set in the interest list entry for fd. This means that since the last epoll_ctl() operation that added or modified this interest list entry, another thread has executed an epoll_wait() call that indicated this file descriptor as being ready. In this case, epoll_ctl(EPOLL_CTL_DISABLE) returns –1 with errno set to EBUSY. At this point, the caller knows that another thread is operating on the descriptor, so it sets the descriptor's "delete-when-done" flag in the user-space cache to indicate that the other thread should delete the file descriptor once it has finished using it.

Thus, we see that with a moderate amount of effort, and a little help from a new kernel interface, a race can be avoided when deleting file descriptors in multithreaded applications that wish to avoid file descriptor starvation.

Concluding remarks

There was relatively little comment on the first iteration of Paton's patch. The only substantive comments came from Christof Meerwald; in response to these, Paton created the second version of his patch. That version received no comments, and was incorporated into 3.7-rc1. It would be nice to think that the relative paucity of comments reflects silent agreement that Paton's approach is correct. However, one is left with the nagging feeling that in fact few people have reviewed the patch, which leaves open the question: is this the best solution to the problem?

Although EPOLL_CTL_DISABLE solves the problem, the solution is neither intuitive nor easy to use. The main reason for this is that EPOLL_CTL_DISABLE is a bolt-on hack to the epoll API that satisfies the requirement (often repeated by Linus Torvalds) that existing user-space applications must not be broken by making a kernel ABI change. Within that constraint, EPOLL_CTL_DISABLE may be the best solution to the problem. However, a better solution might well have been possible had it been incorporated during the original design of the epoll API. Next week's follow-on article will consider whether a better initial solution could have been found and also consider why it might not be possible to find a better solution within the constraints of the current API.

Finally, it's worth noting that the EPOLL_CTL_DISABLE feature is not yet cast in stone, although it will become so in about two months, when Linux 3.7 is released. In the meantime, if someone comes up with a better idea to solve the problem, then the existing approach could be modified or replaced.

Comments (19 posted)

Software interrupts and realtime

By Jonathan Corbet
October 17, 2012
The Linux kernel's software interrupt ("softirq") mechanism is a bit of a strange beast. It is an obscure holdover from the earliest days of Linux and a mechanism that few kernel developers ever deal with directly. Yet it is at the core of much of the kernel's most important processing. Occasionally softirqs make their presence known in undesired ways; it is not surprising that the kernel's frequent problem child — the realtime preemption patch set — has often run afoul of them. Recent versions of that patch set embody a new approach to the software interrupt problem that merits a look.

A softirq introduction

In the announcement for the 3.6.1-rt1 patch set, Thomas Gleixner described software interrupts this way:

First of all, it's a conglomerate of mostly unrelated jobs, which run in the context of a randomly chosen victim w/o the ability to put any control on them.

The softirq mechanism is meant to handle processing that is almost — but not quite — as important as the handling of hardware interrupts. Softirqs run at a high priority (though with an interesting exception, described below), but with hardware interrupts enabled. They thus will normally preempt any work except the response to a "real" hardware interrupt.

Once upon a time, there were 32 hardwired software interrupt vectors, one assigned to each device driver or related task. Drivers have, for the most part, been detached from software interrupts for a long time — they still use softirqs, but that access has been laundered through intermediate APIs like tasklets and timers. In current kernels there are ten softirq vectors defined; two for tasklet processing, two for networking, two for the block layer, two for timers, and one each for the scheduler and read-copy-update processing. The kernel maintains a per-CPU bitmask indicating which softirqs need processing at any given time. So, for example, when a kernel subsystem calls tasklet_schedule(), the TASKLET_SOFTIRQ bit is set on the corresponding CPU and, when softirqs are processed, the tasklet will be run.

There are two places where software interrupts can "fire" and preempt the current thread. One of them is at the end of the processing for a hardware interrupt; it is common for interrupt handlers to raise softirqs, so it makes sense (for latency and optimal cache use) to process them as soon as hardware interrupts can be re-enabled. The other possibility is anytime that kernel code re-enables softirq processing (via a call to functions like local_bh_enable() or spin_unlock_bh()). The end result is that the accumulated softirq work (which can be substantial) is executed in the context of whichever process happens to be running at the wrong time; that is the "randomly chosen victim" aspect that Thomas was talking about.

Readers who have looked at the process mix on their systems may be wondering where the ksoftirqd processes fit into the picture. These processes exist to offload softirq processing when the load gets too heavy. If the regular, inline softirq processing code loops ten times and still finds more softirqs to process (because they continue to be raised), it will wake the appropriate ksoftirqd process (there is one per CPU) and exit; that process will eventually be scheduled and pick up running softirq handlers. Ksoftirqd will also be poked if a softirq is raised outside of (hardware or software) interrupt context; that is necessary because, otherwise, an arbitrary amount of time might pass before softirqs are processed again. In older kernels, the ksoftirqd processes ran at the lowest possible priority, meaning that softirq processing was, depending on where it was being run, either the highest priority or the lowest priority work on the system. Since 2.6.23, ksoftirqd runs at normal user-level priority by default.

Softirqs in the realtime setting

On normal systems, the softirq mechanism works well enough that there has not been much motivation to change it, though, as described in "The new visibility of RCU processing," read-copy-update work has been moved into its own helper threads for the 3.7 kernel. In the realtime world, though, the concept of forcing arbitrary processes to do random work tends to be unpopular, so the realtime patches have traditionally pushed all softirq processing into separate threads, each with its own priority. That allowed, for example, the priority of network softirq handling to be raised on systems where networking needed realtime response; conversely, it could be lowered on systems where response to network events was less critical.

Starting with the 3.0 realtime patch set, though, that capability went away. It worked less well with the new approach to per-CPU data adopted then, and, as Thomas said, the per-softirq threads posed configuration problems:

It's extremely hard to get the parameters right for a RT system in general. Adding something which is obscure as soft interrupts to the system designers todo list is a bad idea.

So, in 3.0, softirq handling looked very similar to how things are done in the mainline kernel. That improved the code and increased performance on untuned systems (by eliminating the context switch to the softirq thread), but took away the ability to finely tweak things for those who were inclined to do so. And realtime developers tend to be highly inclined to do just that. The result, naturally, is that some users complained about the changes.

In response, in 3.6.1-rt1, the handling of softirqs has changed again. Now, when a thread raises a softirq, the specific interrupt in question (network receive processing, say) is remembered by the kernel. As soon as the thread exits the context where software interrupts are disabled, that one softirq (and no others) will be run. That has the effect of minimizing softirq latency (since softirqs are run as soon as possible); just as importantly, it also ties processing of softirqs to the processes that generate them. A process raising networking softirqs will not be bogged down processing some other process's timers. That keeps the work local, avoids nondeterministic behavior caused by running another process's softirqs, and causes softirq processing to naturally run with the priority of the process creating the work in the first place.

There is an exception, of course: softirqs raised in hardware interrupt context cannot be handled in this way. There is no general way to associate a hardware interrupt with a specific thread, so it is not possible to force the responsible thread to do the necessary processing. The answer in this case is to just hand those softirqs to the ksoftirqd process and be done with it.

A logical next step, hinted at by Thomas, is to move from an environment where all softirqs are disabled to one where only specific softirqs are. Most code that disables softirq handling is only concerned with one specific handler; all the others could be allowed to run as usual. Going further, he adds: "the nicest solution would be to get rid of them completely." The elimination of the softirq mechanism has been on the "todo" list for a long time, but nobody has, yet, felt the pain strongly enough to actually do that work.

The nature of the realtime patch set has often been that its users feel the pain of mainline kernel shortcomings before the rest of us do. That has caused a great many mainline fixes and improvements to come from the realtime community. Perhaps that will eventually happen again for softirqs. For the time being, though, realtime users have an improved softirq mechanism that should give the desired results without the need for difficult low-level tuning. Naturally, Thomas is looking for people to test this change and report back on how well it works with their workloads.

Comments (14 posted)

Patches and updates

Kernel trees


Build system

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Virtualization and containers

Benchmarks and bugs


Page editor: Jonathan Corbet


Whonix for anonymity

By Jake Edge
October 17, 2012

Creating a distribution for anonymity on the internet has its challenges. But it's important, especially for those living under repressive regimes. Getting the details right is clearly an overriding concern, which is why distributions of this kind tend to turn to Tor to provide that anonymity. But, Tor alone does not necessarily insulate users from disclosing personally identifiable information.

We looked at The Amnesic Incognito Live System (Tails)—a Tor-based live distribution—back in April 2011. But, regular applications or malware on a Tails system can potentially leak some information (e.g. IP address) that might be used to make a link between the user and their internet activity. The new Whonix distribution, which released an alpha version on October 9, uses virtualization to isolate the Tor gateway from the rest of the system, in part to eliminate those kinds of leaks.

Whonix is based on Debian and VirtualBox. It creates two separate virtual machines (VMs), one that runs all of the applications, and another that acts as a Tor gateway. All of the network traffic from the application VM (which is called the Whonix-Workstation) is routed through the Whonix-Gateway VM. That means the only network access available to applications is anonymized by Tor.

That setup has a number of benefits. For one, malware running on the Whonix-Workstation has no visibility into the actual configuration of the underlying system, so things like IP address, MAC address, hardware serial numbers, and the like, are all hidden. In addition, Whonix can be used in a physically isolated way, where the Workstation and Gateway run on two separate machines. It isn't only Linux that can be protected with Whonix, either, as Windows or other operating systems can be installed as the Whonix-Workstation.

The iptables rules on the workstation redirect all traffic to the gateway and disallow any local network connections. In addition, the firewall on the gateway fails "closed", disallowing any connections if Tor fails. Whonix also configures the system and various applications to reduce or eliminate information leaks. That includes using UTC for the time zone, having the same desktop resolution, color depth, and installed fonts on all installations, and setting the same virtual MAC address on all workstations. The user on Whonix is "user" and applications like GPG are configured to not leak operating system version information.

As envisioned, Whonix is a framework that is "agnostic about everything", including using alternatives for the anonymized network (e.g. JonDo, freenet), virtualization mechanism (e.g. KVM, Xen, VMWare), and host and guest operating systems (e.g. Windows, *BSD). Any of those pieces can be swapped out "with some development effort", but the developers are concentrating on the Debian/VirtualBox/Tor combination, at least currently.

Isolating applications in a single VM does not protect against all anonymity-piercing attacks. Malware can (and does) send the contents of files to remote hosts, which can, obviously, provide personally identifiable information. The Whonix documentation suggests using multiple workstation VMs, one for each type of activity. That idea is, in some ways, similar to the concept behind Qubes, another virtualization-based security-oriented operating system.

The security of Whonix is obviously dependent on its constituent parts, including the Linux kernel, VirtualBox, and Tor itself, but it also depends on how the system has been put together as well. It is perhaps not a surprise that the developer behind Whonix is pseudonymous, "adrelanos", but he or she seems keenly aware that vetting of Whonix is required before users can potentially put their lives at risk by using it. The release announcement says: "I hope skilled people look into the concept and implementation and fail to find anonymity related bugs." As with most (all?) projects, Whonix is also looking for more developers to work on it.

The project does come with an extensive Security document that covers the technology behind Whonix, its advantages and disadvantages, threat model, best practices, and so on. It also has an in-depth comparison of Whonix with Tails and the Tor Browser Bundle, which is a browser configured to use Tor and to avoid leaking identifiable information. Whonix is an ambitious project that overlaps with Tails to some extent (though there is an extensive justification for having separate projects), but the projects do collaborate, which bodes well for both.

Comments (none posted)

Brief items

Distribution quotes of the week

When, as in the case of node.js, upstream is antisocial and has an overinflated sense of self-importance, it's perfectly appropriate for Debian to work contrary to their design. Our job is not to make upstreams happy, it's to make our *users* happy; and while being good Free Software citizens means we try to respect the wishes of upstreams as well, there are exceptions.
-- Steve Langasek

I use RMS as a guide in the same way that a boat captain would use a lighthouse. It's good to know where it is, but you generally don't want to find yourself in the same spot.
-- Tollef Fog Heen (Thanks to Chris Cunningham)

For F19 I plan to submit a feature asking for not installing syslog by default anymore. I wonder how far I'll get with this before this is shut down by the conservatives... ;-)
-- Lennart Poettering

Note: I don't want to smash down the discussion with a lame "show me the code" argument. But I do want to avoid the impression that "we're unable to decide" when in fact, in this case, we are and we did. But that's, unfortunately, not enough to make appear out of thin air the code implementing the decision.
-- Stefano Zacchiroli

Comments (13 posted)

NetBSD 6.0

The NetBSD Project has announced NetBSD 6.0. "Changes from the previous release include scalability improvements on multi-core systems, many new and updated device drivers, Xen and MIPS port improvements, and brand new features such as a new packet filter."

Full Story (comments: none)

OpenELEC 2.0 the embedded XBMC distribution released

OpenELEC is an embedded Linux distribution that aims to let people use their Home Theatre PC in the same manner as any other device attached to their TV. "OpenELEC 2.0 is the first stable Distribution ever, that includes direct XVBA (X-Video Bitstream Acceleration) support for XBMC. The advantages introduced by this implementation are enormous. It is now possible on AMD Systems with integrated UVD (Unified Video Decoder) to playback every H.264 and VC-1 encoded content directly. This reduces CPU usage drastically."

Full Story (comments: none)

Distribution News


openSUSE 11.4 EOL

openSUSE 11.4 will no longer be maintained after November 5, 2012. "However, the community Evergreen team plans to provide ongoing maintenance for openSUSE 11.4. More details on this will be published when they are known."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Fedora is retiring Smolt hardware census (The H)

The H reports on Fedora's plan to retire the smolt hardware census on November 7. "A page on the Fedora wiki dealing with the program's retirement lists several reasons for the decision. It seems that the information collected from the program was not as useful as the developers had hoped. Since the data resulted from an opt-in process, it was always skewed and could not be used to generalise about the distribution's install base. Added to this was the fact that the software had not been maintained for a while and does not work on RHEL 6. It is clear, from the wiki, that the Fedora development team have decided to change their approach to collecting data about their install base."

Comments (10 posted)

Mandriva Foundation gets a name (The H)

The H reports that the foundation governing Mandriva's community distribution will be called OpenMandriva. The name of the distribution will be chosen by the community once the OpenMandriva foundation has been set up and formally takes over from Mandriva SA.

Comments (none posted)

The story of Nokia MeeGo (TaskuMuro)

TaskuMuro has a lengthy history of Maemo and MeeGo, translated from the Finnish version at Muropaketti. It looks at the various devices Nokia created, starting with the N770 in 2005 and continuing through the concept devices that were under development up until Nokia pulled the plug on MeeGo. "The Harmattan UI was originally based on the Activity Theory principle, a frame of reference for studying human behavior and development processes. The goal is to understand society, personality and, most importantly, how these two are connected. The theory was originally developed by the Russian psychologist Vygotsky. [...] The aim was to utilize information on how people combine tasks and communicate with each other, and thus support these ways of working instead of forcing people to adopt technology-based working models. The system would adapt to the way the user interacts with it, to ensure reciprocated interaction." (Thanks to Jussi Saarinen.)

Comments (76 posted)

Page editor: Rebecca Sobol


Accessibility and the open desktop: everyone gains by it

October 17, 2012

This article was contributed by S. Massy

During the first few years of the 21st century, there was a great deal of discussion concerning the state of readiness of GNU/Linux for the mainstream desktop and how it could be furthered. An article, published in LWN almost a decade ago, is typical of the period. Today, with Linux happily ticking away on many end-user desktops and in many schools and libraries, one can no longer doubt that, though world domination may yet be a long way off, presence certainly has been achieved.

This achievement might well in turn lead to even greater recognition and could conceivably take the open desktop from the average user's computer to deployment in large, non-technically oriented corporations and governmental institutions. However, such a possibility brings up a question which may very likely have eluded many key players in the various free desktop communities: Are the various environments on offer as accessible as they are appealing and functional?

In the computing world, accessibility generally refers to the concept of allowing as wide a range of users to interact with a system as possible, either through initial design or through hardware or software palliatives, generally referred to as assistive technology. The special needs of users can vary greatly, but, in general, can be categorized as physical, perceptual, and cognitive. It follows that, in order for a system to be accessible, it must be capable of adapting and catering to such needs, which might imply as simple a feature as the ability to customize the blinking rate of a cursor in order to avoid triggering epileptic seizures or as complex as offering a fully voice-operated system to provide a working environment to a blind person also lacking the use of her hands.

Creating a system capable of accommodating even a subset of users requiring accessibility features is therefore a vast undertaking. This, however, may well be a task which the free software community needs to address seriously, not merely for the good of its users, but to ensure its credibility as a viable alternative to mainstream, proprietary platforms as well.

The legal angle

In the early 1990s, many governments introduced legislation seeking to protect the rights of people living with disabilities; the "Americans with Disabilities Act" (ADA) in the US and the "Disability Discrimination Act" (DDA, since replaced by the "Equality Act 2010") in the UK are typical outcomes of these efforts. One of the important issues these laws were trying to address was that of discrimination in the workplace and the right to equal employment opportunities for disabled people: the wording of Titles I and IV of the ADA, for instance, reflects such an attempt.

These efforts were a vast step forward, but they also came at a time when the workplace was about to be drastically transformed by the rise of the internet and the desktop computer. The laws enacted still implicitly required employers to provide accommodations to their workers in the fulfillment of their duties, regardless of whether those duties necessitated the use of a computer or not. Sadly, this was not made very clear to either employer or employee and could depend upon one's interpretation of the text: see, for example, this article discussing the relevance of the DDA to networking and computing in the British workplace and some of the areas left undefined.

Clearly then, legislation needed to catch up with this new situation and specifically address the requirement for accessible computing and information in a working environment. Perhaps the first to react to that conclusion was the US Congress, which responded by adopting the Section 508 amendment to the "Rehabilitation Act" in 1998; it essentially requires governmental agencies to provide accessible electronic environments to their employees and offers guidelines to that effect. This amendment, directly or coincidentally, seems to have set the tone for similar policies and laws enacted or amended in the 21st century.

Take, for instance, the Canadian federal government's "Policy on the Provision of Accommodation for Employees with Disabilities", which seems to be an echo of Section 508, or Germany's far more extensive "Behindertengleichstellungsgesetz", which makes "barrier-free information" one of its key concerns. More recently, in the Canadian province of Ontario, the government adopted the "Accessibility for Ontarians with Disabilities Act". This is an interesting piece of legislation insofar as it devotes a great deal of attention to equal access to information and communications in private and public employment as well as education, and refers specifically to software and self-service kiosks. Such laws, whether in North America or Europe, whether they apply solely to the public sector or to all organizations, do form a clear trend, and it is to be expected that more and more legislatures will follow suit, either at the local or national level.

The legal obligation for an employer to provide an accessible platform impacts all software, free or not, of course; unfortunately, free software platforms like GNU/Linux face an inherent disadvantage with regard to accessibility: the lack of third-party proprietary solutions. None of the mainstream commercial platforms offered much stock accessibility in the beginning (some still do not), but they all almost immediately benefited from third-party offerings that bridged these gaps.

As we now know from experience, proprietary software on open desktops is scarce; commercial developers are difficult to entice and their reception by the community, should they take the plunge, can be rather mixed. The practical upshot of this is that it is highly unlikely that any assistive technology software vendor will step forward to fill the accessibility gaps on the open desktop. That leaves the responsibility to the community and associated commercial interests. Failure to provide adequate and easily integrated accessibility, however, could very well one day lead to a disaster scenario. An early convert to the free desktop could be fined or forced to provide a more accessible, commercial platform, thereby seriously undermining the credibility of free software as a worthwhile alternative in the workplace.

Where we stand

The next logical question is, "How are we doing and how far do we still need to go to achieve standards-compliant accessibility on the open desktop?" In some areas, the progress has been very positive, whereas others seem to be experiencing difficulty coalescing into a meaningful movement.

Visual accessibility on the Linux console has been adequate for over a decade, with long-standing projects such as Speakup, Emacspeak, and BRLTTY providing advanced screen-reviewing functionality through braille or speech. Reasonably good text-to-speech (TTS) processing has also been available for some time through free software synthesizers such as Festival and eSpeak. This means that, when the time came to develop the Orca screen reader for the GNOME desktop, well-tested output mechanisms already existed and could easily be integrated, allowing developers to focus on interface-related accessibility.

Orca itself has been gaining in stability and functionality steadily over the last few years, making critical applications, like Firefox and the LibreOffice suite, functionally available to the blind and visually impaired. Recently, as part of an accessibility push in GNOME, many bugs and shortcomings of Orca have received some attention, and the underlying accessibility framework and libraries it employs, ATK/AT-SPI, have become fully integrated in GNOME 3.6, becoming formal dependencies. This is very positive because, in the words of Joanmarie Diggs, the main Orca developer:

As a result, leaks, performance issues, and crashes that used to be "just our problem" are now everyone's problem. And many people who are not "accessibility developers" are starting to pitch in and fix accessibility bugs.

GNOME is not the only desktop environment accessible through Orca; there have been some efforts in other quarters, with the inclusion of preliminary accessibility support through AT-SPI in version 4.10 of Xfce4 and the early development of a Qt AT-SPI bridge for KDE.

This is all very good news for visual accessibility, but weak areas remain. There is no accessible PDF reader for the open desktop, for instance, and the accuracy of optical character recognition (OCR) software is improving at a very sedate pace. Yet these would be crucial applications to a visually impaired person in virtually any modern working environment.

There also have been recent examples of decisions by distributors which can affect out-of-the-box accessibility in a negative manner, such as the likely decision by Debian to make Xfce4 its default desktop environment in Wheezy, or the announcement by the Ubuntu team that the historically more accessible Unity 2D desktop will no longer be distributed as of release 12.10. The bulk of the recent accessibility improvements to Xfce4 were introduced in version 4.10; however, Wheezy will be shipping the older and virtually inaccessible 4.8.1 release. As for Unity 3D, Luke Yelavich, an Ubuntu accessibility developer, made it clear that he does not expect it to be as accessible as Unity 2D until the next LTS release. While a more accessible environment can usually be installed with reasonable ease, such decisions could result in a poor first impression for an inexperienced user and a wrong assessment of the level of accessibility available on the platform.

Such complaints are minor, however, when comparing the state of visual accessibility with that of physical palliatives; here, the results of GNOME's accessibility efforts seem to be rather mixed. Components key to accessibility for physically disabled people, such as the Dasher predictive text input engine or the Gnome-voice-control application do not seem to have undergone significant development, or indeed a release, in over a year and appear to be stalled just at the brink of basic usability.

This stall leads to a difficult situation, because the very people needed to test the software and provide feedback and bug reports cannot quite use it without significant help, thus placing a barrier on further development. If that barrier can be overcome, physical accessibility efforts will hopefully pull together and achieve the same kind of momentum seen with GNOME, Orca, and various related projects.

In the meantime, many interesting projects are still being developed, and sponsorship can help nudge them in the right direction. The Sphinx speech recognition project was part of the Google Summer of Code this year, for example, while the Opengazer gaze tracking project received some support from AEGIS, which also sponsored accessibility improvements in WebKitGtk+. Such support not only benefits projects directly, but also gives them visibility, which can help attract potential users and contributors, thus building a stronger community.

The matter of accessibility is by no means the only stumbling block on the road to a wider adoption of the open desktop; however, with ever more stringent laws regarding accessibility in the workplace and an aging population likely to require an increasing level of accessibility from public services, it certainly is not an issue likely to fade away of its own accord. It may well be that solid, out-of-the-box universal accessibility is not something which can be achieved by one FOSS project, but requires a greater level of collaboration and concerted vision across all the projects and sub-communities which make up the open desktop as we know it.

Comments (10 posted)

Brief items

Quotes of the week

If you want to pick a fight, insult a designer by asking why we don’t “just learn to code.”
-- Crystal Beasley

If we are at a point where we are fighting for a proposal to get or be defeated by a 50/50 vote, then we have lost focus on the far more important issue: being the Samba Team.
-- Andrew Bartlett

Comments (8 posted)

Plasma Active Three released

The third release of the Plasma Active "device-independent mobile user experience" system is available from the KDE Project. It includes a lot of improvements, new features, and some new applications (including a file manager inevitably called "Files"). "Okular Active is Plasma Active's new Ebook Reader. Okular Active is built on the technology which also drives the desktop version of the popular Document Viewer, and is optimized for reading documents on a touch device."

For more information, see this post from Aaron Seigo. "Unlike traditional file managers, Files doesn't directly expose the file system. We see that as an implementation detail like 'which kernel drivers are loaded.' Yes, it's needed for the device to function, but the person using the device shouldn't have to care. Instead, Files promotes meaning and content. On starting Files, you select what you wish to view such as documents, images, music, videos, etc."

Comments (131 posted)

Wayland and Weston 0.99.0 snapshots released

Version 0.99 of the Wayland protocol and Weston compositor implementation have been released. "We now have responsible error handling, we have a well-defined atomic update mechanism and event dispatching is thread safe. I've been very happy to see the effort and the amount of patches on the list recently, and without that we couldn't have wrapped up the protocol and client API changes in time." The 1.0 release is expected before the end of the month.

Full Story (comments: 58)

Gnutls 3.1.3 released

Version 3.1.3 of gnutls is available. Improvements include support for DANE, a protocol used to verify certificate validity with DNSSEC, and the OCSP Certificate Status extension.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Zemlin: The Next Battleground for Open vs. Closed? Your Car (Wired)

Wired features an editorial from Linux Foundation Executive Director Jim Zemlin, who writes about the emerging competition in the automotive software platform market. "As automakers get into the computing business, the biggest hurdle they have to overcome isn’t each other – it’s consumer expectations driven by the rise of ubiquitous mobile computing. This is where I’d argue the battle between open and closed is going to play out the hardest in coming years … the next OS wars." As one would expect, Zemlin highlights the benefits of openness, and Linux in particular.

Comments (none posted)

Schaller: The long journey towards good free video conferencing

Christian Schaller writes about why we don't have good video calling yet and what is being done to get there. "In addition to the nitty gritty of protocols and codecs there are other pieces that have been lacking to give users a really good experience. The most critical one is good echo cancellation. This is required in order to avoid having an ugly echo effect when trying to use your laptop's built-in speakers and microphone for a call. So people have been forced to use a headset to make things work reasonably well."

Comments (97 posted)

Vignatti: the damn small Wayland API

On his blog, Tiago Vignatti compares the soon-to-reach-1.0 Wayland API against the existing X API, which is about 15 times larger. "Although X and Wayland’s intention are both to sit between the applications and the kernel graphics layers, a direct comparison of those two systems is not fair on most of the cases; while X encompasses Wayland in numerous of features, Wayland has a few other advantages" — chief among them, of course, is a much simpler API.

Comments (none posted)

Page editor: Nathan Willis


Brief items

FSF: Free media-sharing system picking up steam

The Free Software Foundation is looking for donations to support the MediaGoblin project. "FSF member Chris Webber started the GNU MediaGoblin project. He's leading a community team to write a next-generation social web system where users will share their experiences through photos, videos and audio, all without running proprietary software or centralizing personal data in the hands of a corporation. Right now MediaGoblin is partially developed, but the team needs financial support so that they can quit their day jobs for a year and perfect MediaGoblin's features to a professional level."

Full Story (comments: 2)

FSF: Save the Web from software patents

The Free Software Foundation would like your help in ending software patents. "PersonalWeb's software patent suit against Github and others threatens the freedom of the Web. In order to make sure that the Web can remain a free and accessible space for everyone, we need to rid ourselves of all the patents that threaten its viability."

Full Story (comments: none)

FSFLA: Access to the Source Code of Imposed Tax Software

Alexandre Oliva and the FSF Latin America are campaigning for the release of the source code for the software used by the Brazilian public administration office in charge of federal taxes, Receita Federal do Brasil (RFB). "Since 2008, RFB has been subject to federal regulations that require the product of software development contracts to be published in the Brazilian Public [Free] Software Portal, licensed under the GNU GPL. Their contract with SERPRO (the Federal Data Processing Service), to develop several programs that RFB publishes on its web site for taxpayers to fill in and submit tax returns and other forms, should comply with the obligations established in this regulation, but RFB prefers to pretend the regulation “does not apply to these programs, because they do not meet the requirements to be published in the Portal,” as if their refusal to meet the requirements excused the non-compliance with the obligations."

Full Story (comments: none)

Articles of interest

Portuguese Vieira do Minho profits from a decade of open source (EC Joinup)

The European Commission Joinup blog has an open source success story from the administration of Vieira do Minho, a municipality in the north of Portugal. "The municipality has been using open source on its servers for years. Its database management system is Postgres, on top of which the IT department built many Geographic Information Systems. Web, email, file and print services are all provided using the Debian open source distribution. And for telephony the municipality relies on Asterisk. In March this year, the council decided it was time to use open source not only for its servers but also for its desktop computers." (Thanks to Paul Wise.)

Comments (none posted)

Calls for Presentations

CFP: SciPy India 2012

SciPy India will take place December 27-29 in IIT Bombay, India. The call for proposals ends November 1, 2012. "We look forward to your submissions on the use of Python for Scientific Computing and Education. This includes pedagogy, exploration, modeling and analysis from both applied and developmental perspectives. We welcome contributions from academia as well as industry."

Full Story (comments: none)

Upcoming Events

openSUSE Conference: Last Interviews and Announcing Live Video Streaming

In anticipation of the openSUSE conference, October 20-23 in Prague, there have been several video interviews with the speakers. This announcement includes the last of the interviews and the news that live video streaming will be available during the conference.

Comments (none posted)

Announcing openSUSE Conference 2012 Sponsors

The sponsors for the openSUSE conference, October 20-23 in Prague, have been announced. "No surprises here, SUSE, as the main sponsor of the openSUSE Project, is supporting the conference."

Comments (none posted)

Columbus Python Workshop for women and their friends, Jan 18-19

The Columbus Python Workshop will take place January 18-19, 2013 in Columbus, Ohio. "The Columbus Python Workshop for women and their friends is a free hands-on introduction to computer programming that's fun, accessible, and practical even to those who've never programmed at all before. We empower women of all ages and backgrounds to learn programming in a beginner-friendly environment."

Full Story (comments: none)

Events: October 18, 2012 to December 17, 2012

The following event listing is taken from the Calendar.

October 15 - October 18: Linux Driver Verification Workshop, Amirandes, Heraklion, Crete
October 15 - October 18: OpenStack Summit, San Diego, CA, USA
October 17 - October 19: LibreOffice Conference, Berlin, Germany
October 17 - October 19: MonkeySpace, Boston, MA, USA
October 18 - October 20: 14th Real Time Linux Workshop, Chapel Hill, NC, USA
October 20 - October 21: Gentoo miniconf, Prague, Czech Republic
October 20 - October 21: PyCon Ukraine 2012, Kyiv, Ukraine
October 20 - October 21: PyCarolinas 2012, Chapel Hill, NC, USA
October 20 - October 21: LinuxDays, Prague, Czech Republic
October 20 - October 23: openSUSE Conference 2012, Prague, Czech Republic
October 22 - October 23: PyCon Finland 2012, Espoo, Finland
October 23 - October 26: PostgreSQL Conference Europe, Prague, Czech Republic
October 23 - October 25: Dommeldange, Luxembourg
October 25 - October 26: Droidcon London, London, UK
October 26 - October 27: Firebird Conference 2012, Luxembourg, Luxembourg
October 26 - October 28: PyData NYC 2012, New York City, NY, USA
October 27: Central PA Open Source Conference, Harrisburg, PA, USA
October 27: pyArkansas 2012, Conway, AR, USA
October 27: Linux Day 2012, Hundreds of cities, Italy
October 27 - October 28: Technical Dutch Open Source Event, Eindhoven, Netherlands
October 29 - November 2: Linaro Connect, Copenhagen, Denmark
October 29 - November 1: Ubuntu Developer Summit - R, Copenhagen, Denmark
October 29 - November 3: PyCon DE 2012, Leipzig, Germany
October 30: Ubuntu Enterprise Summit, Copenhagen, Denmark
November 3 - November 4: OpenFest 2012, Sofia, Bulgaria
November 3 - November 4: MeetBSD California 2012, Sunnyvale, California, USA
November 5 - November 8: ApacheCon Europe 2012, Sinsheim, Germany
November 5 - November 7: Embedded Linux Conference Europe, Barcelona, Spain
November 5 - November 7: LinuxCon Europe, Barcelona, Spain
November 5 - November 9: Apache OpenOffice Conference-Within-a-Conference, Sinsheim, Germany
November 7 - November 8: LLVM Developers' Meeting, San Jose, CA, USA
November 7 - November 9: KVM Forum and oVirt Workshop Europe 2012, Barcelona, Spain
November 8: NLUUG Fall Conference 2012, ReeHorst in Ede, Netherlands
November 9 - November 11: Free Society Conference and Nordic Summit, Göteborg, Sweden
November 9 - November 11: Mozilla Festival, London, England
November 9 - November 11: Python Conference - Canada, Toronto, ON, Canada
November 10 - November 16: SC12, Salt Lake City, UT, USA
November 12 - November 16: 19th Annual Tcl/Tk Conference, Chicago, IL, USA
November 12 - November 14: Qt Developers Days, Berlin, Germany
November 12 - November 17: PyCon Argentina 2012, Buenos Aires, Argentina
November 16 - November 19: Linux Color Management Hackfest 2012, Brno, Czech Republic
November 16: PyHPC 2012, Salt Lake City, UT, USA
November 20 - November 24: 8th Brazilian Python Conference, Rio de Janeiro, Brazil
November 24 - November 25: Mini Debian Conference in Paris, Paris, France
November 24: London Perl Workshop 2012, London, UK
November 26 - November 28: Computer Art Congress 3, Paris, France
November 29 - November 30: Lua Workshop 2012, Reston, VA, USA
November 29 - December 1: FOSS.IN/2012, Bangalore, India
November 30 - December 2: CloudStack Collaboration Conference, Las Vegas, NV, USA
November 30 - December 2: Open Hard- and Software Workshop 2012, Garching bei München, Germany
December 1 - December 2: Konferensi BlankOn #4, Bogor, Indonesia
December 2: Foswiki Association General Assembly, online and Dublin, Ireland
December 5 - December 7: Qt Developers Days 2012 North America, Santa Clara, CA, USA
December 5 - December 7: Open Source Developers Conference Sydney 2012, Sydney, Australia
December 5: 4th UK Manycore Computing Conference, Bristol, UK
December 7 - December 9: CISSE 12, Everywhere, Internet
December 9 - December 14: 26th Large Installation System Administration Conference, San Diego, CA, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds