Weekly Edition for January 30, 2014

GCC, LLVM, and compiler plugins

By Nathan Willis
January 29, 2014

GCC, the GNU Compiler Collection, is a cornerstone of the GNU project and the larger free-software community that has grown up around it. Recently a debate sprang up on the GCC mailing list over the question of whether GCC ought to deliberately adopt a development approach more like that of rival compiler LLVM. Precisely which aspects of LLVM's approach were desirable for adoption depended on whom one asked, but the main argument was that LLVM seems to be attracting more users. The GCC project, however, contends that LLVM's perceived popularity is due largely to its accommodation of proprietary extensions—which is something that GCC supporters consider decidedly contrary to their core objectives.

Few would consider it a stretch to say that GCC has grown to dominate the compiler landscape over the course of its 26-plus years as a project, to the point where its popularity extends beyond free-software circles, and it is even routinely used to compile proprietary code—a scenario that is ripe for bringing interests other than software freedom into development discussions. In recent years, though, newer compiler projects have made names for themselves, LLVM being the most famous at present.

LLVM started off as a research project at the University of Illinois at Urbana–Champaign; befitting the needs of researchers, it has been developed in a modular form that allows easy replacement of individual components: front ends, code generators, optimizers, and so on. It was also released under the University of Illinois Open Source License, which is a permissive, BSD-style license that requires copyright attribution but permits the code to be used in proprietary applications. And indeed there are multiple proprietary compilers and utilities built on top of LLVM—including Apple's Xcode and NVIDIA's CUDA compiler.

For whom the bell clangs

The present debate was touched off by a comment from Dmitry Gutov on the emacs-devel list on January 19. In the midst of a discussion about Emacs's company-mode auto-completion framework (specifically, a minor mode designed to work with the LLVM front end Clang), Gutov said that he had heard Richard Stallman disliked Clang so much that he would oppose importing any code from it into Emacs. That spawned a lengthy thread about Clang/LLVM and GCC's differing approaches to software freedom—in particular, about the merits of copyleft licensing.

The various sides of that debate have all been argued many times in the past, of course, and this incarnation of the topic did not break any real new ground. Stallman's position, in short, is that Emacs and other official GNU applications should not incorporate default features designed to work with non-GNU projects (like LLVM) that do not work with the corresponding GNU project (GCC).

The debate was revived on January 21, however, when Eric S. Raymond contended that the very existence of Clang/LLVM was a response to the GCC project's resistance to broad interoperability. The Clang developers, he said, "want to build IDEs and other tools that share the compiler's code. GCC policy won't let them do that. Ergo, GCC must be kicked aside." Raymond also argued that LLVM bests GCC on several technical measures, a claim that was rapidly met with disagreement and, for the most part, dropped.

But the original point remained the subject of the resulting list thread (and several overlapping threads, some of which were cross-posted to the GCC development list). Essentially, Raymond argued that LLVM's modularity is a strong selling point among users that GCC cannot match. Whatever one thinks of the relative quality of LLVM and GCC, LLVM is clearly a capable enough compiler that GCC should be worried about losing market share (so to speak) to it among users. The solution, Raymond said, was for GCC developers to abandon their monolithic architecture and adopt a modular compiler design like LLVM's.

The mod squad

Modularity, though, is not a simple design question where compilers are concerned. As several in the discussion pointed out, it is actually LLVM's modularity combined with its non-copyleft license that makes it attractive to many proprietary software companies. Apple, NVIDIA, and other downstream LLVM users build their proprietary IDEs on LLVM—at least partly—because they do not want to be bound by the GPL's copyleft requirements. As David Edelsohn said, "The issue for companies is control: which compiler allows them better access / control over the toolchain control point of the ecosystem and which allows them to create a walled garden to improve their margins and their business model."

In essence, asking GCC to be "more like LLVM" is not a matter of modularity alone, but of making it possible to use GCC with proprietary add-ons or extensions. And although (as Joseph Myers pointed out) GCC is more modular than it used to be (with more such refactoring work on the to-do list), GCC's developers have actively resisted allowing it to be modularized in a way that would allow GCC's internal representation of a program to be read and used by proprietary compilation tools or optimizers. Stallman even reiterated this point in the current thread. GCC's present plugin system debuted with GCC 4.5.0 in 2010, but the project was keen to ensure that it does not offer a "black box" API to its internal representation.

Still, it has never been possible to fully prevent others from hacking together their own plugins, which in turn has led to some unusual licensing moves by the project. Early on, the GCC team made the decision that GCC should be able to compile proprietary code (a choice which was certainly not a given, since that could be construed as "promoting" proprietary software). The question came up due to the different nature of compilers when compared to "regular" programs. Certainly nothing in the GPL itself prevents using GPL-licensed software to work on proprietary software.

But GCC adds its own runtime library to the object code that it produces, which creates the first dilemma. Strictly speaking, this linking of the GCC runtime library would make any program compiled by GCC a derivative of GCC, which in turn means it would inherit GCC's GPL license. The solution to that problem was the GCC Runtime Library Exception, which explicitly exempts this situation and allows GCC-compiled programs to be released under whatever license the program authors would otherwise choose.

In 2009, the GCC project released an updated version of the exception designed specifically to address the second problem—what to do about proprietary GCC plugins. The exception discourages them by explicitly allowing users to distribute a GCC-generated binary under a proprietary license only if no proprietary plugins were used to compile the binary.

Thus, the GCC Runtime Library Exception leverages the GPL to discourage the use of GCC in conjunction with proprietary add-ons. But there are not that many GCC plugins to begin with, since, as Ian Lance Taylor explained, the plugin interface "simply exposes GCC internals, and as such is not stable across releases." Others have argued that the unstable interface acts as an effective deterrent to those attempting to maintain out-of-tree patch sets as well, though it has the unfortunate side effect of making it difficult to write free GCC plugins, too.
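For reference, the plugin interface in question is a small C API, and its tie to licensing is concrete: GCC checks at load time for a symbol named plugin_is_GPL_compatible and refuses to load any plugin that lacks it. A minimal sketch following GCC's documented plugin conventions (illustrative only, not any real plugin):

```c
/* Minimal GCC plugin sketch, built against GCC's own headers, e.g.:
 *   g++ -shared -fPIC -I`gcc -print-file-name=plugin`/include plugin.c
 * The names below follow GCC's documented plugin API. */
#include "gcc-plugin.h"
#include "plugin-version.h"

/* GCC looks for this symbol when loading a plugin and refuses to load
 * it if the symbol is absent -- the license assertion discussed above. */
int plugin_is_GPL_compatible;

static void on_finish(void *gcc_data, void *user_data)
{
    /* Called once when compilation ends; real plugins register for
     * events such as PLUGIN_PASS_MANAGER_SETUP to inspect internals. */
}

int plugin_init(struct plugin_name_args *info,
                struct plugin_gcc_version *version)
{
    /* This version check is why plugins break across releases: the
     * interface simply exposes internals, as Taylor noted. */
    if (!plugin_default_version_check(version, &gcc_version))
        return 1;
    register_callback(info->base_name, PLUGIN_FINISH, on_finish, NULL);
    return 0;
}
```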

Fight club

Whatever one thinks about the peculiar requirements that the runtime library exception sets up, it is undeniable that the GCC team is not interested in supporting the permissive approach taken by LLVM. Stallman made this point when he replied that allowing non-free plugins "would be throwing in the towel. If that enables GCC to 'win', the victory would be hollow, because it would not be a victory for what really matters: users' freedom."

Then again, there is considerable disagreement over what "winning" really means for GCC, and even over whether LLVM and other compilers are in a competition with GCC. Raymond and others pushing for a more LLVM-like approach argue that LLVM will attract more users than GCC. That is certainly a simple metric, but to many it is not a relevant one. Jonathan Wakely commented that Raymond thinks he is "helping us win a war with Clang. But there is no war. There's room for two high-quality open source compilers. They will distinguish themselves in different ways, one of which is licensing."

Perhaps the more extreme viewpoint would be that GCC faces an existential threat from LLVM. Along those lines was Daniel Colascione's comment that "Free software is great, but if nobody uses it, the entire enterprise is futile, sad, and ultimately irrelevant." To that, Stallman responded, "Free software is equally futile, sad, and irrelevant if everyone uses it as a base for nonfree software."

Ultimately, all parties to the debate said that they are interested in seeing the number of GCC users increase. The root of the "competition" disagreement tended to come down to whether the GCC team's plugin policy amounts to a barrier to attracting new users. Helmut Eller called it a "policy of introducing artificial technical hurdles to prevent some nonfree programs" that caused collateral damage. David Kastrup replied that "the whole point of the GPL is to introduce an 'artificial hurdle' preventing turning code into proprietary programs," and, significantly, it is a hurdle that works, as the absence of proprietary compilers built on top of GCC demonstrates.

There will likely always be people for whom the proprietary derivatives of a permissively-licensed program like LLVM are a problem, and there will be others who do not care about them, so long as free software exists, too. Similarly, to some people, if LLVM attracts more users than GCC, it must reveal that GCC is flawed, while to others the relative numbers are meaningless.

On the more concrete questions, however, the answers are considerably less nebulous. Stallman and the GCC team have made their position clear about adopting a lax attitude toward proprietary plugins in GCC: it is a non-starter. Whether that stance amounts to a fundamental flaw in GCC itself depends on the individual—but do not expect GCC to change its tune simply because LLVM has attracted its own crowd of users.

Comments (60 posted)

An update on Rockstar v. Google

January 27, 2014

This article was contributed by Adam Saunders

In October 2013, Rockstar, a non-practicing patent-holding entity (or "patent troll"), launched multiple lawsuits against Android OEMs and Google alleging multiple counts of patent infringement. Over the last month, sparks have been flying in the fight between Rockstar and Google. We looked at the patents back in November. Now Google has responded with a motion for declaratory judgment. Google seeks — among other things — a declaration that it does not infringe the patents Rockstar is asserting against it.

Request to change venues

Google's motion is particularly interesting because of those "other things". Rockstar brought its lawsuits in the US District Court for the Eastern District of Texas (or E.D. Tex.). However, in its motion, Google asked to move the lawsuit to a different jurisdiction: the US District Court for the Northern District of California (or N.D. Cal.).

Google is taking advantage of federal procedural rules that allow parties to many types of legal disputes to bring a motion for declaratory judgment before any US court if there is an "actual controversy within its jurisdiction". This means Google has to convince the N.D. Cal judges reviewing its motion that Rockstar has been active in asserting its patents in the area N.D. Cal has jurisdiction over: namely, northern California. Google points to Rockstar's business activities as justification for a change of venue:

7. This Court has personal jurisdiction over Rockstar. Among other things, Rockstar has continuous and systematic business contacts with California. As Rockstar executives have explained to the media, once Rockstar identifies commercially successful products, it approaches the companies behind those products in person and through other means to seek licenses to Rockstar’s patents. Rockstar conducts this business extensively throughout California, including through personnel located in the San Francisco Bay Area. Rockstar’s CEO has publicly stated that Facebook (based in Menlo Park) and LinkedIn (based in Mountain View) infringe Rockstar’s patents. In fact, Rockstar’s CEO has stated that it would be difficult to imagine that any tech companies—legions of which call California home—do not infringe Rockstar’s patents. On information and belief, Rockstar’s licensing and enforcement efforts in California have generated substantial revenues.

Google also claims that Apple, a Rockstar shareholder, coordinates with Rockstar from Cupertino, which is in the district.

Google has moved for declaratory judgment in N.D. Cal likely because E.D. Tex. is notorious for being plaintiff-friendly in patent infringement cases. In 2006, 88% of plaintiffs bringing patent lawsuits in E.D. Tex. won. Of the 25 highest damage awards given from February 1, 2005 to December 31, 2013 for patent infringement, twelve were from E.D. Tex, including the highest award of over $1.6 billion.

Even if Google's request for a declaratory judgment that it did not infringe is dismissed by the N.D. Cal., its judges may rule that it is nonetheless appropriate to transfer the case to its jurisdiction. Should Google succeed in having the lawsuit transferred to the other jurisdiction, that alone would significantly increase its chances of success in this litigation and reduce its chances of having to pay a high damage award should it lose. Google's desire to transfer jurisdictions can be explained through this statistical calculation, a familiar approach to solving problems for the advertising and search engine giant.

Other complaints

The motion is also interesting for two other reasons. Google implies that Rockstar is acting as a proxy for its organizers and shareholders; most notably, Apple: "Rockstar's shareholders direct and participate in Rockstar's licensing and enforcement efforts against companies in California. [...]Apple [...] is a large shareholder in closely-held Rockstar, and maintains a seat on Rockstar's board of directors. Rockstar's CEO has publicly stated that Rockstar maintains regular contact with its shareholders." (Paragraph 8). Being a proxy is not a bad thing (legally) on its face, but this argument is a way for Google to claim that it's really fighting Apple, so the dispute should be settled on Apple's turf — in California.

Google also complains about the lawsuits against the Android OEMs, which it is not party to: "Rockstar intends the Android OEM Actions to harm Google's Android platform and disrupt Google's relationships with the Android OEM Defendants. This is an open secret [...]." (Paragraph 23) Google is, of course, affected by any OEMs who feel threatened by patent assertions on their use of Android. Most of the remainder of the motion is a wholesale denial of any infringement on Google's part of the asserted patents.

On December 31, Rockstar amended its complaint against Samsung to add Google as a defendant to the lawsuit in E.D. Tex. This is an interesting response to Google's motion; instead of addressing it directly, it fires the patents aimed at Samsung against Google as well, giving Google another set of patents it has to defend itself against. It's a procedural change to the lawsuit against Samsung solely meant to keep patent litigation against Google in the lucrative E.D. Tex. jurisdiction even if Google succeeds at getting the search patent infringement claims transferred to N.D. Cal.

While Google has not caved in on the jurisdictional issue with Rockstar's search patent infringement claims, it has decided to fight in E.D. Tex. while it awaits a decision from N.D. Cal. On January 10, Google formally responded in full to Rockstar's original complaint against it. In the 88-page response, "Google admits that venue is proper in the Eastern District of Texas for purposes of this particular action but not convenient or in the interests of justice" (Paragraph 10); that is, it is still hoping that N.D. Cal. will accept a transfer of the lawsuit.

After denying that it infringes, Google asserts a variety of defenses that are generally claimed in this situation: that the patents are invalid for non-patentability, lack of novelty, obviousness, and because of written deficiencies in the patent text; that there are limitations on how much in damages Google would have to pay out; prosecution history estoppel; estoppel; and unclean hands. Prosecution history estoppel means that Google is alleging that Rockstar's search patents were the result of a revision and narrowing of patent applications rejected by the Patent and Trademark Office; if this is true, it means that Rockstar can only assert a narrow, and therefore weaker, interpretation of the patents against Google. The plain "estoppel" claim implies that Rockstar had behaved in such a way as to cause Google to believe it would not sue for infringement of the relevant patents.

The most interesting defense, though, is the last one: a claim of inequitable conduct on the part of the original inventors, the original patent attorneys, and anyone else who failed to make disclosures to the United States Patent and Trademark Office (USPTO) that they were required to make. Google alleges that the inventors were aware of prior art that made the search patents ineligible for patentability, and deliberately decided not to disclose it to the USPTO so they would be awarded the patents.

Google also alleges that the inventors and prosecuting attorneys lied about the workings of other relevant prior art, namely, the Open Text search engine. Google claims that, had the USPTO been properly informed about Open Text, the patents would not have issued. Since this was inequitable conduct, the patents cannot be enforced. For example, with regards to the '969 Patent:

76. But for the Applicants’ false statements to the PTO regarding Open Text functionality, the claims of the ‘969 Patent would not have issued. The Examiner had previously rejected all claims in light of the Open Text functionality described in the Sullivan reference. Directly after the Applicants’ false description of the Open Text functionality, the Examiner granted all claims. Moreover, the Examiner explicitly noted that he granted the claims in light of the Applicants’ distinction between “selling advertisements by the keyword”—as described in the alleged invention—and “selling of keywords wherein search results ordered based upon a sold keyword”—as the applicants claimed was practiced by the Open Text search engine. (Notice of Allowability at 2.) Had the Examiner known that the Open Text search engine in actuality “sold advertisements by the keyword,” he would not have withdrawn his rejection and granted the claims.

77. As a result of the actions described above, all claims of the ‘969 Patent are unenforceable due to inequitable conduct committed during the prosecution of the ‘969 Patent.

Much more to come

These activities are the latest in what will very likely be long, drawn-out litigation in the plaintiff-friendly Eastern District of Texas. Indeed, the detail of Google's January 10 response indicates that Rockstar has taken on a huge fight. However, should Rockstar be successful at trial or at getting a substantial and/or confidential settlement, it might send shocks through the IT industry, leading lawyers for other companies to be more amenable to paying Rockstar for patent licenses. Rockstar may be making some progress as Huawei appears to have already settled with the company, as there is a joint motion to dismiss Rockstar's lawsuit against Huawei. That kind of outcome seems unlikely for Google, at least, so stay tuned for lots more battles and skirmishes in this legal war.

Comments (7 posted)

This week in "As the Technical Committee Turns"

By Jonathan Corbet
January 29, 2014
The Debian init system discussion was recently described in the LWN comments as "the worst, least-gender-balanced soap opera ever." Certainly the events of the last week could be seen as supporting that view; the discussion has generated a lot of noise, but little forward progress. For those who are not tuning in for every episode, here's a quick update.

On January 25, Bdale Garbee, the chair of the Debian Technical Committee, called for votes on a proposal to resolve the debate. In a surprising plot turn, the item to be voted on was not the extensive, detailed ballot that had been developed over several preceding episodes. Instead, as if the whole thing had been a dream, Bdale put forward a simple ballot picking a default init system, but only for the upcoming "jessie" release, and only for the Linux-based version of Debian at that. As the trailing credits began to roll, committee member Russ Allbery cast his vote, listing systemd as his first choice.

Bdale was seemingly hoping to make some progress in the discussion by answering something smaller than the original question. Had the vote gone as intended, that may well have been the result. There would have been plenty of details yet to be worked out by the committee, but there would have at least been some understanding of the general direction in which things were going.

Unfortunately, the next day's episode began with an impassioned outburst from Ian Jackson, who described himself as "really quite upset by this." Ian was unhappy that the ballot had been developed without any input from the rest of the committee. He pointed out that the ballot lacked language allowing the result to be overturned by a simple majority on a general resolution, but he clearly had other problems with it as well. Raising the emotional temperature, he voted for "further discussion," followed by sysvinit as the highest preference — and systemd as the lowest, below OpenRC. Bdale agreed that the general resolution language should have been present, but added that he wasn't sure how to end the vote at this time; this episode's cliffhanger ending had the entire committee wondering if a ballot that nobody wanted anymore would have to run to its conclusion anyway.

The tension was resolved the next morning, by which time several committee members had voted for "further discussion," and the Debian project secretary agreed that the outcome was not in doubt. So that particular vote concluded with nothing changed, and nothing resolved.

According to the trailers for upcoming episodes, another ballot can be expected in the near future, but there may well be a surprising plot turn or two before that comes to pass. Ian has a proposal of his own that mandates support for sysvinit and prohibits any package from depending on a specific init implementation. It would, in other words, mandate something very close to the status quo, setting up the viewership for a rather tiresome series of reruns.

An alternative might be a form of Bdale's ballot with the general resolution text tacked on. No such ballot has been proposed, though, and Bdale has been relatively quiet since the end of the first vote. The possibility of him popping out of the woodwork with a new ballot after the commercial break cannot be entirely discounted, but it appears that, for now, the next scene is a close-up shot of Ian at center stage as he works on a slightly different approach.

As Ian now sees it, there are two fundamental questions to be answered beyond that of which system should be the default:

  • Does Debian want to support multiple init systems indefinitely, or should the project eventually settle on one of them?

  • Is it permissible for packages to depend on a specific init system?

With regard to the first question, it seems reasonably clear that the committee is not interested in trying to set the direction of the Debian project for many years into the future. If some new init system comes along that is far better than anything we have now, the project should have the flexibility to switch to it — and, besides, there need to be some open questions to provide drama for the next season.

Speaking of open questions: nothing has been posted to suggest that the basic 4:4 split of the committee between systemd and upstart has changed. So a ballot on the default init system still runs a high risk of coming down to a tie, which may or may not be resolved by the chair's casting vote. Making it easier to overrule the committee's decision via general resolution may well have the result of making a general resolution more likely to happen. So there is a good chance that this particular soap opera has a number of convoluted episodes yet to run. Cue the sappy music, sit back, and enjoy.
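For readers not following every episode, Debian ballots use a Condorcet-style system: each voter ranks the options, and candidates are compared head-to-head. A simplified sketch of the idea (Debian's actual method, cloneproof Schwartz sequential dropping, also resolves cycles; this version just finds a plain pairwise winner and returns None on a tie, the outcome an evenly split committee risks):

```python
def condorcet_winner(ballots, options):
    """Return the option that beats every other head-to-head, or None.

    Each ballot is a list of options from most to least preferred;
    unranked options are treated as ranked last.
    """
    def prefers(ballot, a, b):
        # Lower index means higher preference.
        ia = ballot.index(a) if a in ballot else len(ballot)
        ib = ballot.index(b) if b in ballot else len(ballot)
        return ia < ib

    for a in options:
        # 'a' wins only if it beats every rival in pairwise comparison.
        if all(sum(prefers(bl, a, b) for bl in ballots) >
               sum(prefers(bl, b, a) for bl in ballots)
               for b in options if b != a):
            return a
    return None  # tie or preference cycle: no Condorcet winner
```

With a 4:4 split between two options, neither beats the other pairwise, so the sketch returns None; in the committee's procedure, that is where the chair's casting vote would come in.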

Comments (170 posted)

Page editor: Jonathan Corbet


Spoiled onions and Tor exit relays

By Nathan Willis
January 29, 2014

The Tor network offers users a valuable set of features designed to ensure anonymity, but those features rely on the availability of well-behaved Tor nodes. A research team in Sweden recently explored the Tor network looking for malicious nodes, and its newly-released findings indicate that while the bad actors are few, they are indeed out there.

The team is led by Philipp Winter and Stefan Lindskog at Karlstad University; their past research has looked at other potential threats to Tor, such as the "Great Firewall of China" and various deep-packet inspection (DPI) techniques. The new study took a look at Tor exit relays, the final nodes in Tor's network circuits which make the last-hop connections with Internet servers. These connections are not encrypted by Tor (although the payload connection itself could certainly be running over SSL/TLS), so they are particularly important to the integrity of the Tor network as a whole. A malicious exit relay could undermine its connections with a variety of man-in-the-middle (MITM) attacks: traffic sniffing, DNS poisoning, SSL stripping, or even HTTPS MITM interception.

Like a relay over troubled water

Winter and Lindskog monitored the Tor network for four months in 2013, analyzing the behavior of more than 1000 exit relays. They published their results [PDF] in January 2014. To detect a malicious relay, they used a lightweight Python tool that creates Tor circuits to a decoy destination, specifically choosing each exit relay used in order to test as many as possible. Selecting the exit relay is a non-standard feature of the test program; Tor clients can choose the circuits they use, but by default Tor selects circuits at random (which makes it harder for outside attackers to predict them) rather than asking the user for input.

Relays conducting HTTPS MITM attacks could be detected by simply comparing the fingerprint of the destination's X.509 certificate as reported by the relay with the known, correct certificate. The tool could also detect SSL stripping attacks, in which the malicious relay would rewrite HTTPS URLs as HTTP equivalents. Here again, detecting the interference of the exit relay is a simple matter of comparing the document returned through Tor with the known original.
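Both checks amount to comparing what arrives through the relay against locally known-good data. A hedged Python sketch of the approach described (not the researchers' actual scanner; the function names are invented for illustration):

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded X.509 certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def https_mitm_detected(cert_via_relay: bytes, known_good_cert: bytes) -> bool:
    """A fingerprint mismatch means the exit relay substituted its own
    certificate for the destination's real one."""
    return cert_fingerprint(cert_via_relay) != cert_fingerprint(known_good_cert)

def ssl_stripping_detected(page_via_relay: str, original_page: str) -> bool:
    """SSL stripping rewrites https:// links to http://; comparing the
    document fetched through Tor with the known original reveals it."""
    stripped = original_page.replace("https://", "http://")
    return page_via_relay != original_page and page_via_relay == stripped
```

In the real study the "known good" values would be fetched directly, outside Tor, so that only the exit relay's behavior is under test.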

The team also tested SSH connections, in which a malicious relay might attempt to replace the destination server's public key, as well as DNS requests. SSH MITM attacks are more difficult to carry out, they note, because replacing the destination key only fools the SSH client if the client has never connected to the server before—for a known server, SSH will report the key mismatch to the user. DNS poisoning, they note, is not common, but in the past there have been incidents reported where Tor exit relays inadvertently blocked certain sites by running filtering software.

Results and impact

Over the course of the four-month scan, the team encountered just 25 misbehaving exit relays. As a percentage of Tor's total network, that is quite small; the roughly 1000 relays scanned during the test accounted for every active exit relay the team saw. Still, some might consider 2.5% high, particularly in light of the fact that most Tor clients cannot select the exit relay used for their connection.

The bad relays are summarized in a table as well as in the paper. The majority of the attacks detected were HTTPS MITM (18), followed by SSH MITM (5); several relays were found to mount both attacks. There were also two SSL-stripping relays, two relays that redirected DNS requests, and one that used an HTML injection attack.

Looking at the fake HTTPS certificates returned by the bad relays revealed another interesting fact: the researchers found that all of the HTTPS MITM relays returned a certificate signed by the same root Certificate Authority (labeled "Main Authority," which is not a genuine CA). That may suggest that all of the malicious exit relays were run by the same attacker, although there are other possibilities (such as a malicious-relay-in-a-box program). These exit relays were located in several countries, in several different IP address blocks, but in addition to using the same root CA, they were all running the same version of Tor.

The decoy destination site requested during each test also appears to have made a difference. The team reports that some of the tested relays would only launch an attack when the decoy site used the word "bank" in its domain name.

Interpreting the significance of these results is a subjective exercise. The Tor network has mechanisms in place to block an exit relay if it is believed to be acting maliciously. The paper also notes that the HTTPS MITM attacks would be difficult to pull off against a real user, because the phony root CA used would trigger an invalid-certificate warning from the browser. Of course, that supposes that the user takes the invalid-certificate warning seriously, which is certainly not guaranteed, but it could be argued that a Tor user would be more cautious than average.

Going forward, the researchers recommend a few possible paths to protect users against attack. One is fetching server certificates in parallel over multiple Tor circuits, a technique that has also been recommended elsewhere for use with direct HTTPS connections (such as Convergence). They also speculate that initiating a certificate fetch (over a different circuit) whenever an invalid-certificate warning is encountered could be valuable.
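The multi-circuit defense amounts to a vote over fingerprints fetched through independent circuits. A minimal sketch of how such a vote might work (an assumption about the mechanism, not code from the paper):

```python
from collections import Counter

def majority_fingerprint(fingerprints):
    """Return the certificate fingerprint seen by most circuits, or None
    when there is no clear majority (treat that as possible tampering
    by one or more exit relays)."""
    if not fingerprints:
        return None
    ranked = Counter(fingerprints).most_common(2)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: refuse to trust any of the answers
    return ranked[0][0]
```

A single malicious exit relay handing back a forged certificate would then be outvoted by the honest circuits, at the cost of extra fetches.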

For its part, the Tor project has historically been good about anticipating new threats and adapting to them. The Tor Browser Bundle, for example, incorporates the HTTPS Everywhere extension specifically to guard against SSL-stripping attacks. Whether or not 25 bad exit relays constitutes a significant-enough threat to prompt changes to Tor is up to the project to decide. The existing countermeasures can certainly block out 25 bad exit relays, but perhaps there would be ways to detect these relays without requiring a four-month study. For Tor users, the good news is that the vast majority of Tor exit relays seem to be behaving properly—but the bad news is that there are no guarantees.

Comments (6 posted)

Brief items

Quotes of the week

RESOLVED, the Republican National Committee encourages Republican lawmakers to immediately take action to halt current unconstitutional surveillance programs and provide a full public accounting of the NSA’s data collection programs.
The US Republican National Committee

Results show that the representative consumer is willing to make a one-time payment for each app of $2.28 to conceal their browser history, $4.05 to conceal their list of contacts, $1.19 to conceal their location, $1.75 to conceal their phone’s identification number, and $3.58 to conceal the contents of their text messages. The consumer is also willing to pay $2.12 to eliminate advertising. Valuations for concealing contact lists and text messages for “more experienced” consumers are also larger than those for “less experienced” consumers.
Scott Savage and Donald M. Waldman, via Bruce Schneier

Comments (none posted)

New vulnerabilities

libreswan: denial of service

Package(s):libreswan CVE #(s):CVE-2013-6467
Created:January 29, 2014 Updated:January 29, 2014
Description: From the CVE entry:

Libreswan 3.7 and earlier allows remote attackers to cause a denial of service (NULL pointer dereference and IKE daemon restart) via IKEv2 packets that lack expected payloads.

Fedora FEDORA-2014-1092 libreswan 2014-01-29
Fedora FEDORA-2014-1121 libreswan 2014-01-29

Comments (none posted)

cxxtools: denial of service

Package(s):cxxtools CVE #(s):CVE-2013-7298
Created:January 29, 2014 Updated:February 17, 2014
Description: From the CVE entry:

query_params.cpp in cxxtools before 2.2.1 allows remote attackers to cause a denial of service (infinite recursion and crash) via an HTTP query that contains %% (double percent) characters.

Fedora FEDORA-2014-1207 cxxtools 2014-01-29
Mageia MGASA-2014-0073 cxxtools 2014-02-16

Comments (none posted)

clamav: multiple vulnerabilities

Package(s):clamav CVE #(s):
Created:January 28, 2014 Updated:January 29, 2014
Description: From the openSUSE advisory:

Code quality fixes in libclamav, clamd, sigtool, clamav-milter, clamconf, and clamdtop. Code quality fixes in libclamav, libclamunrar and freshclam. Valgrind suppression rules for dl_catch_error complaints. bb #8385: a PDF ASCII85Decode zero-length fix. libclamav: SCAN_ALL mode fixes. bb #7436: elf64 header early exit. iso9660: iso_scan_file rewrite.

openSUSE openSUSE-SU-2014:0144-1 clamav 2014-01-28

Comments (none posted)

tor: poor random number generation

Package(s):tor CVE #(s):CVE-2013-7295
Created:January 28, 2014 Updated:February 13, 2014
Description: From the bug report:

Tor fixes potentially poor random number generation for users who 1) use OpenSSL 1.0.0 or later, 2) set "HardwareAccel 1" in their torrc file, 3) have "Sandy Bridge" or "Ivy Bridge" Intel processors, and 4) have no state file in their DataDirectory (as would happen on first start). Users who generated relay or hidden service identity keys in such a situation should discard them and generate new ones.

Mandriva MDVSA-2014:123 tor 2014-06-11
openSUSE openSUSE-SU-2014:0143-1 tor 2014-01-28
Mageia MGASA-2014-0059 tor 2014-02-12

Comments (none posted)

python-jinja2: code execution

Package(s):python-jinja2 CVE #(s):CVE-2014-1402
Created:January 27, 2014 Updated:September 2, 2014
Description: From the Mageia advisory:

Jinja2, a template engine written in pure python, was found to use /tmp as a default directory for jinja2.bccache.FileSystemBytecodeCache, which is insecure because the /tmp directory is world-writable and the filenames used like 'FileSystemBytecodeCache' are often predictable. A malicious user could exploit this bug to execute arbitrary code as another user.

Gentoo 201408-13 jinja 2014-08-29
Ubuntu USN-2301-1 jinja2 2014-07-24
Fedora FEDORA-2014-7399 python-jinja2 2014-06-22
Fedora FEDORA-2014-7166 python-jinja2 2014-06-22
CentOS CESA-2014:0747 python-jinja2 2014-06-11
Red Hat RHSA-2014:0747-01 python-jinja2 2014-06-11
Mandriva MDVSA-2014:096 python-jinja2 2014-05-16
Scientific Linux SLSA-2014:0747-1 python-jinja2 2014-06-11
Oracle ELSA-2014-0747 python-jinja2 2014-06-11
Red Hat RHSA-2014:0748-01 python33-python-jinja2 2014-06-11
Mageia MGASA-2014-0028 python-jinja2 2014-01-24

Comments (none posted)

strongswan: denial of service

Package(s):strongswan CVE #(s):CVE-2013-6076
Created:January 27, 2014 Updated:January 29, 2014
Description: From the CVE entry:

strongSwan 5.0.2 through 5.1.0 allows remote attackers to cause a denial of service (NULL pointer dereference and charon daemon crash) via a crafted IKEv1 fragmentation packet.

Fedora FEDORA-2014-0567 strongswan 2014-01-25
Fedora FEDORA-2014-0516 strongswan 2014-01-25

Comments (none posted)

xen: denial of service

Package(s):xen CVE #(s):CVE-2014-1642 CVE-2014-1666
Created:January 27, 2014 Updated:February 3, 2014
Description: From the Xen advisories:

[XSA-82]: AMD CPU erratum 793 "Specific Combination of Writes to Write Combined Memory Types and Locked Instructions May Cause Core Hang" describes a situation under which a CPU core may hang.

A malicious guest administrator can mount a denial of service attack affecting the whole system. (CVE-2013-6885)

[XSA-87]: The PHYSDEVOP_{prepare,release}_msix operations are supposed to be available to privileged guests (domain 0 in non-disaggregated setups) only, but the necessary privilege check was missing.

Malicious or misbehaving unprivileged guests can cause the host or other guests to malfunction. This can result in host-wide denial of service. Privilege escalation, while seeming to be unlikely, cannot be excluded. (CVE-2014-1666)

Gentoo 201407-03 xen 2014-07-16
openSUSE openSUSE-SU-2014:0483-1 xen 2014-04-04
SUSE SUSE-SU-2014:0373-1 Xen 2014-03-14
SUSE SUSE-SU-2014:0372-1 Xen 2014-03-14
CentOS CESA-2014:X002 xen 2014-01-25
Fedora FEDORA-2014-1559 xen 2014-02-03
Fedora FEDORA-2014-1552 xen 2014-02-03

Comments (none posted)

openjdk-7: unspecified vulnerability

Package(s):openjdk-7 CVE #(s):CVE-2014-0408
Created:January 24, 2014 Updated:February 3, 2014
Gentoo 201401-30 oracle-jdk-bin 2014-01-26
Ubuntu USN-2089-1 openjdk-7 2014-01-23
openSUSE openSUSE-SU-2014:0180-1 java-1_7_0-openjdk 2014-02-03
openSUSE openSUSE-SU-2014:0177-1 java-1_7_0-openjdk 2014-01-31
openSUSE openSUSE-SU-2014:0174-1 java-1_7_0-openjdk 2014-01-31

Comments (1 posted)

libmicrohttpd: code execution

Package(s):libmicrohttpd CVE #(s):CVE-2013-7039
Created:January 24, 2014 Updated:January 31, 2014

Description: From the Red Hat bug tracker:

A stack overflow flaw was found in the MHD_digest_auth_check() function in libmicrohttpd. If MHD_OPTION_CONNECTION_MEMORY_LIMIT was configured to allow large allocations, a remote attacker could possibly use this flaw to cause an application using libmicrohttpd to crash or, potentially, execute arbitrary code with the privileges of the user running the application. This issue has been resolved in version 0.9.32.

Mageia MGASA-2014-0030 libmicrohttpd 2014-01-31
Fedora FEDORA-2014-0946 libmicrohttpd 2014-01-31
Fedora FEDORA-2014-0939 libmicrohttpd 2014-01-24
Gentoo 201402-01 libmicrohttpd 2014-02-02

Comments (none posted)

openstack-heat: two vulnerabilities

Package(s):openstack-heat CVE #(s):CVE-2013-6426 CVE-2013-6428
Created:January 23, 2014 Updated:January 29, 2014

Description: From the Red Hat advisory:

It was found that heat did not properly enforce cloudformation-compatible API policy rules. An in-instance attacker could use the CreateStack or UpdateStack methods to create or update a stack, resulting in a violation of the API policy. Note that only setups using Orchestration's cloudformation-compatible API were affected. (CVE-2013-6426)

A flaw was found in the way Orchestration's REST API implementation handled modified request paths. An authenticated remote user could use this flaw to bypass the tenant-scoping restriction by modifying the request path, resulting in privilege escalation. Note that only setups using Orchestration's cloudformation-compatible API were affected. (CVE-2013-6428)

Red Hat RHSA-2014:0090-01 openstack-heat 2014-01-22

Comments (none posted)

openstack-neutron: information disclosure

Package(s):openstack-neutron CVE #(s):CVE-2013-6419
Created:January 23, 2014 Updated:January 29, 2014

Description: From the Red Hat advisory:

It was discovered that the metadata agent in OpenStack Networking was missing an authorization check on the device ID that is bound to a specific port. A remote tenant could guess the instance ID bound to a port and retrieve metadata of another tenant, resulting in information disclosure. Note that only OpenStack Networking setups running neutron-metadata-agent were affected. (CVE-2013-6419)

Red Hat RHSA-2014:0231-01 openstack-nova 2014-03-04
Red Hat RHSA-2014:0091-01 openstack-neutron 2014-01-22

Comments (none posted)


Alkema: Misconceptions about forward-secrecy

Thijs Alkema has posted a blog entry addressing several common misconceptions about forward secrecy. Included in the discussion are a debunking of the notion that using more keys results in greater difficulty breaking the encryption ("To break a number of Diffie-Hellman negotiated keys all using the same Diffie-Hellman group, a number of different attacks are known. Many of these scale pretty well in the number of sessions.") and a look at the notion that forward secrecy makes it impossible to break future sessions. "The first two steps do not use the key at all, their result can be stored for later use to decrypt future keys. There is a trade-off here, though: the larger the factor base, the slower the first and second stages are, but the faster the third stage is. It’s unlikely that it is worth the effort to make the third stage as efficient as decrypting a session with a RSA private key is, but it’s not impossible."

Comments (none posted)

Page editor: Nathan Willis

Kernel development

Brief items

Kernel release status

The 3.14 merge window remains open, so there is no current development kernel release.

Stable updates: 3.12.9 and 3.10.28 were released on January 25, followed by 3.13.1 and 3.4.78 on January 29.

Comments (none posted)

Quotes of the week

If most of the oopses you decode are on your own machine with your own kernel, you might want to try to learn to be more careful when writing code. And I'm not even kidding.
Linus Torvalds

Because I've been using tmpfs as build target for a while, I've been experiencing this occasionally and secretly growing bitter disappointment towards the linux kernel which developed into self-loathing to the point where I found booting into win8 consoling after looking at my machine stuttering for 45mins while it was repartitioning the hard drive to make room for steamos. Oh the irony. I had to stay in fetal position for a while afterwards. It was a crisis.
Tejun Heo

Perhaps we could also generate the most common variants as:

 #define PERM__rw_r__r__		0644
 #define PERM__r________		0400
 #define PERM__r__r__r__		0444
 #define PERM__r_xr_xr_x		0555
Ingo Molnar replaces S_IRUGO and friends.

Comments (20 posted)

Gorman: LSF/MM 2014 so far

Mel Gorman, chair of the 2014 Linux Storage, Filesystem, and Memory Management Summit, notes that the CFP deadline is approaching and that the event is shaping up nicely. "I am pleased to note that there are a number of new people sending in attend and topic mails. The long-term health of the community depends on new people getting involved and breaking through any perceived barrier to entry. At least, it has been the case for some time that there is more work to do in the memory manager than there are people available to do it. It helps to know that there are new people on the way." Anybody wanting to attend who has not yet sent in a proposal should not delay much further.

Comments (none posted)

Kernel development news

3.14 Merge window part 2

By Jonathan Corbet
January 29, 2014
As of this writing, almost 8,600 non-merge changesets have been pulled into the mainline repository for the 3.14 development cycle — 5,300 since last week's merge window summary. As can be seen from the list below, quite a bit of new functionality has been added to the kernel in the last week. Some of the more significant, user-visible changes merged include:

  • The event triggers feature has been added to the tracing subsystem. See this commit for some information on how to use this feature.

  • The user-space probes (uprobes) subsystem has gained support for a number of "fetch methods" providing access to data on the stack, from process memory, and more. See the patchset posting for more information.

  • The Xen paravirtualization subsystem has gained support for a "paravirtualization in an HVM container" (PVH) mode which makes better use of hardware virtualization extensions to speed various operations (page table updates, for example).

  • The ARM architecture can be configured to protect kernel module text and read-only data from modification or execution. The help text for this feature notes that it may interfere with dynamic tracing.

  • The new SIOCGHWTSTAMP network ioctl() command allows an application to retrieve the current hardware timestamping configuration without changing it.

  • "TCP autocorking" is a new networking feature that will delay small data transmissions in the hope of coalescing them into larger packets. The result can be better CPU and network utilization. The new tcp_autocorking sysctl knob can be used to turn off this feature, which is on by default.

  • The Bluetooth Low Energy support now handles connection-oriented channels, increasing the number of protocols that can work over the LE mode. 6LoWPAN emulation support is also now available for Bluetooth LE devices.

  • The Berkeley Packet Filter subsystem has acquired a couple of new user-space tools: a debugger and a simple assembler. See the newly updated Documentation/networking/filter.txt for more information.

  • The new "heavy-hitter filter" queuing discipline tries to distinguish small network flows from the big ones, prioritizing the former. This commit has some details.

  • The "Proportional Integral controller Enhanced" (PIE) packet scheduler is aimed at eliminating bufferbloat problems. See this commit for more information.

  • The xtensa architecture code has gained support for multiprocessor systems.

  • The Ceph distributed filesystem now has support for access control lists.

  • New hardware support includes:

    • Processors and systems: Marvell Berlin systems-on-chip (SoCs), Energy Micro EFM32 SoCs, MOXA ART SoCs, Freescale i.MX50 processors, Hisilicon Hi36xx/Hi37xx processors, Snapdragon 800 MSM8974 SoCs, Systems based on the ARM "Trusted Foundations" secure monitor, Freescale TWR-P102x PowerPC boards, and Motorola/Emerson MVME5100 single board computers.

    • Clocks: Allwinner sun4i/sun7i realtime clocks (RTCs), Intersil ISL12057 RTCs, Silicon Labs 5351A/B/C programmable clock generators, Qualcomm MSM8660, MSM8960, and MSM8974 clock controllers, and Haoyu Microelectronics HYM8563 RTCs.

    • Miscellaneous: AMD cryptographic coprocessors, Freescale MXS DCP cryptographic coprocessors (replacement for an older, unmaintained driver), OpenCores VGA/LCD core 2.0 framebuffers, generic GPIO-connected beepers, Cisco Virtual Interface InfiniBand cards, Active-Semi act8865 voltage regulators, Maxim 14577 voltage regulators, Broadcom BCM63XX HS SPI controllers, and Atmel pulse width modulation controllers.

    • Multimedia Card (MMC): Arasan SDHCI controllers and Synopsys DesignWare interfaces on Hisilicon K3 SoCs.

    • Networking: Marvell 8897 WiFi and near-field communications (NFC) interfaces, Intel XL710 X710 Virtual Function Ethernet controllers, and Realtek RTL8153 Ethernet adapters.

    Note also that the AIC7xxx SCSI driver, deprecated since the 2.4 days, has finally been removed from the kernel.

Changes visible to kernel developers include:

  • The ARM architecture code can be configured to create a file (kernel_page_tables) in the debugfs filesystem where the layout of the kernel's page tables can be examined.

  • The checkpatch script will now complain about memory allocations using the __GFP_NOFAIL flag.

  • There is a new low-level library for computing hash values in situations where speed is more important than the quality of the hash; see this commit for details.

At this point, the 3.14 merge window appears to be winding down. If the usual two-week standard applies, the window should stay open through February 2, but Linus has made it clear in the past that the window can close earlier if he sees fit. Either way, next week's Kernel Page will include a summary of the final changes pulled for this development cycle.

Comments (5 posted)

Preparing for large-sector drives

By Jonathan Corbet
January 29, 2014
Back in the distant past (2010), kernel developers were working on supporting drives with 4KB physical sectors in Linux. That work is long since done, and 4KB-sector drives work seamlessly. Now, though, the demands on the hard drive industry are pushing manufacturers toward the use of sectors larger than 4KB. A recent discussion ahead of the upcoming (late March) Linux Storage, Filesystem and Memory Management Summit suggests that getting Linux to work on such devices may be a rather larger challenge requiring fundamental kernel changes — unless it isn't.

Ric Wheeler started the discussion by proposing that large-sector drives could be a topic of discussion at the Summit. The initial question — when such drives might actually become reality — did not get a definitive answer; drive manufacturers, it seems, are not ready to go public with their plans. Clarity increased when Ted Ts'o revealed a bit of information that he was able to share on the topic:

In the opinion of at least one drive vendor, the pressure for 64k sectors will start increasing (roughly paraphrasing that vendor's engineer, "it's a matter of physics"), and it might not be surprising that in 2 or 3 years, we might start seeing drives with 64k sectors.

Larger sectors would clearly bring some inconvenience to kernel developers, but, since they can help drive manufacturers offer more capacity at lower cost, they seem almost certain to show up at some point.

Do (almost) nothing

One possible response, espoused by James Bottomley, is to do very little in anticipation of these drives. He pointed out that much of the work done to support 4KB-sector drives was not strictly necessary; the drive manufacturers said that 512-byte transfers would not work on such drives, but the reality has turned out to be different. Not all operating systems were able to adapt to the 4KB size, so drives have read-modify-write (RMW) logic built into their firmware to handle smaller transfers properly. So Linux would have worked anyway, albeit with some performance impact.

James's point is that the same story is likely to play out with larger sector sizes; even if manufacturers swear that only full-sector transfers will be supported, those drives will still, in the end, have to work with popular operating systems. To do that, they will have to support smaller transfers with RMW. So it comes down to what's needed to perform adequately on those drives. Large transfers will naturally include a number of full-sector chunks, so they will mostly work already; the only partial-sector transfers would be the pieces at either end. Some minor tweaks to align those transfers to the hardware sector boundary would improve the situation, and a bit of higher-level logic could cause most transfers to be sized to match the underlying sector size. So, James said:

I'm asking what can we do with what we currently have? Increasing the transfer size is a way of mitigating the problem with no FS support whatever. Adding alignment to the FS layout algorithm is another. When you've done both of those, I think you're already at the 99% aligned case, which is "do we need to bother any more" territory for me.

But Martin Petersen, arguably the developer most on top of what manufacturers are actually doing with their drives, claimed that, while consumer-level drives all support small-sector emulation with RMW, enterprise-grade drives often do not. If the same holds true for larger-sector drives, the 99% solution may not be good enough and more will need to be done.

Larger blocks in the kernel

There are many ways in which large sector support could be implemented in the kernel. One possibility, mentioned by Chris Mason, would be to create a mapping layer in the device mapper that would hide the larger sector sizes from the rest of the kernel. This option just moves the RMW work into a low-level kernel layer, though, and does nothing to address the performance issues associated with that extra work.

Avoiding the RMW overhead requires that filesystems know about the larger sector size and use a block size that matches. Most filesystems are nearly ready to do that now; they are generally written with the idea that one filesystem's block size may differ from another. The challenges are, thus, not really at the filesystem level; where things get interesting is with the memory management (MM) subsystem.

The MM code deals with memory in units of pages. On most (but not all) architectures supported by Linux, a page is 4KB of memory. The MM code charged with managing the page cache (which occupies a substantial portion of a system's RAM) assumes that individual pages can easily be moved to and from the filesystems that provide their backing store. So a page fault may just bring in a single 4KB page, without regard for the fact that said page may be embedded within a larger sector on the storage device. If the 4KB page cannot be read independently, the filesystem code must read the whole sector, then copy the desired page into its destination in the page cache. Similarly, the MM code will write pages back to persistent store with no understanding of the other pages that may share the same hardware sector; that could force the filesystem code to reassemble sectors and create surprising results by writing out pages that were not, yet, meant to be written.

Avoiding these problems almost certainly means teaching the MM code to manage pages in larger chunks. There have been some attempts to do so over the years; consider, for example, Christoph Lameter's large block patch set that was covered here back in 2007. This patch enabled variable-sized chunks in the page cache, with anything larger than the native page size being stored in compound pages. And that is where this patch ran into trouble.

Compound pages are created by grouping together a suitable number of physically contiguous pages. These "higher-order" pages have always been risky for any kernel subsystem to rely on; the normal operation of the system tends to fragment memory over time, making such pages hard to find. Any code that allocates higher-order pages must be prepared for those allocations to fail; reducing the reliability of the page cache in this way was not seen as desirable. So this patch set never was seriously considered for merging.

Nick Piggin's fsblock work, also started in 2007, had a different goal: the elimination of the "buffer head" structure. It also enabled the use of larger blocks when passing requests to filesystems, but at a significant cost: all filesystems would have had to be modified to use an entirely different API. Fsblock also needed higher-order pages, and the patch set was, in general, large and intimidating. So it didn't get very far, even before Nick disappeared from the development community.

One might argue that these approaches should be revisited now. The introduction of transparent huge pages, memory compaction, and more, along with larger memory sizes in general, has made higher-order allocations much more reliable than they once were. But, as Mel Gorman explained, relying on higher-order allocations for critical parts of the kernel is still problematic. If the system is entirely out of memory, it can push some pages out to disk or, if really desperate, start killing processes; that work is guaranteed to make a number of single pages available. But there is nothing the kernel can do to guarantee that it can free up a higher-order page. Any kernel functionality that depends on obtaining such pages could be put out of service indefinitely by the wrong workload.

Avoiding higher-order allocations

Most Linux users, if asked, would not place "page cache plagued by out-of-memory errors" near the top of their list of desired kernel features, even if it comes with support for large-sector drives. So it would seem that any scheme based on being able to allocate physically contiguous chunks of memory larger than the base allocation size used by the MM code is not going to get very far. The alternatives, though, are not without their difficulties.

One possibility would be to move to the use of virtually contiguous pages in the page cache. These large pages would still be composed of a multitude of 4KB pages, but those pages could be spread out in memory; page-table entries would then be used to make them look contiguous to the rest of the kernel. This approach has special challenges on 32-bit systems, where there is little address space available for this kind of mapping, but 64-bit systems would not have that problem. All systems, though, would have the problem that these virtual pages are still groups of small pages behind the mapping. So there would still be a fair amount of overhead involved in setting up the page tables, creating scatter/gather lists for I/O operations, and more. The consensus seems to be that the approach could be workable, but that the extra costs would reduce any performance benefits considerably.

Another possibility is to increase the size of the base unit of memory allocation in the MM layer. In the early days, when a well-provisioned Linux system had 4MB of memory, the page size was 4KB. Now that memory sizes have grown by three orders of magnitude — or more — the page size is still 4KB. So Linux systems are managing far more pages than they used to, with a corresponding increase in overhead. Memory sizes continue to increase, so this overhead will increase too. And, as Ted pointed out in a different discussion late last year, persistent memory technologies on the horizon have the potential to expand memory sizes even more.

So there are good reasons to increase the base page size in Linux even in the absence of large-sector drives. As Mel put it, "It would get more than just the storage gains though. Some of the scalability problems that deal with massive amount of struct pages may magically go away if the base unit of allocation and management changes." There is only one tiny little problem with this solution: implementing it would be a huge and painful exercise. There have been attempts to implement "page clustering" in the kernel in the past, but none have gotten close to being ready to merge. Linus has also been somewhat hostile to the concept of increasing the base page size in the past, fearing the memory waste caused by internal fragmentation.

A number of unpleasant options

In the end, Mel described the available options in this way:

So far on the table is
  1. major filesystem overhaul
  2. major vm overhaul
  3. use compound pages as they are today and hope it does not go completely to hell, reboot when it does

With that set of alternatives to choose from, it is not surprising that none have, thus far, developed an enthusiastic following. It seems likely that all of this could lead to a most interesting discussion at the Summit in March. Even if large-sector drives could be supported without taking any of the above options, chances are that, sooner or later, the "major VM overhaul" option is going to require serious consideration. It may mostly be a matter of when somebody feels the pain badly enough to be willing to try to push through a solution.

Comments (26 posted)

Supporting Intel MPX in Linux

By Jonathan Corbet
January 29, 2014
Buffer overflows have long been a source of serious bugs and security problems at all levels of the software stack. Much work has been done over the years to eliminate unsafe library functions, add stack-integrity checking, and more, but buffer overflow bugs still happen with great regularity. A recently posted kernel patch is one of the final steps toward the availability of a new tool that should help to make buffer overflow problems less common: Intel's upcoming "MPX" hardware feature.

MPX is, at its core, a hardware-assisted mechanism for performing bounds checking on pointer accesses. The hardware, following direction from software, maintains a table of pointers in use and the range of accessible memory (the "bounds") associated with each. Whenever a pointer is dereferenced, special instructions can be used to ensure that the program is accessing memory within the range specified for that particular pointer. These instructions are meant to be fast, allowing bounds checking to be performed on production systems with a minimal performance impact.

As one might expect, quite a bit of supporting software work is needed to make this feature work, since the hardware cannot, on its own, have any idea of what the proper bounds for any given pointer would be. The first step in this direction is to add support to the GCC compiler. Support for MPX in GCC is well advanced, and should be considered for merging into the repository trunk sometime in the near future.

When a file is compiled with the new -fmpx flag, GCC will generate code to make use of the MPX feature. That involves tracking every pointer created by the program and the associated bounds; any time that a new pointer is created, it must be inserted into the bounds table for checking. Tracking of bounds must follow casts and pointer arithmetic; there is also a mechanism for "narrowing" a set of bounds when a pointer to an object within another object (a specific structure field, say) is created. The function-call interface is changed so that when a pointer is passed to a function, the appropriate bounds are passed with it. Pointers returned from functions also carry bounds information.

With that infrastructure in place, it becomes possible to protect a program against out-of-bounds memory accesses. To that end, whenever a pointer is dereferenced, the appropriate instructions are generated to perform a bounds check first. See Documentation/x86/intel_mpx.txt, included with the kernel patch set (described below), for details on how code generation changes. In brief: the new bndcl and bndcu instructions check a pointer reference against the lower and upper limits, respectively. If those instructions succeed, the pointer is known to be within the allowed range.

The next step is to prepare the C library for bounds checking. At a minimum, that means building the library with -fmpx, but there is more to it than that. Any library function that creates an object (malloc(), say) needs to return the proper bounds along with the pointer to the object itself. In the end, the C library will be the source for a large portion of the bounds information used within an application. The bulk of the work for the GNU C library (glibc) is evidently done and committed to the glibc git repository. Instrumentation of other libraries would also be desirable, of course, but the C library is the obvious starting point.

Then there is the matter of getting the necessary support code into the kernel; Qiaowei Ren has recently posted a patch set to do just that. Part of the patch set is necessarily management overhead: allowing applications to set up bounds tables, removing bounds tables when the memory they refer to is unmapped, and so on. But much of the work is oriented around the user-space interface to the MPX feature.

The first step is to add two new prctl() options: PR_MPX_INIT and PR_MPX_RELEASE. The first of those sets up MPX checking and turns on the feature, while the second cleans everything up. Applications can thus explicitly control pointer bounds checking, but that is not expected. Instead, the system runtime will probably turn on MPX as part of application startup, before the application itself begins to run. Current discussion on the linux-kernel list suggests that it may be possible to do the entire setup and teardown job within the user-space runtime code, making these prctl() calls unnecessary, so they may not actually find their way into the mainline kernel.

When a bounds violation is detected, the processor will trap into the kernel. The kernel, in turn, will turn the trap into a SIGSEGV signal to be delivered to the application, similar to other types of memory access errors. Applications that look at the siginfo structure passed to the signal handler from the kernel will be able to recognize a bounds error by checking the si_code field for the new SEGV_BNDERR value. The offending address will be stored in si_addr, while the bounds in effect at the time of the trap will be stored in si_lower and si_upper. But most programs, of course, will not handle SIGSEGV at all and will simply crash in this situation.

In summary, there is a fair amount of development work needed to make this hardware feature available to user applications. The good news is that, for the most part, this work appears to be done. Using MPX within the kernel itself should also be entirely possible, but no patches to that effect have been posted so far. Adding bounds checking to the kernel without breaking things is likely to present a number of interesting challenges; for example, narrowing would have to be reversed anytime the container_of() macro is used — and there are thousands of container_of() calls in the kernel. Finding ways to instrument the kernel would thus be tricky; doing this instrumentation in a way that does not make a mess out of the kernel source could be even harder. But there would be clear benefits should somebody manage to get the job done.

Meanwhile, though, anybody looking forward to MPX will have to wait for a couple of things: hardware that actually supports the feature and distributions built to use it. MPX is evidently a part of Intel's "Skylake" architecture, which is not expected to be commercially available before 2015 at the earliest. So there will be a bit of a wait before this feature is widely available. But, by the time it happens, Linux should be ready to take advantage of it.

Comments (6 posted)

Patches and updates

Kernel trees

  • Sebastian Andrzej Siewior: 3.12.8-rt11. (January 25, 2014)


Core kernel code

Development tools

Device drivers


Filesystems and block I/O


Page editor: Jonathan Corbet


Looking for zombies in Fedora

By Nathan Willis
January 29, 2014

Fedora is keen to make its Software Center a user-friendly application installer, so starting with Fedora 22 the distribution will require that each package listed include a human-friendly description in its metadata. That is certainly a worthwhile goal, but along the way, the process of gathering that metadata revealed another question that all distributors grapple with: what should the distribution do with the scores of packages whose upstream projects are either missing or demonstrably dead?

The human-friendly description campaign is led by Richard Hughes. Roughly speaking, "human friendly" means an accurate description of the program, written in complete sentences, that explains its purpose to someone not already familiar with the project. Hughes started the AppData schema as a package-format-neutral way for applications to provide these descriptions themselves, and over the past few months has been urging upstream projects to write their own AppData descriptions.

The idea is that a user-friendly "software installer" should let end users answer the question "Do I want to install this application?" based solely on the AppData description. The shorter, more functional descriptions already found in most RPM and Debian packages suffice for a lower-level package manager, where references to other packages and undefined acronyms (e.g., "ISC shared library used by BIND") do not actually impede usage.

But those differing use cases necessitate making some decisions about which programs belong in the user-friendly installer and which do not. Thus, Hughes spent a lot of time digging into Fedora's packages and assessing which programs belong where. He has also regularly reported back on the progress of AppData collecting, and on January 22, dropped an intriguing tidbit on the fedora-devel list:

I've now gone through the entire list of applications-in-fedora-without-appdata. A *lot* of those applications haven't seen an upstream release in half a decade, some over a decade. I would estimate that 40% of all the apps in Fedora are dead or semi-dead upstream.

40% is an alarming figure, and others on the list quickly reacted with concern. Jóhann B. Guðmundsson, for example, suggested removing packages with dead upstream projects, on the grounds that doing so would decrease the burden on Quality Assurance (QA) team volunteers. He did not garner much support for that drastic solution, however. For starters, as a number of people pointed out, the issue at hand was not whether to remove packages from Fedora entirely, but whether to include specific packages in the Software Center.

In fact, as the resulting sub-thread revealed, there was evidently a misunderstanding at the beginning: the 40% number is not the proportion of all Fedora packages whose upstream project is dormant, but rather the percentage of those packages that Hughes considered potential Software Center candidates—specifically, GUI applications with .desktop files that do not set the NoDisplay=true attribute and that provide at least one application Category attribute. As Hughes later put it, he was thinking only of "crappy GUI applications that users install and then the application crashes, they report a bug or feature request, wait, and nothing happens as the upstream is long dead and there are going to be no more releases."
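
The candidate criteria are simple enough to express in a few lines. The sketch below implements them as stated (NoDisplay must not be true, and at least one Category must be present); it is an illustration of the stated rules, not the actual tooling Hughes used, and the function name is invented.

```python
# A sketch of the Software Center candidate filter described above:
# a .desktop file qualifies only if it does not set NoDisplay=true
# and lists at least one Category.
import configparser

def is_software_center_candidate(desktop_text):
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string(desktop_text)
    entry = parser["Desktop Entry"]
    # NoDisplay=true means "do not show in menus or installers"
    if entry.get("NoDisplay", "false").strip().lower() == "true":
        return False
    # Require at least one non-empty Category entry
    categories = [c for c in entry.get("Categories", "").split(";") if c.strip()]
    return bool(categories)

shown = """[Desktop Entry]
Name=Example App
Categories=Graphics;Viewer;
"""
hidden = """[Desktop Entry]
Name=Helper Tool
NoDisplay=true
Categories=Utility;
"""
assert is_software_center_candidate(shown) is True
assert is_software_center_candidate(hidden) is False
```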

There are of course plenty of programs that have stopped receiving updates because there is little or nothing left to do. Przemek Klosowski cited the example of small, command-line utilities that do what they need to and have no real need for further development. Hughes's focus for the Fedora 22 Software Center is narrower, to be sure. The general consensus seems to be that users comfortable on the command line and looking for command-line utilities will prefer to use a command-line installer to install them.

That aside, though, the question remained whether there are Fedora packages that are abandoned upstream and genuinely do need to be culled—either because they no longer function correctly or for ancillary reasons like the increased potential of security vulnerabilities. Rahul Sundaram argued that there are valid scenarios in which a package with no upstream development should remain in Fedora. Fedora's package maintainers, he said, often fix bugs even when the upstream project in question has gone inactive.

They don't add any real overhead to Fedora and cutting them will just piss off users without any benefits. As long as package maintainers are willing to maintain them, there is no reason to mess with the process. If we want to have a way to show that upstream is inactive, that is pretty reasonable thing to do.

Yet everyone seems to agree that there are "zombie" packages in the distribution—and that in general they are not desirable. In addition to increasing the clutter in the distribution, zombie packages can become a security liability—either directly, or by prolonging the existence of old versions of libraries and other dependencies.

The difficulty is in finding them, given that a lot of good older packages "just work" and rarely elicit new comments or bug reports from users. Adam Williamson suggested that perhaps there needs to be "an approach which tried to identify software that was truly abandoned either up- or down-stream - not just 'software that no longer required changing' - and throw that out?" As he subsequently clarified, his proposal was that Fedora try to find a way to catch the easy cases, while ensuring that a person, not an automatic process, makes the decision as to whether a package should be pruned out.

Part of what makes the zombie hunt problematic is that while packages that fail to build (which would be obvious candidates for elimination) are easy to identify, packages that build but do not work right are difficult to find without human intervention. All Fedora packages are automatically rebuilt periodically (in a mass rebuild) to account for important changes like updates in the toolchain, so it is not possible to simply see how long it has been since a package was last built. A bad package could theoretically pass build tests and be automatically included in the archive for years even if it is unusable—if no one uses it, no one will discover that it does not work.

Zbigniew Jędrzejewski-Szmek then offered up a practical suggestion for measuring the abandonment factor of a package. He informally called the idea "bug years," which he defined as the time since the last non-mass-rebuild release multiplied by the number of currently open bugs. Most seemed to agree that the concept had merit—simple enough to understand, while not being subjective. Sundaram suggested compiling a list and bringing the results to the community on the mailing list, thus giving potential maintainers the chance to claim a package rather than automatically dooming it to deletion.

What happens next is still to be determined. Pierre-Yves Chibon ran some queries on Fedora's build history and found just 60 packages that have not been rebuilt for 200 or more days. That is certainly a manageable number for human volunteers to inspect further, but some refinement will likely be needed before anything resembling a formal process for locating zombie packages is created. In the meantime, users will have to remain on the lookout for themselves—but at least the upcoming Software Center in Fedora 22 should be a guaranteed zombie-free zone.

Comments (16 posted)

Brief items

Distribution quotes of the week

The choice of default init system will be decided by a best three out of five set of Dota 2. Better get practicing! :)
-- Russ Allbery

Of course, were the Secretary to choose a four-sided dice, OpenRC proponents might be unhappy. But that's life when entrusted to random.
-- Gunnar Wolf

Even a simple list of packages ordered by the time from last non-mass-rebuild release multiplied by the number of currently open bugs would be quite useful. Packages with bug-years above 50 or so would be good candidates for inspection.
-- Zbigniew Jędrzejewski-Szmek

Comments (none posted)

A call for votes in the Debian init system discussion

Debian Technical Committee chair Bdale Garbee has put out a call for votes on a ballot intended to move the discussion on init systems forward. Rather than vote on the ballot that had been under discussion, though, he is asking a simpler question that, he hopes, will yield a useful answer. "I propose we take the simplest possible 'next step'. Let's vote just on the question of what the default init system for Linux architectures should be in jessie. Once we have an answer to this question, it seems to me that we would be 'over the hump' and more likely to be able to re-focus our attention on all the secondary questions, like what our transition plan should be, whether we should try to dictate a default for non-Linux architectures, how and to what extent alternate init systems should be supported, and so forth. Most importantly, we could start *collaborating* again... which is something I fervently wish for!"

Full Story (comments: 169)

(The first) Debian init system vote concludes

The vote called for by Debian technical committee chair Bdale Garbee has reached its conclusion: the winning option is "further discussion required." The vote was torpedoed by the lack of language saying that the result could be overridden by a simple majority vote by the community on a general resolution. Committee members are working on a new vote now that will have such language, but which will still lack much of the detailed language found in early draft ballots. Stay tuned.

Full Story (comments: 151)

Distribution News

Debian GNU/Linux

Bits from the Release Team: Architecture health check

The Debian release team takes a look at the status of architectures in sid (unstable). As things stand, the amd64, i386, and powerpc ports will be released with Jessie.

Full Story (comments: none)


openSUSE 12.2 has reached end of SUSE support

SUSE-sponsored support for openSUSE 12.2 has officially ended. Time to upgrade to 12.3 or 13.1.

Full Story (comments: none)

Ubuntu family

Ubuntu 13.04 (Raring Ringtail) End of Life

Ubuntu 13.04 has reached its end of supported life. The upgrade path is via Ubuntu 13.10.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

McGovern: Valve games for Debian Developers

On the debian-devel-announce mailing list, Neil McGovern has announced that Valve is offering all Debian developers a free subscription to all past and future Valve-produced games on SteamOS. "At $dayjob for Collabora, we've been working with Valve on SteamOS, which is based on Debian. Valve are keen to contribute back to the community, and I'm discussing a couple of ways that they may be able to do that. Immediately though, they've offered a free subscription to any Debian Developer which provides access to all past and future Valve produced games!" He goes on to give details of how to get access to the subscription. The thread on debian-devel is worth checking out as well. (Thanks to Josh Triplett.)

Full Story (comments: 35)

Upstart SolydXK Distro Seeks First Business Customers

This interview with SolydXK developer Arjen Balfoort looks at the distribution's plans for business users. "SolydXK is a stable, and secure operating system, and is a viable alternative for small, and medium sized businesses, non-profit organizations, and home users. SolydXK is not hardware intensive, and uses little hardware resources, which makes it suitable for even the older systems in your organization. SolydXK needs to be installed just once throughout the system's live-cycle, and upgrades are thoroughly tested by our testing team, and made available in quarterly periods thus minimizing the risk of breakage to an absolute minimum. Support is given through our community's forum, and this year we'll professionally extend support, and services..."

Comments (none posted)

Page editor: Rebecca Sobol


Python, SSL/TLS certificates and default validation

By Jake Edge
January 29, 2014

Since the beginning of time—Python time anyway—there has been no checking of SSL/TLS certificates in Python's standard library; neither the urllib nor the urllib2 library performs this checking. As a result, when a Python client connects to a site using HTTPS, any certificate can be offered by the server and the connection will be established. That is probably not what most Python programmers expect, but the documentation does warn those who read it. There are alternatives, of course, but not in the standard library—until now. Python 3.4 makes things a lot better but still does no verification by default, which is a major concern to some Python developers.

To address that concern, Donald Stufft proposed that a backward-incompatible change be made to Python 3 so that SSL/TLS certificates are checked by default when HTTPS is used. While Python 3.4 has made it much easier to turn on certificate checking (by way of a default SSLContext object in the standard library), it does not do so by default. Making certificate checking on by default would break lots of applications that are—knowingly or unknowingly—relying on the existing behavior. For example, applications that connect to sites with self-signed certificates or those signed by certificate authorities (CAs) that are not in the system-wide root store (e.g. CAcert) work just fine—until certificate checking is turned on.

At first blush, it seems like an obvious change to make. Clearly anyone making a connection using HTTPS would want to ensure that the certificate is valid at the other end. But it is not quite that simple. There are many sites out there with certificates that were not signed by one of the "approved" CAs. For any of a number of reasons—cost being the most obvious—a web site may decide to sign its own certificate or to use ones signed by alternative CAs, possibly their own mini-CA that was set up to sign multiple company-specific certificates.

It is really up to the user to determine what to do when there are certificates that are not signed by the approved CAs; applications need to provide some way for them to choose (à la browser certificate warnings). So, flipping a switch in the standard library will just break applications when they connect to servers whose certificates don't validate for any reason—a man in the middle or just a certificate that is signed by a CA not in the root store—but users will have no way to fix the problem. It would require a code change that typical users are not able to make. It all adds up to something of a dilemma.

While most agreed with Stufft in the abstract—that certificate checking should default to on—there was strong sentiment that a change like that couldn't be made quickly. Marc-Andre Lemburg suggested using the usual deprecation mechanism. He also noted that some sites use CAcert certificates, which would be directly affected by the changes.

Nick Coghlan was even more specific, laying out a possible transition plan that would deprecate the feature in a Python 3.6 or 3.7 time frame (2017 or later). Changing things quickly is not an option, he said:

Securing the web is a "boil the ocean" type task - Python 3.4 takes us a step closer by making it possible for people to easily use the system certs via ssl.create_default_context(), but "move fast and break things" isn't going to work on this one any more than it does for proper Unicode support or the IPv4 to IPv6 transition. Security concerns are too abstract for most people for them to accept it as an excuse when you tell them you broke their software for their own good.

But Jesse Noller agreed with Stufft:

I have to concur with Donald here - in the case of security, especially language security which directly impacts the implicit security of downstream applications, I should not have to opt in to the most secure defaults.

Noller continued that the default behavior makes it "trivial to MITM [man in the middle] an application". But, overall, support for a quick change was hard to find in the thread. Most were concerned that applications will break and that Python will be blamed. Stephen J. Turnbull pointed out that it is more than just interactive applications that will be affected:

This is quite different from web browsers and other interactive applications. It has the potential to break "secure" mail and news and other automatic data transfers. Breaking people's software that should run silently in the background just because they upgrade Python shouldn't happen, and people here will blame Python, not their broken websites and network apps.

I don't know what the right answer is, but this needs careful discussion and amelioration, not just "you're broken, so take the consequences!"

The right answer will eventually have to come in the form of a Python enhancement proposal (PEP), though none has been started. There is plenty of time as Python 3.4 is in feature freeze (due to be released in March) and 3.5 will come in the latter half of 2015. Stufft made another suggestion that might be incorporated into a transition plan in the PEP: add an environment variable that allows users to revert to not checking certificates. That "would act as a global sort of --insecure flag for applications that don't provide one", he said. Another possibility that did not get mentioned would be to have an environment variable that turned on the checking, which would make for an easy way to look for broken code.

The lack of certificate validation in the Python standard library has been known for a long time. There are scary warnings about it in various places in the Python documentation. We looked at the problem (in many more places than just Python) in 2012. There is even the alternative Requests library that by default does certificate validation. For Python 2.x, Requests is one of the few ways to actually get certificate validation at all—there is nothing in the Python 2 standard library that does it.

Things are clearly getting better. With Python 3.4, it will be fairly straightforward for developers to use ssl.create_default_context() to turn on certificate checking, which is a big step in the right direction. But, regardless of how much sense it seems to make to do it by default, the amount of legacy code out there makes it too risky to do without a good deal of warning. The next few years will hopefully provide that warning and Python will eventually be default hardened against man-in-the-middle attacks on SSL/TLS.
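
For the curious, here is a minimal sketch of what that opt-in looks like, using only documented parts of the ssl and http.client modules; the host name is a placeholder and the network call is left commented out.

```python
# Opting in to certificate checking with Python 3.4's new helper.
# ssl.create_default_context() returns an SSLContext that verifies
# certificates against the trust store and checks host names.
import ssl
import http.client

ctx = ssl.create_default_context()
# These two defaults are exactly what the legacy code paths lack:
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# conn = http.client.HTTPSConnection("", context=ctx)
# conn.request("GET", "/")   # fails with an SSL error if the server's
#                            # certificate does not validate

# The legacy, no-verification behavior can still be requested
# explicitly (check_hostname must be disabled first):
legacy = ssl.create_default_context()
legacy.check_hostname = False
legacy.verify_mode = ssl.CERT_NONE
```

Note the ordering in the opt-out: setting verify_mode to CERT_NONE while check_hostname is still enabled raises a ValueError, so the hostname check has to be switched off first.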

Comments (20 posted)

Brief items

Quotes of the week

Git history holds more truth than copyright headers.
-- Matt Ray

I can’t go legal over things like this, nor do I want to. I do wonder what happens with fraudulent claims over other Public Domain material. Do different entities just randomly claim PD works and then duke it out with each other? If PD material can be claimed by big corporations, that will exclude small players from using it because they don’t have the resources to challenge said false claims. But don’t get me started.
-- Nina Paley, after her original, public-domain animation was removed from YouTube due to an incorrect copyright-infringement claim.

Almost as if they worked hard to make annoying users go away or something. (LLVM is IMO a blessing because, despite its somewhat broken licensing, it cured a similar attitude of the GCC folks. In a way competition is more important than licensing details!)
-- Ingo Molnar

Comments (none posted)

Snort released

A new version of the Snort intrusion-detection system has been released. It includes many new features, among them the ability to write Snort rules based on filetype identification, the ability to capture complete sessions for later analysis, and the ability to selectively capture and save network file transfers over HTTP, FTP, SMTP, POP, IMAP, and SMB.

Comments (none posted)

GnuTLS 3.2.9 available

Version 3.2.9 of GnuTLS has been released. This update is primarily a bugfix release, but it is also the first to declare the 3.2.x series as the current stable branch.

Comments (none posted)

Open Tax Solver 11.0 released

Just in the nick of time for those who pay taxes in the United States, version 11.0 of Open Tax Solver has been released. This release simply updates the code for the relevant changes in the 2013 tax year, but then again, the government does tend to frown on filling out the forms incorrectly.

Comments (3 posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Stallman on GCC, LLVM, and copyleft

During a discussion on the GCC mailing list about the comparative performance of GCC versus Clang, Richard Stallman weighed in to argue that LLVM's permissive license makes it a "terrible setback" for the free software community, because contributions to it benefit proprietary compilers as well as free ones. The original topic was Eric S. Raymond's suggestion that GCC should allow non-free plugins—an idea which, unsurprisingly, Stallman does not find appealing. "To make GCC available for such use would be throwing in the towel. If that enables GCC to 'win', the victory would be hollow, because it would not be a victory for what really matters: users' freedom."

Comments (261 posted)

Montgomery: It's not a strawman after it comes true

At his blog, Monty Montgomery writes about a potentially alarming change in the licensing of the AAC audio codec. "After Cisco's Open h.264 announcement, Via Licensing, which runs the AAC licensing pool, pulled the AAC royalty fee list off their website. Now the old royalty terms (visible here) have been replaced by a new, apparently simplified fee list that eliminates licensing sub-categories, adds a new, larger volume tier and removes all the royalty caps. Did royalty liability for AAC software implementations just become unlimited?" An un-capped license fee for AAC could do serious damage to the viability of Cisco's free-as-in-beer H.264 plugin, but Montgomery cautions against leaping to conclusions too quickly.

Comments (1 posted)

Page editor: Nathan Willis


Articles of interest

How to get your conference talk submission accepted

Ruth Suehle covers a talk from the recent LCA conference. The talk looked at the process of choosing the talks for LCA, and what constitutes a good talk topic and abstract. There is lots of good information on putting together a talk proposal that is applicable well beyond just LCA. "When you write your submission, begin by looking at last year's program. See the depth and types of topics covered and think about why those were the ones that were submitted. If you can, take a look at blog posts from the previous year to see what people found the most interesting and popular." Don't forget the CFP deadlines calendar to help keep track of upcoming CFPs.

Comments (4 posted)

Calls for Presentations

Linux Plumbers Conference call for refereed-track presentations

The 2014 Linux Plumbers conference will be held October 15 to 17 in Düsseldorf, Germany; the call for presentations in the refereed track has just gone out. "Refereed track presentations are similar to traditional presentations, but preferably involve significant face-to-face discussion and debate. These presentations should focus on some specific issue in the "plumbing" in the Linux system, where example Linux-plumbing components include core kernel subsystems, core libraries, windowing systems, management tools, device support, media creation/playback, and so on."

Full Story (comments: none)

CFP Deadlines: January 30, 2014 to March 31, 2014

The following listing of CFP deadlines is taken from the CFP Calendar.

Deadline | Event dates | Event | Location
January 30 | July 20–24 | OSCON 2014 | Portland, OR, USA
January 31 | March 29 | Hong Kong Open Source Conference 2014 | Hong Kong, Hong Kong
January 31 | March 24–25 | Linux Storage Filesystem & MM Summit | Napa Valley, CA, USA
January 31 | March 15–16 | Women MiniDebConf Barcelona 2014 | Barcelona, Spain
January 31 | May 15–16 | ScilabTEC 2014 | Paris, France
February 1 | April 29–May 1 | Android Builders Summit | San Jose, CA, USA
February 1 | April 7–9 | ApacheCon 2014 | Denver, CO, USA
February 1 | March 26–28 | Collaboration Summit | Napa Valley, CA, USA
February 3 | May 1–4 | Linux Audio Conference 2014 | Karlsruhe, Germany
February 5 | March 20 | Nordic PostgreSQL Day 2014 | Stockholm, Sweden
February 8 | February 14–16 | Linux Vacation / Eastern Europe Winter 2014 | Minsk, Belarus
February 9 | July 21–27 | EuroPython 2014 | Berlin, Germany
February 14 | May 12–16 | OpenStack Summit | Atlanta, GA, USA
February 27 | August 20–22 | USENIX Security '14 | San Diego, CA, USA
March 10 | June 9–10 | Erlang User Conference 2014 | Stockholm, Sweden
March 14 | May 20–22 | LinuxCon Japan | Tokyo, Japan
March 14 | July 1–2 | Automotive Linux Summit | Tokyo, Japan
March 14 | May 23–25 | FUDCon APAC 2014 | Beijing, China
March 16 | May 20–21 | PyCon Sweden | Stockholm, Sweden
March 17 | June 13–15 | State of the Map EU 2014 | Karlsruhe, Germany
March 21 | April 26–27 | LinuxFest Northwest 2014 | Bellingham, WA, USA

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

SCALE 12X: Expo schedule now available

The schedule for this year's Southern California Linux Expo is available. SCALE will take place February 21-23 in Los Angeles, CA.

Full Story (comments: none)

Speakers and venue announced for FSF's LibrePlanet 2014

LibrePlanet will take place March 22-23 in Cambridge, MA. Registration is open and speakers have been announced. The theme for this year's conference is "free software, free society".

Full Story (comments: none)

Events: January 30, 2014 to March 31, 2014

The following event listing is taken from the Calendar.

Dates | Event | Location
January 31 | CentOS Dojo | Brussels, Belgium
February 1–2 | FOSDEM 2014 | Brussels, Belgium
February 3–4 | Config Management Camp | Gent, Belgium
February 4–5 | Open Daylight Summit | Santa Clara, CA, USA
February 7–9 | Django Weekend Cardiff | Cardiff, Wales, UK
February 7–9 | | Brno, Czech Republic
February 14–16 | Linux Vacation / Eastern Europe Winter 2014 | Minsk, Belarus
February 21–23 | 2014 | Gandhinagar, India
February 21–23 | Southern California Linux Expo | Los Angeles, CA, USA
February 25 | Open Source Software and Government | McLean, VA, USA
February 28–March 2 | FOSSASIA 2014 | Phnom Penh, Cambodia
March 3–7 | Linaro Connect Asia | Macao, China
March 6–7 | Erlang SF Factory Bay Area 2014 | San Francisco, CA, USA
March 15–16 | Chemnitz Linux Days 2014 | Chemnitz, Germany
March 15–16 | Women MiniDebConf Barcelona 2014 | Barcelona, Spain
March 18–20 | FLOSS UK 'DEVOPS' | Brighton, England, UK
March 20 | Nordic PostgreSQL Day 2014 | Stockholm, Sweden
March 21 | Bacula Users & Partners Conference | Berlin, Germany
March 22 | Linux Info Tag | Augsburg, Germany
March 22–23 | LibrePlanet 2014 | Cambridge, MA, USA
March 24 | Free Software Foundation's seminar on GPL Enforcement and Legal Ethics | Boston, MA, USA
March 24–25 | Linux Storage Filesystem & MM Summit | Napa Valley, CA, USA
March 26–28 | Collaboration Summit | Napa Valley, CA, USA
March 26–28 | 16. Deutscher Perl-Workshop 2014 | Hannover, Germany
March 29 | Hong Kong Open Source Conference 2014 | Hong Kong, Hong Kong

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds