LWN.net Weekly Edition for June 26, 2014
Software patents take a beating at the US Supreme Court
As LWN reported back in early April, the Supreme Court of the US (SCOTUS) has been looking at the patent-eligibility of software, through the lens of the case Alice Corp. v CLS Bank International. On June 19, the court released a unanimous ruling [PDF] throwing out Alice's patents. Its rationale for doing so will have a profound effect on patent litigation in the software industry.
To briefly review the dispute, Alice held patents on a method, system, and process for a particular type of financial risk hedging: namely, hedging against the risk that one party to a set of financial transactions won't pay at one or more stages in the set. This risk is known as "settlement risk". Alice's patents describe using a computer to keep track of the transactions between the parties. If the computer determines that a party does not have sufficient funds to pay their obligations to the other side, then the transaction is blocked. The relevant patents are #5,970,479, #6,912,510, #7,149,720, and #7,725,375. Litigation started in 2007, eventually winding its way up to the Supreme Court.
Ruling
Writing for a unanimous court, Justice Thomas begins with a brief description of what the patents claimed. There are effectively three different types of claims made: "(1) the foregoing method for exchanging obligations (the method claims), (2) a computer system configured to carry out the method for exchanging obligations (the system claims), and (3) a computer-readable medium containing program code for performing the method of exchanging obligations (the media claims)" (page 3 of the ruling). Thomas goes on to describe the history of the litigation in this case, which was covered in our April article.
Thomas begins the rationale for the court's ruling by quoting §101 of the Patent Act: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title." He notes that the Supreme Court has held for a long time—most recently in its ruling last year in Myriad—that the text of §101 means that there are limitations on what can be patented, namely: "Laws of nature, natural phenomena, and abstract ideas" (Myriad, quoted on page 5 of the Alice ruling). However, since every invention to some extent incorporates these patent-ineligible components, the fact that an invention relies in part on an abstract idea does not automatically render the whole invention patent-ineligible.
The ruling then goes on to cite the court's recent ruling in Mayo, which established a test to determine which inventions that incorporate abstract ideas are patent-eligible: "First, we determine whether the claims at issue are directed to one of those patent-ineligible concepts" (page 7). If it is so directed, then the court looks at "the elements of each claim both individually and 'as an ordered combination' to determine whether the additional elements 'transform the nature of the claim' into a patent-eligible application" (page 7). This is what Thomas refers to as "a search for an 'inventive concept'" (page 7).
The court then applies this "Mayo test" to Alice's patents. Beginning with the first step of the test, Thomas notes that these patents do incorporate, to a strong extent, an abstract concept, as "these claims are drawn to the abstract idea of intermediated settlement" (page 7). Thomas cites previous Supreme Court case law, emphasizing its ruling in Bilski (throwing out a patent on financial risk hedging), as illustrating examples of other patent-ineligible "inventions" that simply boil down to abstract ideas. Turning directly to Alice's patents, Thomas notes that intermediated settlement is an old financial idea: "Like the risk hedging in Bilski, the concept of intermediated settlement is 'a fundamental economic practice long prevalent in our system of commerce'" (page 9).
Thomas then applies the second step: looking at what else is in the patent claims, individually and in combination, to see if there's anything more to Alice's "invention". The method claims, which describe how the financial obligations will be exchanged, don't pass the court's test because they "merely require generic computer implementation" (page 10). In what might be the most important part of the ruling for determining where the Supreme Court draws the line for "computer-implemented inventions", Thomas is careful to stress that, while some inventions involving computers may be patent-eligible, the mere fact that a computer is used as part of an "invention" is not on its own sufficient to turn an abstract idea into something eligible for a patent (pages 13-14).
The remainder of the patents, which are the system and media claims, didn't hold up either. With regard to the system claims, Alice had claimed that there was a particular computer hardware configuration it used to implement the "invention". When the court looked at that configuration, it didn't find anything more than the components of a generic general-purpose computer. Since Alice had stated in an earlier brief that if its method claims fail, so do its media claims, the court threw out the media claims as well. Having demonstrated the court's rationale for invalidating Alice's patents, Thomas concludes the ruling by upholding the Court of Appeals for the Federal Circuit's decision to throw out the patents.
Justice Sotomayor also filed a one-paragraph concurring opinion, which was joined by Justice Ginsburg and Justice Breyer. In that opinion, Sotomayor notes that the three of them would go further and throw out business methods as unpatentable entirely.
Reactions
The reaction by individuals and organizations with a professional and/or political interest in US patent law has been strong and polarized. Gene Quinn, a pro-software-patent lawyer and blogger on ipwatchdog.com, was rather unhappy with the decision, depicting it as "intellectually bankrupt". He noted that this ruling will invalidate many types of software patents: "On first read I don’t see how any software patent claims written as method or systems claims can survive challenge."
That's a strong assertion, but it doesn't really hold up. For instance, if such a software patent claim were really so novel that it couldn't be boiled down to implementing an abstract concept on a generic general-purpose computer, it might indeed stand up to litigation.
In a press release, the Free Software Foundation (FSF), unsurprisingly, expressed delight at the ruling, but was not completely satisfied. The FSF noted in the release that it continues to seek a clear and total ban on software patents through legislative change.
In an opinion article for SCOTUSblog, former Director of the US Patent and Trademark Office David Kappos lauds the decision, as he finds it tacitly endorses the patentability of software: "This week’s decision reaffirms that from the point of view of the patent system, software languages are no different from other vernaculars – they are a medium of expression that can be used to capture and convey ideas." Referencing Diehr, an earlier Supreme Court case upholding a patent on a machine for curing rubber that included a computer, Kappos emphasizes how the court's Mayo test draws a line based on inventiveness: "the difference [between patentable and unpatentable software] is the presence or absence of a definitive invention versus abstraction. Diehr’s new and useful process for curing rubber was held to be innately patentable — the fact that it happened to be manifest in a software language was tributary."
I found all three of those reactions to hold merit. The court's clear statement that a mere incorporation of a general-purpose computer is insufficient for an "invention" to be patentable will, as Quinn put it, "render many hundreds of thousands of software patents completely useless". However, the FSF and Kappos both pick up on the subtleties of the decision: the ruling does not outright abolish software patents, and even explicitly recognizes that certain computer-implemented "inventions" remain patentable.
While those working in the free and open source software world, which has been threatened and restricted by software patents, should celebrate, the ruling in Alice is not an outright victory for software patent abolitionists. There will likely be a flurry of cases challenging software patents, which will better define the boundary the court has set in this case. However, only legislative change, such as by adding language to the patent statute similar to that proposed by Richard Stallman ("developing, distributing, or running a program on generally used computing hardware does not constitute patent infringement"), will lead to abolition.
Ascend seeks to include underaddressed populations in FOSS
Mozilla has rolled out a mentorship and education program called the Ascend Project that is designed to reach out to populations that are typically under-represented in open source development. But Ascend differs from existing outreach efforts like Google Summer of Code (GSoC) and GNOME's Outreach Program for Women (OPW) in several respects—including who it targets and how the mentorship and instruction are delivered. Ascend has already opened the invitation for its first round of participants, who will attend a six-week, full-time training course in Portland, Oregon in September.
Lukas Blakk publicly announced the launch of the initiative in March. Noticing how many "developer boot camps" were popping up, Blakk said, gave her the idea to:
Blakk had attended an open-source training program while at Seneca College, but one of the core concepts behind Ascend is to make similar training accessible to people who are not in college or who, for other reasons, cannot afford to retrain for a new job. Consequently, the program was envisioned from the outset to provide a daily honorarium to attendees to lessen the impact of missing work, as well as a laptop to keep upon completing the course, and amenities like complimentary meals, transit passes, and childcare services.
Mozilla's management approved the plan in December 2013, and the team set out to plan the pilot program. The initial course will be held at Mozilla's offices in Portland; a second round is tentatively planned for early 2015 in New Orleans. The pilot round will be limited to 20 participants; according to a May 30 post on the project site, applications will be accepted through June 30. Subsequently, the project team will work through several steps to narrow down the field to the final 20.
The course will be full-time instruction, five days a week, for six weeks, with the goal being to eventually have students "getting to committed code in production". Blakk noted in the original announcement that there will be a certain level of technical competence expected of attendees at the beginning, such as the ability to complete a free online course in JavaScript development, and that applicants will be asked to supply essays and other application materials that establish their interest and problem-solving ability.
So far the details of the curriculum have not been posted, either on the project site or in its GitHub repository. But the About page indicates that the goal is to address general-purpose FOSS project practices, such as "IRC, bug trackers, code review, version control, creating & committing patches" rather than (for example) sticking to a Mozilla-oriented curriculum that covered web development. The project is currently looking for volunteers in the Portland area who can serve as in-class helpers or as "drop-in" volunteers to help participants on a less formal basis.
The outreach landscape
One might ask how Ascend fits into the larger picture of developer-training programs (including GSoC and OPW). In a June 20 blog post, Blakk expanded further on how Ascend is intended to differ from these other outreach efforts—saying that many of them target school-age children or teenagers, but that "the REAL problem to solve is how to get adult women (and other underrepresented people) re-trained, supported and encouraged to take on roles in technology NOW."
Indeed, the biggest training programs in the FOSS arena these days do tend to aim for students who have yet to enter the workforce. GSoC is the largest, and it is an outreach opportunity that focuses exclusively on college students, while Google's related Code-in program targets high-school students. OPW is open to post-college-age women, although it, too, is structured around a more-or-less semester-length internship of three months, during which the participant is expected to work full time. Ascend's six-week course may still require attendees to rearrange their work schedule—and six weeks is certainly a lengthy leave of absence from most jobs—but it is ostensibly easier to manage than twelve weeks.
Consequently, Ascend is already appealing to a different segment of potential new FOSS contributors by focusing on finding participants who cannot pursue college computer science studies. The fact that it encourages applications from people in several distinct groups of under-represented communities (ethnic minorities and the LGBTQ community) is also atypical. The software industry as a whole, after all, is often noted for how it skews toward the Caucasian male demographic.
That said, Ascend cannot necessarily expect to be overwhelmed by students unless it finds ways to advertise and promote its courses outside of the avenues dominated by those who are already in the software industry (and FOSS in particular). Such a chicken-and-egg problem confronts FOSS in many outreach and evangelism efforts, of course, and easy solutions are scarce. Mozilla certainly has a global reach and wide "brand awareness," both of which should help matters.
Reaching out to underemployed individuals is a factor that Ascend does have in common with both OPW and GSoC, both of which pay stipends to their participants. In contrast, the "boot camp" model Blakk referred to in the initial announcement is composed largely of schools that charge attendees tuition and fees, rather than paying them. While that may make for a good re-training option, it essentially limits enrollment to those individuals who already have sufficient means to retrain themselves.
Ascend also differs in that it appears to be designing a rather broad curriculum, focusing on general-purpose development practices. The "boot camp" model tends to focus on a particular technology stack, while the GSoC/OPW model connects participants with individual, existing software projects. Ultimately, of course, most members of the FOSS community know that it can take quite some time and interaction with quite a few people for a newcomer to join the open-source development community in a full-time capacity. A course like Ascend's is the first, but not the only, step. With the potential to reach interested participants who are not within the mission of the other outreach efforts, though, Ascend has the opportunity to help many new people take that first step.
[Thanks to Paul Wise.]
Term limits for Debian technical committee members
The Debian technical committee has a role similar to that of the US Supreme Court: it makes the final pronouncement on disputes that cannot be resolved otherwise. It also resembles the Supreme Court in another way: an appointment to the committee is essentially "for life"; in the absence of a resignation, there is no natural end to membership on the committee. Recently, there has been discussion aimed at changing that situation, but the form of a better arrangement is not yet clear.

As described in the Debian constitution, the technical committee can have up to eight members. The committee chooses its own members, subject to approval by the Debian project leader. The size of the committee has varied over time, but is usually close to the full eight members.
Anthony Towns started the discussion in May, noting that some of the members of the committee have been there for quite some time. Ian Jackson appointed himself to the committee in 1998 as the brand-new Debian constitution was being adopted. Bdale Garbee came on board in early 2001. Andreas Barth and Steve Langasek have both been members since 2006; Russ Allbery and Don Armstrong were added in 2009. Colin Watson is a relative newcomer, having joined the committee in 2011, and Keith Packard, member since late 2013, is still rather wet behind the ears. Anthony raised no complaints about the performance of any of the committee members, but did note:
There was almost no opposition to the idea of establishing some sort of term limit for technical committee members, even among the committee itself. Russ Allbery suggested that he has been considering voluntarily limiting his own term, regardless of what the project might decide to do. But the discussion on how these limits might be applied was rather less conclusive. It is not that there is disagreement over how it should be done; instead, there seems to be widespread uncertainty about the best approach to the problem.
Concerns
The reasons why term limits might make sense were perhaps best expressed by Russ:
Russ also pointed out that the "for life" nature of technical committee appointments causes the selection process for committee members to be highly cautious and conservative. Limited terms would lower the level of perceived risk in appointing a developer to the committee, possibly increasing the set of viable candidates.
Those reasons, however, do not necessarily answer the question of what the policy should be. Should, for example, technical committee members be limited to a fixed number of years on the committee? There is an immediate practical problem there: if the limit were to be set to, say, four years, six of the current members would immediately be forced out. The project seems to agree that this would be an unfortunate result; while there is value in bringing new perspectives to the committee, there is also value in maintaining a certain amount of continuity and experience there.
Various ways of fixing the problem were proposed; many of them involved assigning artificial appointment dates for the current members to avoid having them all expire at once. An alternative would be to put a cap on the number of members whose terms could expire within a given year. So, even if six members were over the adopted limit, only the two most senior members would have their terms expire immediately.
There is also the question of when a member whose term has expired can be reappointed to the committee. The technical committee is currently self-selecting; it appoints its own members, subject to approval from the project leader. One could imagine a longstanding committee that is happy to immediately reappoint members when their terms expire, defeating the purpose of the whole exercise. So there probably needs to be a mandatory down time during which previous members cannot return to the committee.
One other question has to do with how this change, if it is made, is to be enacted. The rules for the technical committee are written into the project's constitution, so a constitutional change seems like the obvious way to apply term limits. That, of course, would require a project-wide vote via the general resolution mechanism. Ian, however, suggested that a better approach might be for the committee to adopt its own term-limit rules.
The primary motivation for applying limits at that level has to do with items that are currently under consideration by the committee: it would have been awkward, for example, if a member's term had expired in the middle of the recent init system debate. If the committee enforced its own term limits, it could delay an expiration for long enough to bring a contentious issue to a close. Doing things at that level would also make it easier to experiment with different approaches, Ian said, and would allow an outgoing member to vote on their own replacement.
A concrete proposal
After thinking on the issue for a while, Anthony posted a pair of proposed resolutions, either of which could be adopted to apply term limits to the technical committee. The first of those reads like this:
Additionally, members could not be reappointed to the committee if they had been a member for more than four of the last five years. Anthony expressed some discomfort with this option, though, noting that Keith would likely become one of the senior members and be pushed out of the committee after just three years. So he also put out this alternative:
In this variant, members could be reappointed after a one-year absence from the committee. This version ensures that all members are able to serve for a full six years and also limits the (forced) turnover to two members per year. It should, thus, satisfy the goal of bringing in new members while preserving a certain amount of continuity.
Anthony's proposal has not been put forward as a general resolution at this point; indeed, responses to it have been scarce in general. Perhaps the project as a whole is slowing down for the (northern hemisphere) summer. Given the apparent overall support for the idea, though, chances are that something will happen once Debian developers return to their keyboards.
Security
End-To-End webmail encryption
In early June, a team at Google released some code that brings support for OpenPGP encryption to webmail clients running in the Chrome/Chromium browser. Called simply End-To-End, the initial release was made in source form only—for the purposes of security review.
End-To-End was announced in a June 3 blog post. It is designed to be compiled as a browser extension for Chrome or Chromium, and it provides webmail clients running in those browsers with OpenPGP-compatible message encryption and decryption—including support for signed messages. The blog post puts End-To-End into the context of Google's ongoing security enhancements for Gmail, but claims End-To-End itself is not a Gmail-specific program. Nevertheless, at the moment there is not an official list of which other webmail services are supported; in fact, the only other one referenced is a bug report indicating some problems using End-To-End with the open-source Zimbra client.
![[The welcome screen in End-To-End]](https://static.lwn.net/images/2014/06-ete-welcome-sm.png)
When up and running, End-To-End allows users to compose an outgoing message, then encrypt and sign it locally with an OpenPGP key. End-To-End can import an existing OpenPGP keyring, which it then stores in its own localStorage area so that the keys are accessible from within Chrome/Chromium's sandbox. It can also generate a new key and keyring, though it supports only Elliptic Curve (EC) key generation. Existing keys—both private and public—of non-EC types are still supported for all other operations. End-To-End's keyring is separate from any other OpenPGP keyring the user might already possess, so the public keys of other users must be imported into End-To-End's keyring before they can be used. Users can either import the entire external keyring during setup, or import each key individually (which is an option presented when opening an encrypted message).
The extension differs a bit from other OpenPGP implementations in that it stores the user's private and public keys on a single keyring, and only supports encrypting the entire keyring with a passphrase, rather than individual keys. The FAQ on the main page says that this was a decision made to minimize the number of passphrases users are required to remember.
Usage
At this point, End-To-End is provided as source code only, which users must build and manually install as a browser extension. For those who do not use Chrome/Chromium regularly, note that it is also necessary to enable "Developer mode" in the browser so that one can install a local, un-packaged extension.
When in use, End-To-End provides a button in the browser's extension toolbar that provides access to a pop-up message composition window, a pop-up decryption window, and utility functions (which include key import). If you are logged in to a webmail client and composing a message, the composition window copies the current contents of the in-progress message into its own mini-composer; if the recipient's public key is in the keyring, End-To-End will use it when you click the "Protect this message" button (alternatively, you can enter the recipient's address within the mini-composer or encrypt the message with a passphrase). PGP signatures are also supported; one can even click the "Protect this message" button with no recipients included to add a PGP signature without encrypting the message's contents.
![[Message composition in End-To-End]](https://static.lwn.net/images/2014/06-ete-compose-sm.png)
However you choose to protect the message, clicking on the "Insert into the page" button copies the PGP-protected text back into the webmail composition window. Similarly, if you receive an encrypted message, the End-To-End menu button can copy its contents into its decryption window and unscramble it.
Users who are accustomed to PGP-style encryption will not find the process difficult. The first big question to answer when assessing the project's usefulness is whether or not End-To-End makes email encryption easier for anyone new to the subject. This is not easy to say; some might find the separate End-To-End window that hovers over the main webmail message composer a bit perplexing. Others might notice that if the browser tab loses focus, the End-To-End window and its contents vanish immediately.
Of course, there are security reasons for these behaviors: End-To-End does not work directly in the page contents so that it is isolated from other elements in the page (which, intentionally or not, could interfere and risk a security breach) and there are good reasons not to leave the contents of the window available if the user is away doing something else.
Security
The other big question concerning End-To-End is whether or not it is genuinely safe to use. In the blog announcement and on the project's home page, the team emphasizes that this is a project still in development and that the release is meant to attract more scrutiny of End-To-End's security. The page even asks developers not to download the code, compile it, and submit the result to the official Chrome extension "store," on the grounds that it still requires further vetting.
Historically, PGP encryption for webmail has been a thorny issue. One of the main reasons was that an encryption library (in JavaScript) delivered in a web page is regarded as suspect: like other page content, it could be modified by an attacker at the server (or even en route); the content and the JavaScript execution environment can be modified by other JavaScript on the page; and there are potential information leaks (such as reliance on the JavaScript interpreter's garbage collection rather than any real "secure delete" facility).
But End-To-End does not rely on encryption routines or secrets sent with the page content itself. In that sense, it is as secure as composing a message in a text editor, encrypting it on the command line, then pasting it into the webmail client. There are still risks, of course, but the bigger concerns for a built-in PGP function or extension are concepts like key management and sandboxing—along with implementation details of the core routines, which still should be audited.
The project FAQ supplies a few basic answers to common questions. For example, as mentioned above, End-To-End uses a single keyring to store the user's private key and all collected public keys. The encryption keys are also stored in memory unencrypted, which the FAQ notes could mean that key material is sent to Google in a crash report if the browser's "send crash reports" feature is enabled. That is certainly a problem one would hope to see fixed before End-To-End becomes more widespread or a built-in feature.
As always, one is dependent on the browser's implementation of features like sandboxing and secure localStorage to be free of serious errors. Perhaps to that end, the blog post notes that End-To-End, although still experimental, is eligible for Google's vulnerability bounty program.
On the other hand, End-To-End does implement its own cryptographic functions and OpenPGP routines, rather than using an existing library like OpenPGP.js. Of course, OpenPGP.js may not be a widely-scrutinized project in the grand scheme of things; if Google chooses to invest further in End-To-End, it could attract more eyes. But OpenPGP.js is already in use by other projects with similar aims, such as Mailvelope—which also has the advantage of being usable in Firefox as well as Chrome. If Google persists in maintaining End-To-End as a Chrome/Chromium-only tool, there will be competing implementations in webmail encryption, with the possibility of incompatibilities. As Enigmail has seen, even adhering to the relevant RFCs does not protect one from all possible compatibility problems.
Perhaps there are valid reasons for maintaining a new in-browser OpenPGP tool; End-To-End makes some implementation choices that other OpenPGP projects might not agree with. For example, it does not support fetching public keys from a keyserver (perhaps because doing so would complicate the sandboxing process). Similarly, End-To-End opts for a single passphrase for the entire keyring for the sake of simplicity, but not every user will find that trade-off worthwhile.
The landscape of webmail-encryption tools is sparse as it is; the other major approach is WebPG, which is built around the Netscape Plugin API (NPAPI) that, these days, is used less and less even by Mozilla. But WebPG does call out to the system's GnuPG library, which is ostensibly a more widely-tested PGP implementation than either End-To-End or OpenPGP.js. But even if the security community does thoroughly vet and enhance End-To-End's cryptographic features, as Google hopes it will, the project will still face the challenge of winning over a non-trivial percentage of webmail users. And that may be an unsolvable problem, regardless of the implementation details.
Brief items
Security quotes of the week
Cell phones differ in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person. The term “cell phone” is itself misleading shorthand; many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers.
So Americans, thankfully, are rational. Let’s hope that legislators and prosecutors start listening to their voters.
New vulnerabilities
castor: XML injection
Package(s): castor
CVE #(s): CVE-2014-3004
Created: June 20, 2014
Updated: December 31, 2014
Description: From the CVE entry: The default configuration for the Xerces SAX Parser in Castor before 1.3.3 allows context-dependent attackers to conduct XML External Entity (XXE) attacks via a crafted XML document.
ctdb: insecure temporary files
Package(s): ctdb
CVE #(s): CVE-2013-4159
Created: June 25, 2014
Updated: March 30, 2015
Description: From the openSUSE advisory: ctdb was updated to version 2.3 to fix several temp file vulnerabilities.
dbus: denial of service
Package(s): dbus
CVE #(s): CVE-2014-3477
Created: June 19, 2014
Updated: December 22, 2014
Description: From the Mageia advisory: A denial of service vulnerability in D-Bus before 1.6.20 allows a local attacker to cause a bus-activated service that is not currently running to attempt to start, and fail, denying other users access to this service. Additionally, in highly unusual environments the same flaw could lead to a side channel between processes that should not be able to communicate (CVE-2014-3477).
firefox: code execution
Package(s): MozillaFirefox
CVE #(s): CVE-2014-1539 CVE-2014-1543
Created: June 20, 2014
Updated: June 25, 2014
Description: From the CVE entry: Multiple heap-based buffer overflows in the navigator.getGamepads function in the Gamepad API in Mozilla Firefox before 30.0 allow remote attackers to execute arbitrary code by using non-contiguous axes with a (1) physical or (2) virtual Gamepad device. (CVE-2014-1543)
foreman-proxy: shell command injection
Package(s): foreman-proxy
CVE #(s): CVE-2014-0007
Created: June 19, 2014
Updated: June 25, 2014
Description: From the Red Hat advisory: A shell command injection flaw was found in the way foreman-proxy verified URLs in the TFTP module. A remote attacker could use this flaw to execute arbitrary shell commands on the system with the privileges of the user running foreman-proxy. (CVE-2014-0007)
gnupg: denial of service
Package(s): gnupg
CVE #(s): CVE-2014-4617
Created: June 25, 2014
Updated: April 23, 2015
Description: From the Slackware advisory: This release includes a security fix to stop a denial of service using garbled compressed data packets which can be used to put gpg into an infinite loop.
heat: information leak
Package(s): heat
CVE #(s): CVE-2014-3801
Created: June 19, 2014
Updated: October 23, 2014
Description: From the Ubuntu advisory: Jason Dunsmore discovered that OpenStack heat did not properly restrict access to template information. A remote authenticated attacker could exploit this to see URL provider templates of other tenants for a limited time.
iodine: authentication bypass
Package(s): iodine
CVE #(s): CVE-2014-4168
Created: June 23, 2014
Updated: August 18, 2014
Description: From the Debian advisory: Oscar Reparaz discovered an authentication bypass vulnerability in iodine, a tool for tunneling IPv4 data through a DNS server. A remote attacker could provoke a server to accept the rest of the setup or also network traffic by exploiting this flaw.
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2014-4014
Created: June 19, 2014
Updated: June 25, 2014
Description: From discoverer Andy Lutomirski's description: The bug is that, if you created a user namespace and retained capabilities in that namespace, then you could use chmod to set the setgid bit on any file you owned, including files with, say, group 0. The impact depends on what files are available that have gids that shouldn't be available to the users who own the file. For example, the existence of a uid != 0, gid == 0 file would allow that uid to escalate privileges to gid 0, which is likely good enough for full root.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2014-0203
Created: June 20, 2014
Updated: June 25, 2014
Description: From the Red Hat advisory: It was discovered that the proc_ns_follow_link() function did not properly return the LAST_BIND value in the last pathname component as is expected for procfs symbolic links, which could lead to excessive freeing of memory and consequent slab corruption. A local, unprivileged user could use this flaw to crash the system.
kernel: information disclosure
Package(s): kernel
CVE #(s): CVE-2014-0206
Created: June 25, 2014
Updated: July 25, 2014
Description: From the Red Hat advisory: It was found that the aio_read_events_ring() function of the Linux kernel's Asynchronous I/O (AIO) subsystem did not properly sanitize the AIO ring head received from user space. A local, unprivileged user could use this flaw to disclose random parts of the (physical) memory belonging to the kernel and/or other processes.
libreoffice: unexpected VBA macro execution
Package(s): libreoffice
CVE #(s): CVE-2014-0247
Created: June 23, 2014
Updated: July 3, 2014
Description: From the Ubuntu advisory: It was discovered that LibreOffice unconditionally executed certain VBA macros, contrary to user expectations.
musl: code execution
Package(s): musl
CVE #(s): CVE-2014-3484
Created: June 19, 2014
Updated: June 25, 2014
Description: From the Mageia advisory: A remote stack-based buffer overflow has been found in musl libc's dns response parsing code. The overflow can be triggered in programs linked against musl libc and making dns queries via one of the standard interfaces (getaddrinfo, getnameinfo, gethostbyname, gethostbyaddr, etc.) if one of the configured nameservers in resolv.conf is controlled by an attacker, or if an attacker can inject forged udp packets with control over their contents. Denial of service is also possible via a related failure in loop detection (CVE-2014-3484).
pdns: denial of service
Package(s): pdns
CVE #(s):
Created: June 23, 2014
Updated: June 25, 2014
Description: From the Mageia advisory: PowerDNS recursor is vulnerable to a denial of service due to a bug that causes it to exhaust the maximum number of file descriptors that are available to a process.
rb_libtorrent: stop UPNP from opening port 0
Package(s): rb_libtorrent
CVE #(s):
Created: June 23, 2014
Updated: September 5, 2014
Description: From the Fedora advisory: stop UPNP from opening port 0
rubygem-openshift-origin-node: code execution
Package(s): rubygem-openshift-origin-node
CVE #(s): CVE-2014-3496
Created: June 19, 2014
Updated: June 25, 2014
Description: From the Red Hat advisory: A command injection flaw was found in rubygem-openshift-origin-node. A remote, authenticated user permitted to install cartridges via the web interface could use this flaw to execute arbitrary code with root privileges on the Red Hat OpenShift Enterprise node server. (CVE-2014-3496)
samba: multiple vulnerabilities
Package(s): samba
CVE #(s): CVE-2014-0178 CVE-2014-0244 CVE-2014-3493
Created: June 23, 2014
Updated: July 31, 2014
Description: From the Debian advisory:
CVE-2014-0178: Information leak vulnerability in the VFS code, allowing an authenticated user to retrieve eight bytes of uninitialized memory when shadow copy is enabled.
CVE-2014-0244: Denial of service (infinite CPU loop) in the nmbd Netbios name service daemon. A malformed packet can cause the nmbd server to enter an infinite loop, preventing it to process later requests to the Netbios name service.
CVE-2014-3493: Denial of service (daemon crash) in the smbd file server daemon. An authenticated user attempting to read a Unicode path using a non-Unicode request can force the daemon to overwrite memory at an invalid address.
samba: denial of service
Package(s): samba
CVE #(s): CVE-2014-0239
Created: June 25, 2014
Updated: June 25, 2014
Description: From the CVE entry: The internal DNS server in Samba 4.x before 4.0.18 does not check the QR field in the header section of an incoming DNS message before sending a response, which allows remote attackers to cause a denial of service (CPU and bandwidth consumption) via a forged response packet that triggers a communication loop, a related issue to CVE-1999-0103.
tomcat: multiple vulnerabilities
Package(s): tomcat
CVE #(s): CVE-2014-0075 CVE-2014-0096 CVE-2014-0099 CVE-2014-0119
Created: June 25, 2014
Updated: February 23, 2015
Description: From the Mageia advisory:
Integer overflow in the parseChunkHeader function in java/org/apache/coyote/http11/filters/ChunkedInputFilter.java in Apache Tomcat before 6.0.40 and 7.x before 7.0.53 allows remote attackers to cause a denial of service (resource consumption) via a malformed chunk size in chunked transfer coding of a request during the streaming of data (CVE-2014-0075).
java/org/apache/catalina/servlets/DefaultServlet.java in the default servlet in Apache Tomcat before 6.0.40 and 7.x before 7.0.53 does not properly restrict XSLT stylesheets, which allows remote attackers to bypass security-manager restrictions and read arbitrary files via a crafted web application that provides an XML external entity declaration in conjunction with an entity reference, related to an XML External Entity (XXE) issue (CVE-2014-0096).
Integer overflow in java/org/apache/tomcat/util/buf/Ascii.java in Apache Tomcat before 6.0.40 and 7.x before 7.0.53, when operated behind a reverse proxy, allows remote attackers to conduct HTTP request smuggling attacks via a crafted Content-Length HTTP header (CVE-2014-0099).
Apache Tomcat before 6.0.40 and 7.x before 7.0.54 does not properly constrain the class loader that accesses the XML parser used with an XSLT stylesheet, which allows remote attackers to read arbitrary files via a crafted web application that provides an XML external entity declaration in conjunction with an entity reference, related to an XML External Entity (XXE) issue, or read files associated with different web applications on a single Tomcat instance via a crafted web application (CVE-2014-0119).
wireshark: denial of service
Package(s): wireshark
CVE #(s): CVE-2014-4020
Created: June 19, 2014
Updated: June 25, 2014
Description: From the Mageia advisory: The frame metadissector could crash (CVE-2014-4020).
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.16-rc2, released on June 21. Linus said: "It's a day early, but tomorrow ends up being inconvenient for me due to being on the road most of the day, so here you are. These days most people send me their pull requests and patches during the week, so it's not like I expect that a Sunday release would have made much of a difference. And it's also not like I didn't have enough changes for making a rc2 release."
Stable updates: none have been released in the last week. The 3.15.2, 3.14.9, 3.10.45, and 3.4.95 updates are in the review process as of this writing; they can be expected on or after June 26.
Quotes of the week
Kernel development news
RCU, cond_resched(), and performance regressions
Performance regressions are a constant problem for kernel developers. A seemingly innocent change might cause a significant performance degradation, but only for users and workloads that the original developer has no access to. Sometimes these regressions can lurk for years until the affected users update their kernels and notice that things are running more slowly. The good news is that the development community is responding with more testing aimed at detecting performance regressions. This testing found a classic example of this kind of bug in 3.16; the bug merits a look as an example of how hard it can be to keep things working optimally for a wide range of users.
The birth of a regression
The kernel's read-copy-update (RCU) mechanism enables a great deal of kernel scalability by facilitating lock-free changes to data structures and batching of cleanup operations. A fundamental aspect of RCU's operation is the detection of "quiescent states" on each processor; a quiescent state is one in which no kernel code can hold a reference to any RCU-protected data structure. Initially, quiescent states were defined as times when the processor was running in user space, but things have gotten rather more complex since then. (See LWN's lengthy list of RCU articles for lots of details on how this all works).
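For readers who have not used the API, a minimal sketch of the read and update sides may help make "quiescent state" concrete; the struct item type and the functions below are invented for illustration:

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Invented example structure; the rcu_head lets freeing be deferred. */
struct item {
	int value;
	struct rcu_head rcu;
};

static struct item __rcu *current_item;

/* Reader: lockless, but no quiescent state can occur on this CPU
 * between rcu_read_lock() and rcu_read_unlock(). */
static int read_value(void)
{
	struct item *p;
	int ret = -1;

	rcu_read_lock();
	p = rcu_dereference(current_item);
	if (p)
		ret = p->value;
	rcu_read_unlock();
	return ret;
}

/* Updater: publish a new item, then free the old one only after a
 * grace period, i.e. after every CPU has passed a quiescent state. */
static void update_value(struct item *new)
{
	struct item *old;

	old = rcu_dereference_protected(current_item, 1); /* caller serializes updates */
	rcu_assign_pointer(current_item, new);
	if (old)
		kfree_rcu(old, rcu);
}
```

If a CPU never reaches a quiescent state, that deferred kfree_rcu() callback—along with every other pending callback—simply sits and waits.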
The kernel's full tickless mode, which is only now becoming ready for serious use, can make the detection of quiescent states more difficult. A CPU running in the tickless mode will, due to the constraints of that mode, be running a single process. If that process stays within the kernel for a long time, no quiescent states will be observed. That, in turn, prevents RCU from declaring the end of a "grace period" and running the (possibly lengthy) set of accumulated RCU callbacks. Delayed grace periods can result in excessive latencies elsewhere in the kernel or, if things go really badly, out-of-memory problems.
One might argue (as some developers did) that code that loops in the kernel in this way already has serious problems. But such situations do come about. Eric Dumazet mentioned one: a process calling exit() when it has thousands of sockets open. Each of those open sockets will result in structures being freed via RCU; that can lead to a long list of work to be done while that same process is still closing sockets and, thus, preventing RCU processing by looping in the kernel.
RCU developer Paul McKenney put together a solution to this problem based on a simple insight: the kernel already has a mechanism for allowing other things to happen while some sort of lengthy operation is in progress. Code that is known to be prone to long loops will, on occasion, call cond_resched() to give the scheduler a chance to run a higher-priority process. In the tickless situation, there will be no higher-priority process, though, so, in current kernels, cond_resched() does nothing of any use in the tickless mode.
But kernel code can only call cond_resched() in places where it can handle being scheduled out of the CPU. So it cannot be running in an atomic context and, thus, cannot hold references to any RCU-protected data structures. In other words, a call to cond_resched() marks a quiescent state; all that is needed is to tell RCU about it.
As it happens, cond_resched() is called in a lot of performance-sensitive places, so it is not possible to add a lot of overhead there. So Paul did not call into RCU to signal a quiescent state with every cond_resched() call; instead, that function was modified to increment a per-CPU counter and, using that counter, only call into RCU once for every 256 (by default) cond_resched() calls. That appeared to fix the problem with minimal overhead, so the patch was merged during the 3.16 merge window.
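Reduced to its essentials, the shape of that change looks something like the sketch below; the names and the helper that reports the quiescent state to RCU are approximations, not the actual patch:

```c
#include <linux/percpu.h>

/* Simplified sketch of the 3.16 approach, not the actual patch. */
#define RCU_COND_RESCHED_LIM	256	/* default reporting threshold */

static DEFINE_PER_CPU(int, cond_resched_count);

/* Hook called from cond_resched(); it must stay very cheap. */
static inline void rcu_cond_resched(void)
{
	if (unlikely(this_cpu_inc_return(cond_resched_count) >=
		     RCU_COND_RESCHED_LIM)) {
		this_cpu_write(cond_resched_count, 0);
		rcu_note_quiescent_state();	/* hypothetical helper that
						 * reports the state to RCU */
	}
}
```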
Soon thereafter, Dave Hansen reported that one of his benchmarks (a program which opens and closes a lot of files while doing little else) had slowed down, and that, with bisection, he had identified the cond_resched() change as the culprit. Interestingly, the problem is not with cond_resched() itself, which remained fast as intended. Instead, the change caused RCU grace periods to happen more often than before; that caused RCU callbacks to be processed in smaller batches and led to increased contention in the slab memory allocator. By changing the threshold for quiescent states from every 256 cond_resched() calls to a much larger number, Dave was able to get back to a 3.15 level of performance.
Fixing the problem
One might argue that the proper fix is simply to raise that threshold for all users. But doing so doesn't just restore performance; it also restores the problem that the cond_resched() change was intended to fix. The challenge, then, is finding a way to fix one workload's problem without penalizing other workloads.
There is an additional challenge in that some developers would like to make cond_resched() into a complete no-op on fully preemptable kernels. After all, if the kernel is preemptable, there should be no need to poll for conditions that would require calling into the scheduler; preemption will simply take care of that when the need arises. So fixes that depend on cond_resched() continuing to do something may fail on preemptable kernels in the future.
Paul's first fix took the form of a series of patches making changes in a few places. There was still a check in cond_resched(), but that check took a different form. The RCU core was modified to take note when a specific processor holds up the conclusion of a grace period for an excessive period of time; when that condition was detected, a per-CPU flag would be set. Then, cond_resched() need only check that flag and, if it is set, note the passing of a quiescent period. That change reduced the frequency of grace periods, restoring much of the lost performance.
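Again in rough, invented-name form, the flag-based variant moves the decision into the RCU core and leaves only a single per-CPU test in the fast path:

```c
#include <linux/percpu.h>

/* Sketch of the flag-based fix; names are invented for illustration. */
static DEFINE_PER_CPU(bool, rcu_urgent_qs_needed);

/* RCU core: called when a CPU has been holding up a grace period. */
static void rcu_request_urgent_qs(int cpu)
{
	per_cpu(rcu_urgent_qs_needed, cpu) = true;
}

/* cond_resched() fast path: one per-CPU flag test in the common case. */
static inline void rcu_check_urgent_qs(void)
{
	if (unlikely(this_cpu_read(rcu_urgent_qs_needed))) {
		this_cpu_write(rcu_urgent_qs_needed, false);
		rcu_note_quiescent_state();	/* hypothetical, as above */
	}
}
```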
In addition, Paul introduced a new function called cond_resched_rcu_qs(), otherwise known as "the slow version of cond_resched()". By default, it does the same thing as ordinary cond_resched(), but the intent is that it would continue to perform the RCU grace period check even if cond_resched() is changed to skip that check — or to do nothing at all. The patch changed cond_resched() calls to cond_resched_rcu_qs() in a handful of strategic places where problems have been observed in the past.
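A caller in one of those strategic places would then look roughly like this hypothetical loop; the object type and the release_object() helper are invented:

```c
#include <linux/list.h>
#include <linux/sched.h>

struct object {
	struct list_head list;
	/* ... payload ... */
};

void release_object(struct object *obj);	/* invented helper */

/* Hypothetical long-running cleanup loop. */
static void free_many_objects(struct list_head *head)
{
	struct object *obj, *tmp;

	list_for_each_entry_safe(obj, tmp, head, list) {
		release_object(obj);
		cond_resched_rcu_qs();	/* still reports a quiescent state
					 * even if cond_resched() becomes
					 * a no-op on preemptable kernels */
	}
}
```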
This solution worked, but it left some developers unhappy. For those who are trying to get the most performance out of their CPUs, any overhead in a function like cond_resched() is too much. So Paul came up with a different approach that requires no checks in cond_resched() at all. Instead, when the RCU core notices that a CPU has held up the grace period for too long, it sends an inter-processor interrupt (IPI) to that processor. That IPI will be delivered when the target processor is not running in atomic context; it is, thus, another good time to note a quiescent state.
This solution might be surprising at first glance: IPIs are expensive and, thus, are not normally seen as the way to improve scalability. But this approach has two advantages: it removes the monitoring overhead from the performance-sensitive CPUs, and the IPIs only happen when a problem has been detected. So, most of the time, it should have no impact on CPUs running in the tickless mode at all. It would thus appear that this solution is preferable, and that this particular performance regression has been solved.
How good is good enough?
At least, it would appear that way if it weren't for the fact that Dave still observes a slowdown, though it is much smaller than it was before. The solution is, thus, not perfect, but Paul is inclined to declare victory on this one anyway:
Dave still isn't entirely happy with the situation; he noted that the regression is closer to 10% with the default settings, and said "This change of existing behavior removes some of the benefits that my system gets out of RCU". Paul responded that he is "not at all interested in that micro-benchmark becoming the kernel's straightjacket" and sent in a pull request including the second version of the fix. If there are any real-world workloads that are adversely affected by this change, he suggested, there are a number of ways to tune the system to mitigate the problem.
Regardless of whether this issue is truly closed or not, this regression demonstrates some of the hazards of kernel development on contemporary systems. Scalability pressures lead to complex code trying to ensure that everything happens at the right time with minimal overhead. But it will never be possible for a developer to test with all possible workloads, so there will often be one that shows a surprising performance regression in response to a change. Fixing one workload may well penalize another; making changes that do not hurt any workloads may be close to impossible. But, given enough testing and attention to the problems revealed by the tests, most problems can hopefully be found and corrected before they affect production users.
Reworking kexec for signatures
The kernel execution (kexec) subsystem allows a running kernel to switch to a different kernel. This allows for faster booting, as the system firmware and bootloader are bypassed, but it can also be used to produce crash dumps using Kdump. However, as Matthew Garrett explained on his blog, kexec could be used to circumvent UEFI secure boot restrictions, which led him to propose a way to disable kexec on secure boot systems. That was not terribly popular, but a more recent patch set would provide a path for kexec to only boot signed kernels, which would solve the problem Garrett was trying to address without completely disabling the facility.
The kexec subsystem consists of the kexec_load() system call that loads a new kernel into memory, which can then be booted using the reboot() system call. There is also a kexec command that will both load the new kernel and boot it, without entering the system firmware (e.g. BIOS or UEFI) and bootloader.
But the UEFI firmware is what enforces the secure boot restrictions. Garrett was concerned that a Linux kernel could be used to boot an unsigned (and malicious) Windows operating system by way of kexec because it circumvents secure boot. That might lead Microsoft to blacklist the keys used to sign Linux bootloaders, which would make it difficult to boot Linux on commodity hardware. Using kexec that way could affect secure-booted Linux systems too, of course, though Microsoft might not be so quick to revoke keys under those circumstances.
In any case, Garrett eventually removed the kexec-disabling portion of his patch set (though he strongly suggested that distributions should still disable kexec if they are going to support secure boot). Those patches have not been merged (yet?). More recently, Vivek Goyal has put together a patch set that is intended to address Garrett's secure boot concerns, but would also protect systems that only allow loading signed kernel modules. As Garrett showed in his blog post, that restriction can be trivially bypassed by executing a new kernel that simply alters the sig_enforce sysfs parameter in the original kernel's memory and then jumps back to that original kernel.
Goyal's patches start down the path toward being able to restrict kexec so that it will only load signed code. To that end, this patch set defines a new system call:
```c
long kexec_file_load(int kernel_fd, int initrd_fd,
                     const char *cmdline_ptr, unsigned long cmdline_len,
                     unsigned long flags);
```

It will load the kernel executable from the kernel_fd file descriptor and will associate the "initial ramdisk" (initrd) from the initrd_fd descriptor. It will also associate the kernel command line passed as cmdline_ptr and cmdline_len. The initrd and command-line information will be used when the kernel is actually booted. This contrasts with the existing kexec system call:
```c
long kexec_load(unsigned long entry, unsigned long nr_segments,
                struct kexec_segment *segments, unsigned long flags);
```

It expects to get segments that have been parsed out of a kernel binary in user space and to just blindly load them into memory. As can be seen, kexec_file_load() puts the kernel in the loop so that it can (eventually) verify what is being loaded and executed.
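To make that difference concrete, user space calling the proposed interface might look roughly like the sketch below; since the patches were still an RFC, the syscall number is a placeholder and the flags value is simply zero:

```c
/* Sketch of a user-space caller of the proposed kexec_file_load().
 * __NR_kexec_file_load is a placeholder: no number had been assigned
 * while the patch set was still an RFC. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

static long load_kernel(const char *kernel_path, const char *initrd_path,
			const char *cmdline)
{
	int kernel_fd = open(kernel_path, O_RDONLY);
	int initrd_fd = open(initrd_path, O_RDONLY);

	if (kernel_fd < 0 || initrd_fd < 0)
		return -1;

	/* The kernel reads, parses, and (eventually) verifies the images
	 * itself; user space never splits the kernel binary into segments
	 * the way the kexec_load() interface requires. */
	return syscall(__NR_kexec_file_load, kernel_fd, initrd_fd,
		       cmdline, strlen(cmdline) + 1, 0UL);
}
```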
As one of the segments that get loaded, there is a standalone executable object, called "purgatory", that runs between the two kernels. At reboot() time, the "exiting" kernel jumps to the purgatory code. Its main function is to check the SHA-256 hashes of the other segments that were loaded. If those have not been corrupted, booting can proceed. The purgatory code will copy some memory to a backup region and do some architecture-specific setup, then jump to the new kernel.
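The integrity check that purgatory performs is conceptually simple; a rough sketch follows, with the caveat that purgatory is a standalone binary, so the segment descriptor, the sha256() helper, and even memcmp() stand in for code it must carry itself:

```c
/* Conceptual sketch of purgatory's hash check; all names are invented. */
struct segment_desc {
	const void *addr;
	unsigned long len;
	unsigned char expected_sha256[32];
};

static int verify_segments(const struct segment_desc *segs, int nr)
{
	unsigned char digest[32];
	int i;

	for (i = 0; i < nr; i++) {
		/* Standalone SHA-256 routine, like the code copied into
		 * arch/x86/purgatory/sha256.c */
		sha256(segs[i].addr, segs[i].len, digest);
		if (memcmp(digest, segs[i].expected_sha256, 32))
			return -1;	/* corrupted segment: do not boot */
	}
	return 0;
}
```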
The purgatory code currently lives in kexec-tools, but if the kernel is to take responsibility for setting up the segments from the kernel binary and initrd, it will need a purgatory of its own. Goyal's patch set adds that code for x86 to arch/x86/purgatory/.
Goyal also copied code from crypto/sha256_generic.c into the purgatory directory. It's clear he would rather use that code directly from the crypto/ directory, but he could not find a way to do so:
So instead of doing #include on sha256_generic.c I just copied relevant portions of code into arch/x86/purgatory/sha256.c. Now we shouldn't have to touch this code at all. Do let me know if there are better ways to handle it.
While the patch set is at version 3 (earlier versions: v2, v1), it is still a "request for comment" (RFC) patch. There are various unfinished pieces, with signature verification topping the list. So far, the new facility is only available for the x86_64 architecture and bzImage kernel images. Adding other architectures and support for the ELF kernel format still remain to be done. There is also a need for some documentation, including a man page.
Goyal did explain his vision for how the signature verification will work. It is based on David Howells's work on verifying the signatures for loadable kernel modules. Essentially, the signature will be verified when kexec_file_load() is called. That is also when the SHA-256 hashes for each segment are calculated and stored in the purgatory segment. So, all purgatory has to do is verify the hashes (which it already does to avoid running corrupted code) to ensure that only a properly signed kernel will be executed.
There have been plenty of comments on each version of the patch set, but most of those on v3 were technical suggestions for improving the code. So far, there have been no complaints about the overall idea, which means we may well see the ability to require cryptographic signatures on the kernels passed to kexec added as a feature sometime in the next year—hopefully sooner than that. It would be a nice feature to have when Garrett's secure boot patches get merged.
Questioning EXPORT_SYMBOL_GPL()
There have been arguments about the legality of binary-only kernel modules for almost as long as the kernel has had loadable module support. One of the key factors in this disagreement is the EXPORT_SYMBOL_GPL() directive, which is intended to keep certain kernel functions out of the reach of proprietary modules. A recent discussion about the merging of a proposed new kernel subsystem has revived some questions about the meaning and value of EXPORT_SYMBOL_GPL() — and whether it is worth bothering with at all.

Loadable modules do not have access to every function or variable in the kernel; instead, they can only make use of symbols that have been explicitly "exported" to them by way of the EXPORT_SYMBOL() macro or one of its variants. When plain EXPORT_SYMBOL() is used, any kernel module is able to gain access to the named symbol. If the developer uses EXPORT_SYMBOL_GPL() instead, the symbol will only be made available to modules that have declared that they are distributable under a GPL-compatible license. EXPORT_SYMBOL_GPL() is meant to mark kernel interfaces that are deemed to be so low-level and specific to the kernel that any software that uses them must perforce be a derived product of the kernel. The GPL requires that derived products, if distributed, be made available under the same license; EXPORT_SYMBOL_GPL() is thus a statement that the named symbol should only be used by GPL-compatible code.
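As a minimal sketch, with made-up function names rather than code from any real subsystem, here is what the two export variants look like at the point where a symbol is defined. The GPL-only restriction is enforced against the MODULE_LICENSE() declaration of the module that tries to use the symbol:

    #include <linux/module.h>

    /* Any module, whatever its license, may call this. */
    int frob_get_stats(void)
    {
            return 0;
    }
    EXPORT_SYMBOL(frob_get_stats);

    /* Only modules declaring a GPL-compatible MODULE_LICENSE() may call this. */
    void frob_internal_reset(void)
    {
            /* low-level, kernel-internal operation */
    }
    EXPORT_SYMBOL_GPL(frob_internal_reset);

    MODULE_LICENSE("GPL");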
It is worth noting that nobody has said that symbols exported with plain EXPORT_SYMBOL() can be freely used by proprietary code; indeed, a number of developers claim that all (or nearly all) loadable modules are derived products of the kernel regardless of whether they use GPL-only symbols or not. In general, the kernel community has long worked to maintain a vague and scary ambiguity around the legal status of proprietary modules while being unwilling to attempt to ban such modules outright.
Shared DMA buffers
Recent years have seen a fair amount of development intended to allow device drivers to share DMA buffers with each other and with user space. A common use case for this capability is transferring video data directly from a camera to a graphics controller, allowing that data to be displayed with no need for user-space involvement. The DMA buffer-sharing subsystem, often just called "dma-buf," is a key part of this functionality. When the dma-buf code was merged in 2012, there was a lengthy discussion on whether that subsystem should be exported to modules in the GPL-only mode or not.
The code as originally written used EXPORT_SYMBOL_GPL(). A representative from NVIDIA requested that those exports be changed to EXPORT_SYMBOL() instead. If dma-buf were to be GPL-only, he said, the result would not be to get NVIDIA to open-source its driver. Instead:
At the time, a number of the developers involved evidently discussed the question at the Embedded Linux Conference and concluded that EXPORT_SYMBOL() was appropriate in this case. Other developers, however, made it clear that they objected to the change. No resolution was ever posted publicly, but the end result is clear: the dma-buf symbols are still exported GPL-only in current kernels.
On the fence
More recently, a major enhancement to dma-buf functionality has come along in the form of the fence synchronization subsystem. A "fence" is a primitive that indicates whether an operation on a dma-buf has completed or not. For the camera device described above, for example, the camera driver could use a fence to signal when the buffer actually contains a new video frame. The graphics driver would then wait for the fence to signal completion before rendering the buffer to the display; it, in turn, could use a fence to signal when the rendering is complete and the buffer can be reused. Fences thus sound something like the completion API, but there is additional complexity there to allow for hardware signaling, cancellation, fences depending on other fences, and more. All told, the fence patches add some 2400 lines of code to the kernel.
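To illustrate just the basic pattern, the following is a toy user-space analogue of a fence built on pthreads. It is emphatically not the kernel fence API, which layers hardware signaling, cancellation, and fence-on-fence dependencies over this simple wait/signal idea, but it shows the producer/consumer handshake described above:

    /* Conceptual illustration only; build with: cc -pthread fence_demo.c */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_fence {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        bool signaled;
    };

    static struct toy_fence frame_ready = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false
    };

    static void toy_fence_signal(struct toy_fence *f)
    {
        pthread_mutex_lock(&f->lock);
        f->signaled = true;
        pthread_cond_broadcast(&f->cond);
        pthread_mutex_unlock(&f->lock);
    }

    static void toy_fence_wait(struct toy_fence *f)
    {
        pthread_mutex_lock(&f->lock);
        while (!f->signaled)
            pthread_cond_wait(&f->cond, &f->lock);
        pthread_mutex_unlock(&f->lock);
    }

    static void *producer(void *arg)
    {
        (void)arg;
        /* "Camera": fill the shared buffer, then signal the fence. */
        toy_fence_signal(&frame_ready);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, producer, NULL);

        /* "Graphics": wait for the frame before rendering it. */
        toy_fence_wait(&frame_ready);
        printf("frame complete, safe to render\n");

        pthread_join(t, NULL);
        return 0;
    }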
The fence subsystem is meant to replace Android-specific code (called "Sync") with similar functionality. Whether that will happen remains to be seen; it seems that the Android developers have not said whether they will be able to use it, and, apparently, not all of the needed functionality is there. But there is another potential roadblock here: GPL-only exports.
The current fence code does not export its symbols with EXPORT_SYMBOL_GPL(); it mirrors the Sync driver (which is in the mainline staging area) in that regard. While he was reviewing the code, driver core maintainer Greg Kroah-Hartman requested that the exports be changed to GPL-only, saying that GPL-only is how the rest of the driver core has been done. That request was not well received by Rob Clark, who said:
(A "syncpt" is an NVIDIA-specific equivalent to a fence).
Greg proved to be persistent in his request, though, claiming that GPL-only exports have made the difference in bringing companies around in the past. Graphics maintainer Dave Airlie, who came down hard on proprietary graphics modules a few years ago, disagreed here, saying that the only thing that has really made the difference has been companies putting pressure on each other. Little else, he said, has been effective despite claims that some in the community might like to make. His vote was for "author's choice" in this case.
Is EXPORT_SYMBOL_GPL() broken?
Dave went on to talk about the GPL-only export situation in general:
The last sentence above might be the most relevant in the end. For years, the kernel community has muttered threateningly about proprietary kernel modules without taking much action to change the situation. So manufacturers continue to ship such modules without much fear of any sort of reprisal. Clearly the community tolerates these modules, regardless of its (often loud) statements about the possible legal dangers that come with distributing them.
Even circumvention of EXPORT_SYMBOL_GPL() limitations seems to be tolerated in the end; developers will complain publicly (sometimes) when it happens, but no further action ensues. So it should not be surprising if companies are figuring out that they need not worry too much about their binary-only modules.
So it is not clear that EXPORT_SYMBOL_GPL() actually helps much at this point. It has no teeth to back it up. Instead, it could be seen as a sort of speed bump that makes life a bit more inconvenient for companies shipping binary-only modules. A GPL-only export lets developers express their feelings, and it may slow things down a bit, but, in many cases at least, these exports do not appear to be changing behavior much. The fence patches, in particular, are aimed at embedded devices, where proprietary graphics drivers are, unfortunately, still the norm. Making the interface be GPL-only is probably not going to turn that situation around.
Perhaps one could argue that EXPORT_SYMBOL_GPL() is a classic example of an attempt at a technical solution to a social problem. If proprietary modules are truly a violation of the rights of kernel developers, then, sooner or later, some of those developers are going to need to take a stand to enforce those rights. The alternative is a world where binary-only kernel drivers are distributed with tacit approval from the kernel community, regardless of how many symbols are marked as being EXPORT_SYMBOL_GPL().
As with the dma-buf case, no resolution to the question of how symbols should be exported from the fence subsystem has been posted. But Greg has said that he will not give up on this particular issue, and, as the maintainer who would normally accept a patch set in this area, he is in a fairly strong position to back up his views. We may have to wait until this code is actually merged to see which position will ultimately prevail. But it seems that, increasingly, some developers will wonder if it even matters.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
Sandstorm personal cloud platform
Containerization is popular these days, with technologies like Docker having made a big splash recently. But earlier containerization efforts like LXC (which Docker was at one time based on) and OpenVZ laid the foundation, and newer efforts like Sandstorm may take the technique even further. Sandstorm is an early-stage project that bills itself as a "personal cloud platform". It is targeted at solving some of the same kinds of problems that FreedomBox is trying to solve, such as allowing users to take control of their own data.
Containers are not really Linux distributions per se, but they solve some of the same problems, particularly in the area of packaging. Both Docker and Sandstorm provide a mechanism to distribute Linux applications that is quite different from what users of traditional distributions have become accustomed to. Each Sandstorm container is, in some sense, a mini-distribution focused on providing just the environment needed by a particular application. All of the binaries, libraries, configuration, and so on, are collected up into the container so that they are available to a particular application instantiation, but not to other applications.
That isolation is why lead developer Kenton Varda (who developed the open source version of Protocol Buffers while at Google) calls the containers "sandboxes". The idea is to isolate each application instance in its sandbox so that it cannot affect any of the other applications running in sandboxes on the same host. There is still a fair way to go before that goal is realized. As the Caveats section on the Sandstorm GitHub page indicates: "At present, Sandstorm's sandboxing is incomplete. Malicious code probably can escape. Malicious code definitely can DoS your server by consuming all available resources."
For applications to work in this new, sandboxed world, they will need to be packaged up differently, using the Sandstorm tools. The porting guide shows how to use those tools. The basic mechanism is to install the application on a development server, which observes the application in action and records all of the files that it uses during its execution. That list of files, potentially with tweaks made by the developer, can then be used to create a container that will run the application when it gets installed on a Sandstorm server.
The vision is to have these applications available in an "App Store" of sorts, where the applications can be easily installed by users onto their own Sandstorm server (or as instances under their control on a shared Sandstorm server). There would be no command-line interaction required, as "the only way you ever interact with an app is through your web browser", Varda said in the Sandstorm Development Google Group. He was answering a question about the differences between Sandstorm and Docker, and continued:
This leads to several very different design choices:
- Containers only run while you have them open in your browser. Once you close the browser, the container is shut down, and then restarted the next time you return. (This implies that containers have to start up very fast.)
- Most of the container's filesystem is read-only, so that all containers can share common files (such as the application binary). Only /var is writable, and that's where it stores data specific to the particular document.
- Containers do not usually run any background daemons. Just a web server for the app itself.
- Containers do not use OS-level user accounts.
- HTTP requests going into the container must pass through the Sandstorm front-end, which handles authentication and access control so that the application doesn't have to. Thus apps are easier to write and many kinds of possible security bugs in apps are avoided.
The project is quite new, as it was announced by Varda on March 24. The plan is for Sandstorm to use Varda's follow-up to Protocol Buffers, called Cap'n Proto, for remote procedure calls (RPCs) between the sandboxed applications. The permissions for doing so would be managed through a powerbox-like graphical user interface (GUI). Currently, Sandstorm's API only allows sandboxed applications to handle HTTP and WebSocket requests, but much more is planned.
The big picture is all about users taking control of their own data, so that it can't be misappropriated, spied upon, data mined, or simply vanish if the service where it is stored decides to close up shop. There's a lot to do to get there. Even with a platform that handles problems like security and authentication, as Sandstorm is envisioned to do, there will need to be a large number of cooperating applications to make for a viable ecosystem. The current crop of Sandstorm apps is small, with just a handful of simple apps that can be installed either on local Sandstorm servers or on the managed instances that those who have signed up and been invited to join the Alpha test can use.
Overall, Sandstorm is an interesting idea of what user-controlled services could look like down the road. It remains to be seen if enough users actually care about that control and are willing to do something about it. The promise of total app isolation, such that malicious apps can do no real damage, is intriguing, as is having a user-friendly way to set it all up and get it running. There's a lot of promise, but it may be difficult to nurture a robust, cooperating app ecosystem, which could make for a fairly large hurdle for the project. In any case, it is an ambitious vision and it will be interesting to watch where Sandstorm goes from here.
[ Thanks to Jim Garrison, who reminded us about a topic that has been on our list for a while now. ]
Brief items
Distribution quotes of the week
Distribution News
Debian GNU/Linux
Debian switching back to Glibc
Aurelien Jarno reports that the Debian Project is switching back to the GNU C Library and will no longer ship the EGLIBC fork. The reason is simple: the changes in the Glibc project mean that EGLIBC is no longer needed and is no longer under development. "This has resulted in a much more friendly development based on team work with good cooperation. The development is now based on peer review, which results in less buggy code (humans do make mistakes). It has also resulted in things that were clearly impossible before, like using the same repository for all architectures, and even getting rid of the ports/ directory."
Ubuntu family
Ubuntu 13.10 (Saucy Salamander) reaches End of Life
Ubuntu 13.10 was released on October 17, 2013 and will reach its end-of-life on July 17, 2014. The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS.
Newsletters and articles of interest
Distribution newsletters
- Last Week in CyanogenMod (June 19)
- DistroWatch Weekly, Issue 564 (June 23)
- Ubuntu Weekly Newsletter, Issue 373 (June 22)
Whatever Happened to These Red-Hot Linux Distros? (Linux.com)
Carla Schroder looks at three distributions that were once very popular. "Way back around 2003 entrepreneur and technologist Warren Woodford released the first version of SimplyMEPIS. Mr. Woodford felt that the popular desktop Linux distros had too many rough edges, so he built his own sleek distro based on Debian and KDE 3.1.2. New releases appeared every 6-12 months, and each release was more polished and user-friendly. Nice helper utilities like MEPIS X-Windows Assistant, MEPIS System Assistant, and MEPIS Network Assistant made system administration easier. It hit the upper range of the DistroWatch rankings and stayed there for several years."
A Linux distribution for science geeks (Opensource.com)
Amit Saha introduces Fedora Scientific, a Fedora spin. "If you use open source software tools such as GNU Octave, IPython, gnuplot, and libraries such as SciPy and GNU Scientific library in your work—and you write papers and reports in LaTeX—Fedora Scientific is for you. When you install it, you get these and a number of other applications that you may be using to get your scientific work done. The Fedora Scientific guide aims to help you learn about the included software. It features pointers to resources, so you can learn more about them."
Linux Deepin Brings Mac-Like Sensibility to the Linux Desktop (Linux.com)
Over at Linux.com, Jack Wallen reviews Linux Deepin. "But then along comes Linux Deepin, a distribution from China that looks to upturn the Linux desktop with an almost Apple-like sensibility. Linux Deepin offers a keen UI design that outshines most every desktop you’ve experienced. Along with that total redesign of the UI, comes a few bonus apps that might easily steal the show from most default apps."
Page editor: Rebecca Sobol
Development
Opera, Blink, and open source
Opera Software released the latest Linux build of its web browser this week, the first Linux version to be based on Google's Blink rendering engine (Blink-based Mac and Windows builds have already been available for several months). However one feels about the browser itself—which, like previous versions of Opera, is a closed-source product—the release is also interesting because Opera remains one of the comparatively few outside projects making use of Blink. When Blink was forked from WebKit in 2013, many in the development community expressed concern that the result would be a more fractured web-runtime landscape that would pose increased challenges for open source projects.
The new Opera release comes a year after the company announced that it was dropping its own HTML rendering engine to adopt Blink instead. In fact, Opera had initially announced that it would switch to the WebKit engine—only to be surprised by Google's sudden announcement that it was walking away from the WebKit project to develop Blink. Opera's decision may have been a straightforward cost-saving move; the company certainly has its work cut out for it, since its browser is neither the default on any desktop operating system nor an open-source application like the market leaders Chrome/Chromium and Firefox.
At the time, Google's decision to fork WebKit into Blink was met with no small amount of consternation in the open source community, including WebKit contributors and various downstream projects. The concern was two-fold: first, that without Google's contributions and influence in WebKit, Apple would dominate WebKit development and governance (perhaps to the detriment of other projects), and second, that Blink would diverge significantly from WebKit, thus splintering the single WebKit-using community into factions with incompatible renderers—an outcome that seemingly favored no one.
In September, Igalia's Juan Sanchez told LinuxCon North America that he suspected that one of the goals of Blink would be to streamline the engine specifically to suit the needs of Chrome—eliminating, among other things, the "ports" system that helped to ensure WebKit ran on a variety of operating systems and frameworks. That, in turn, might make Blink less useful to outside projects.
In the year since the split, few downstream projects have adopted Blink. Exact numbers are hard to come by, but most of the downstream projects that are known to exist tend to be proprietary freeware browsers. Opera, for its part, is taking the commendable steps of sending patches upstream and publicly documenting them, which certainly not all derivative browser-makers do. At present, Blink may still have fewer outside contributors than WebKit, although one change since Sanchez's talk is that Blink has adopted a more formal committer-access process, like the one used by WebKit, in place of the old informal review system.
But it may still be the case that WebKit's broader contributor base (including two large browser makers) essentially forced the project to produce a reusable web-runtime component as its deliverable end-product, and that this is what led to its success. Blink, on the other hand, is developed primarily within Google, by the Chrome development team, and released on Chrome's time-based update schedule.
Of course, there are other projects—like Android—that are developed primarily by Google employees. But if Blink is positioned to be a reusable library or platform, rather than a finished product, there are different concerns for those on the other side of the Googleplex wall. Perhaps a third-party rebuild akin to CyanogenMod or Replicant will eventually arise for Blink as well, providing a less-Google-specific result. But Blink being bound tightly to Chrome development is not problematic simply because it results in fewer derivative works. The situation also allows the development team to implement changes that would be controversial for outside contributors—such as has been seen with Android's increasing reliance on the proprietary Google Play Services module for new functionality.
Of course, time could still make up the difference between the sizes of the Blink and WebKit development communities. But it has been slow going thus far. In September 2013, Digia also announced that it would be migrating, switching Qt's web runtime from WebKit to Blink. The status of that effort is far from clear; the Blink-based runtime was left out of the Qt 5.3 release in May (in favor of the existing WebKit-based runtime), and the developers' mailing list remains relatively quiet. In earlier Tizen releases, Intel had been using WebKit as the basis for its web runtime, but in late 2013 the company started its own web runtime project, Crosswalk, which incorporates pieces from Firefox, Blink, and several other sources.
Both of those projects, of course, are developing products that differ significantly from the Chrome browser. Opera, for its part, may find Blink to be just what the doctor ordered, in that it serves as a modern, actively-developed web rendering engine designed with desktop browsing in mind.
But there is one other disruption to the open-source community potentially hiding in the Blink-uptake story. Opera, although it has never enjoyed the same deployment numbers as Firefox and Chrome, has historically been a valuable ally to those projects in the web-standards development process. Apple and Microsoft, by virtue of being OS makers, have a leg up on third-party browsers, and Opera has frequently been the third voice to speak up in favor of a more open approach to some web specification problem, joining Google and Mozilla (see WebRTC for a recent example). Now that Opera has stopped development of its own HTML rendering engine, there is one fewer independent implementation to prove the viability of an open standard.
A year is not long in the lifespan of an open source project, so Blink may have many accolades and successes in its future. But the lengthy adoption period by those who publicly switched over to the new engine in 2013, Opera included, might bolster the concerns of those who viewed Blink as a serious problem when it first split away from WebKit. So far, there have not been complaints about Apple using its dominance in the Google-less WebKit project to inflict harm, but Blink also has a way to go before it offers a smoothly transitioned alternative.
Brief items
Quote of the week
Somewhere in the middle of this, Ubuntu decides to break scrollbars using a Gtk+ plugin. Your first hint that this has happened is when Ubuntu users start filing bug reports.
In the meantime, the layout rules for GtkGrid change again. When distributions update Gtk+, your program looks awful. You work around that in your source code, but distributions do not release new versions of your program until its next release.
Your program works with multiple screens. Or rather, it used to work with multiple screens. Then Gtk+ dropped support for it without notice.
30 years of X
The X.Org Foundation reminds us that the first announcement for the X Window System came out on June 19, 1984. "The X developers have pushed the boundaries and moved X from a system originally written to run on the CPU of a VAX VS100 to one that runs the GUI on today's laptops with 3D rendering capabilities. Indeed, X predates the concept of a Graphics Processing Unit (GPU) as we currently know it, and even the company that popularized this term in 1999, Nvidia." Congratulations to one of the oldest and most successful free software projects out there.
PyPy3 2.3.1 released
The PyPy3 2.3.1 release has been announced. This is the first stable release that supports version 3 of the Python language; it also has a number of performance improvements.
Go 1.3 available
Version 1.3 of the Go language has been released. As the announcement notes, this update includes performance improvements, support for the Native Client (NaCl) execution sandbox, more precise garbage collection, and faster linking for large projects. It also adds support for several new environments, like Solaris, DragonFly BSD, and, of course, Plan 9.
NetworkManager 0.9.10 released
NetworkManager 0.9.10 is out with a long list of new features including a curses-based management interface, more modular device support, data center bridging support, many new customization options, better cooperation with other network management tools, and more. (Correction: the release is almost out, being planned for "later this week").
nftables 0.3 available
Version 0.3 of nftables has been released. This version introduces several syntax changes, including a more compact form for queue actions and the ability to provide the multiqueue as a range. New features include a new transaction infrastructure that supports fully atomic updates for all objects, and the netlink event monitor for watching ruleset events.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (June 20)
- GNU Toolchain Update (June 22)
- Haskell Weekly News (June 18)
- LLVM Weekly (June 23)
- OCaml Weekly News (June 24)
- OpenStack Community Weekly Newsletter (June 20)
- Perl Weekly (June 23)
- PostgreSQL Weekly News (June 22)
- Python Weekly (June 19)
- Ruby Weekly (June 20)
- This Week in Rust (June 22)
- Tor Weekly News (June 25)
- Wikimedia Tech News (June 23)
Microformats turn 9 years old
At his blog, Tantek Çelik writes about the ninth birthday of the microformats effort, which seeks to express semantic information in web pages through the use of attribute names within HTML elements, in contrast to comparatively "heavyweight" schemes like RDFa and Microdata. Çelik notes that the community-driven process of microformats' development seems to have enabled its survival. "Looking back nine years ago, none of the other alternatives promoted in the 2000s (even by big companies like Google and Yahoo) survive to this day in any meaningful way", he says. "Large companies tend to promote more complex solutions, perhaps because they can afford the staff, time, and other resources to develop and support complex solutions. Such approaches fundamentally lack empathy for independent developers and designers, who don't have time to keep up with all the complexity." In addition to his analysis about the past nine years (including an exploration of the down side of email-based discussions), Çelik takes the occasion to announce that microformats2 has now been upgraded to the status of ready-to-use recommendation, and points site maintainers to tools to support the transition.
Ancell: GTK+ applications in Unity 8 (Mir)
At his blog, Robert Ancell provides a status report about the work he and Ryan Lortie have been doing to ensure that recent GTK+ applications run in Ubuntu's Unity 8 environment. While there is still work to be done, including fixes for cursor changes, fullscreen support, and subwindow focusing, he notes that there is a personal package archive (PPA) available for testing libraries and select applications. "The Mir backend is currently on the wip/mir branch in the GTK+ git repository. We will keep developing it there until it is complete enough to propose into GTK+ master. We have updated jhbuild to support Mir so we can easily build and test this backend going forward."
Page editor: Nathan Willis
Announcements
Brief items
US Supreme Court rules against software patents
In April, LWN reported on the case of Alice Corp. v. CLS Bank International, which addresses the issue of whether ideas implemented in software are patentable. The ruling [PDF] is now in: a 9-0 decision against patentability. "We hold that the claims at issue are drawn to the abstract idea of intermediated settlement, and that merely requiring generic computer implementation fails to transform that abstract idea into a patent-eligible invention."
No more updates for Freecode
The Freecode site (once known as Freshmeat) has announced that it is no longer updating entries. "Freecode has been the Web's largest index of Linux, Unix and cross-platform software, and mobile applications. Thousands of applications, which are preferably released under an open source license, were meticulously cataloged in the Freecode database, but links to new applications and releases are no longer being added. Each entry provides a description of the software, links to download it and to obtain more information, and a history of the project's releases."
Articles of interest
Steps to diversity in your open source group (Opensource.com)
Opensource.com covers a talk by Coraline Ehmke about diversity in open source. "She came at the topic from the angle of diversity as a value of the culture of our groups. By now we've heard from many open source thought leaders on why we need diversity in open source—arguments mainly center around the more people of the greater population that we include in our groups, and make feel welcome to our groups, the better our results will be. Why? Coraline points to a study indicating that groupthinking is a real thing—we tend to agree with and value the things that are said and done by other people that are simply like us. So, the presence of someone different in our group increases accuracy by reducing knee-jerk agreements."
New Books
The Software Test Engineer's Handbook, 2nd Ed.: New from Rocky Nook
Rocky Nook has released "The Software Test Engineer's Handbook, 2nd Edition" by Graham Bath and Judy McKay.
Calls for Presentations
CFP Deadlines: June 26, 2014 to August 25, 2014
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
June 30 | November 18–November 20 | Open Source Monitoring Conference | Nuremberg, Germany |
July 1 | September 5–September 7 | BalCCon 2k14 | Novi Sad, Serbia |
July 4 | October 31–November 2 | Free Society Conference and Nordic Summit | Gothenburg, Sweden |
July 5 | November 7–November 9 | Jesień Linuksowa | Szczyrk, Poland |
July 7 | August 23–August 31 | Debian Conference 2014 | Portland, OR, USA |
July 11 | October 13–October 15 | CloudOpen Europe | Düsseldorf, Germany |
July 11 | October 13–October 15 | Embedded Linux Conference Europe | Düsseldorf, Germany |
July 11 | October 13–October 15 | LinuxCon Europe | Düsseldorf, Germany |
July 11 | October 15–October 17 | Linux Plumbers Conference | Düsseldorf, Germany |
July 14 | August 15–August 17 | GNU Hackers' Meeting 2014 | Munich, Germany |
July 15 | October 24–October 25 | Firebird Conference 2014 | Prague, Czech Republic |
July 20 | January 12–January 16 | linux.conf.au 2015 | Auckland, New Zealand |
July 21 | October 21–October 24 | PostgreSQL Conference Europe 2014 | Madrid, Spain |
July 24 | October 6–October 8 | Qt Developer Days 2014 Europe | Berlin, Germany |
July 24 | October 24–October 26 | Ohio LinuxFest 2014 | Columbus, Ohio, USA |
July 25 | September 22–September 23 | Lustre Administrators and Developers workshop | Reims, France |
July 27 | October 14–October 16 | KVM Forum 2014 | Düsseldorf, Germany |
July 27 | October 24–October 25 | Seattle GNU/Linux Conference | Seattle, WA, USA |
July 30 | October 16–October 17 | GStreamer Conference | Düsseldorf, Germany |
July 31 | October 23–October 24 | Free Software and Open Source Symposium | Toronto, Canada |
August 1 | August 4 | CentOS Dojo Cologne, Germany | Cologne, Germany |
August 15 | September 25–September 26 | Kernel Recipes | Paris, France |
August 15 | August 25 | CentOS Dojo Paris, France | Paris, France |
August 15 | November 3–November 5 | Qt Developer Days 2014 NA | San Francisco, CA, USA |
August 15 | October 20–October 21 | Tizen Developer Summit Shanghai | Shanghai, China |
August 18 | October 18–October 19 | openSUSE.Asia Summit | Beijing, China |
August 22 | October 3–October 5 | PyTexas 2014 | College Station, TX, USA |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: June 26, 2014 to August 25, 2014
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
June 21–June 28 | YAPC North America | Orlando, FL, USA |
June 24–June 27 | Open Source Bridge | Portland, OR, USA |
July 1–July 2 | Automotive Linux Summit | Tokyo, Japan |
July 5–July 11 | Libre Software Meeting | Montpellier, France |
July 5–July 6 | Tails HackFest 2014 | Paris, France |
July 6–July 12 | SciPy 2014 | Austin, Texas, USA |
July 8 | CHAR(14) | near Milton Keynes, UK |
July 9 | PGDay UK | near Milton Keynes, UK |
July 14–July 16 | 2014 Ottawa Linux Symposium | Ottawa, Canada |
July 18–July 20 | GNU Tools Cauldron 2014 | Cambridge, England, UK |
July 19–July 20 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
July 20–July 24 | OSCON 2014 | Portland, OR, USA |
July 21–July 27 | EuroPython 2014 | Berlin, Germany |
July 26–August 1 | Gnome Users and Developers Annual Conference | Strasbourg, France |
August 1–August 3 | PyCon Australia | Brisbane, Australia |
August 4 | CentOS Dojo Cologne, Germany | Cologne, Germany |
August 6–August 9 | Flock | Prague, Czech Republic |
August 9 | Fosscon 2014 | Philadelphia, PA, USA |
August 15–August 17 | GNU Hackers' Meeting 2014 | Munich, Germany |
August 18–August 19 | Linux Security Summit 2014 | Chicago, IL, USA |
August 18–August 20 | Linux Kernel Summit | Chicago, IL, USA |
August 18 | 7th Workshop on Cyber Security Experimentation and Test | San Diego, CA, USA |
August 18–August 19 | Xen Developer Summit North America | Chicago, IL, USA |
August 19 | 2014 USENIX Summit on Hot Topics in Security | San Diego, CA, USA |
August 20–August 22 | USENIX Security '14 | San Diego, CA, USA |
August 20–August 22 | LinuxCon North America | Chicago, IL, USA |
August 20–August 22 | CloudOpen North America | Chicago, IL, USA |
August 22–August 23 | BarcampGR | Grand Rapids, MI, USA |
August 23–August 31 | Debian Conference 2014 | Portland, OR, USA |
August 23–August 24 | Free and Open Source Software Conference | St. Augustin (near Bonn), Germany |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol