LWN.net Weekly Edition for July 14, 2016
The state of software patents after the Alice decision
In 2014, the Alice Corp. v CLS Bank International decision by the US Supreme Court struck a significant blow against software patents. But it was not the end of the fight for advocates of reform, as Deb Nicholson from the Open Invention Network (OIN) explained in her talk at Texas Linux Fest 2016 in Austin. The Alice decision resulted in numerous patent re-examinations and in changed expectations, but it did not eliminate the threat of patent-infringement lawsuits—nor are software-patent proponents finished fighting back.
Narrowing patentability
The key facet of the ruling, Nicholson said, was that the court set out a new, two-part test to be part of the determination of whether or not an invention is patentable. The first part of the test is whether the invention is primarily an abstract concept—a mathematical formula, a mental process, an economic practice, a law of nature, and so on. The second part of the test is whether or not the non-abstract part of the claim is merely "foo on a computer" or "foo over the network."
The test, she explained in response to an audience question, is in addition to the existing tests a claimed invention must pass to be considered patentable. The invention must also be non-obvious, for example, and it must be new, be possible in the real world, and not break the law. "You can't get a patent for how you bury bodies that you've killed or how you cheat on your taxes," she said.
Together, the two halves of the Alice test mean that the court no longer considers broad or simplistic ideas implemented in software to be patentable inventions, which radically reduces the possible scope of software patents. But, Nicholson said, that doesn't mean the scope of patentability is now "small"; it's just that two years ago it was much worse.
In the wake of the ruling, though, many software patents and applications have been affected. Between July 1 and August 15, 2014, according to one study, 830 patent applications were withdrawn. New patent-infringement lawsuits are down by 40%, and in the first year, courts invalidated 286 patents or patent applications out of 345 that were reviewed. The latter statistic equals an invalidation rate of 82.9%, which, she noted, has been widely repeated—often in less precise terms. Opponents of patent reform, it seems, get a lot of leverage out of warning businesses that "80% of patents may be invalid" under the new rules.
She then cited a few examples of the patents thrown out by the Alice tests. Digitech Image Technologies, for instance, had a patent that amounted to "combining two data sets in software;" it was invalidated by a circuit court in July 2014. Planet Bingo had a "bingo over the Internet" patent that was invalidated in August 2014. buySAFE held a patent on, essentially, "creating a contractual relationship online" that was invalidated in September 2014.
Other factors
Although the Alice decision was a big step forward for reform, Nicholson pointed out that it did not take place in a vacuum. Other changes in recent history have also improved the patent landscape.
First, other court cases have revised the patent-eligibility process, too. The Mayo v Prometheus ruling said that making observations and "using a little intelligence" did not amount to a patentable invention. That case revolved around a big pharmaceutical ("Big Pharma") patent on monitoring a patient's vital signs and adjusting dosage size in response. In Octane Fitness v Icon Health, the court ruled that predatory plaintiffs who lose must pay the defendant's legal fees, which changes the profitability equation for infringement suits. The Akamai v Limelight decision struck down the notion that plaintiffs could sue the end users of a web service in addition to suing the site's owner. Subsequent cases have upheld these rulings.
Second, the U.S. Patent and Trademark Office (USPTO) has made changes of its own. It is placing a renewed focus on assessing the quality of patents, so it is instructing examiners on what to look for. The USPTO has been hosting free webinars educating participants on patent quality, and has even issued memos clarifying the two-part Alice test.
Nicholson said there has been pushback from software-patent proponents as well, however, which indicates the impact Alice has had. "I don't know if you read patent lawyers' blogs," she said, "but they're pretty hilarious." In addition to the "80% of patents are invalid" claim mentioned earlier, she cited several other reactions from patent lawyers, including one who had posted a list of "words to avoid" in a patent application (such as "computational" and "business process") and another complaining that the Alice test is "intentionally biased" against software patents.
The unchanged
Nevertheless, she said, the Alice ruling does not retroactively erase 20 years of bad software-patent history. The old, terrible patents already granted remain in force until they are invalidated in a court ruling or through a USPTO re-examination, "so they can still be used as a stick to beat people with."
Patent-infringement lawsuits are still profitable for plaintiffs and still expensive for defendants. As a result, the settlements offered by patent trolls are (intentionally) still cheaper than fighting back. When defendants do fight back, plaintiffs have an array of tactics available to make the suit more expensive. For instance, there is "discovery abuse," where the plaintiff requests heaps of essentially meaningless documents simply to make the defendant incur more costs.
The Alice case also did not make "good" software patents go away. The USPTO can still find a claim to be patentable under the new test and, in fact, with the harder test now in place, a "good" patent may be more difficult to fight. On that note, the ruling did not do anything to curb "jurisdiction shopping," either. The Eastern District of Texas is still where half of infringement cases go to trial, and it is still the friendliest venue for plaintiffs.
There are new challenges to the Alice test making their way through the court system that could roll back some of the progress of the past two years. One is the McRo v Namco case, which centers around a software patent for streamlining lip-syncing in computer animation. Many people seem to regard the patent as a good one, and a ruling might be made as early as the Federal Circuit court's fall session, with possible ramifications for other software patents.
Finally, a lot of patent-reform legislation has been proposed, but none of it has been passed. Bills such as the Innovation Act and the PATENT Act could curtail practices like discovery abuse if they are enacted.
Broader challenges
There are yet other fronts on which software-patent opponents need to be vigilant, Nicholson said. The USPTO's inter partes review (IPR) process allows anyone to challenge the validity of a patent. Now, patent-reform opponents are challenging that procedure. Big Pharma is lobbying to make IPR only accessible to "experts." Other court cases from the technology sector have challenged the constitutionality of IPRs (though IPRs were deemed constitutional) and have argued to have different standards applied in the review process than in the patent application process.
Worse yet, she said, Big Pharma has finally figured out that it has something to gain by teaming up with Big Software—or, rather, with Big Software-Patent companies. The two industries are now running joint lobbying operations.
In addition, the global landscape is still difficult for advocates of software-patent reform. The patent systems in many other countries have not caught up to the Alice decision and may not for quite some time. Japan and China are catching up to the U.S. in the number of patents granted annually. And although "Europeans love to say they don't have software patents when I go over there," she added, "they actually do." There are different levels of patentability in each country, she said. Germany recently granted a software patent to Image Stream, for example, and there are many software patents held by Nokia.
Further complicating the international challenge is the effect of treaties and trade agreements. Recently, Eli Lilly (a U.S. pharmaceutical company) sued the government of Canada for violating the North American Free-Trade Agreement (NAFTA), claiming that Canada was denying it free trade by refusing one of its patents. That suit will be heard in an international court that does not even publish its case schedule, so there is little information available. The Trans-Pacific Partnership (TPP) could have an even greater impact on patents if it is enacted.
In conclusion
Combating software patents—and other abuses of the patent system, like design patents—is a long-term process, Nicholson reminded the audience. OIN runs several programs it hopes will protect free-software developers from the ills of bad patents, such as its Linux patent pool, the License On Transfer Network, and Defensive Publications.
But Nicholson told the crowd there are other ways they can help improve the patent landscape in the long term, too. They can contribute to the campaigns run by non-profit organizations like the Electronic Frontier Foundation and the Free Software Foundation, she said. Both are working to oppose the software-oriented provisions in the TPP, for example, among their other activities.
Individuals can also be powerful advocates for change within their own companies, pushing them to embrace a defensive, rather than offensive, approach to patents. And they can voice their support for the pending patent-reform legislation to lawmakers. Finally, they can continue to advocate for free and open-source software. The more we collaborate, Nicholson said, the less we'll want to sue each other.
Docker adds orchestration and more at DockerCon 2016
DockerCon 2016, held in Seattle in June, included many new feature and product announcements from Docker Inc. and the Docker project. The main keynote of DockerCon [YouTube] featured Docker Inc. staff announcing and demonstrating the features of Docker 1.12, currently in its release-candidate phase. As with the prior 1.11 release, the new version includes major changes in the Docker architecture and tooling. Among the new features are an integrated orchestration stack, new encryption support, integrated cluster networking, and better Mac support.
The conference hosted 4000 attendees, including vendors like Microsoft, CoreOS, HashiCorp, and Red Hat, as well as staff from Docker-using companies like Capital One, ADP, and Cisco. While there were many technical and marketing sessions at DockerCon, the main feature announcements were given in the keynotes.
As with other articles on Docker, the project and product are referred to as "Docker," while the company is "Docker Inc."
Catching up: Docker 1.11
In version 1.11, the project almost entirely restructured how Docker works in order to pave the way for later features. Prior to that release, the Docker daemon, container manager, and container runtime were a unified program with a single API.
Docker 1.11 separated these functions into three pieces: the Docker Engine takes commands from the UI and passes appropriate commands to the containerd daemon, which starts each container using the runC binary. Notably, runC is the first container runtime built according to the specification from the Open Container Initiative. This restructuring caused some problems, especially with external software integration, and meant that few new features were added in 1.11.
The architecture changes also delivered some strong benefits, not the least of which was an alpha release of "native" versions for Mac and Windows platforms in March. These versions use the built-in hypervisor support included in those operating systems to run Docker under a Linux kernel, instead of using VirtualBox as the prior Docker Toolbox and other solutions did.
Docker 1.12 and built-in orchestration
In contrast to the "big break-up" in the prior version, 1.12 will involve integrating what had been separate tools into the Docker Engine. Docker founder Solomon Hykes explained how and why Docker is integrating container-orchestration features that had previously been included only as external tools. According to him, the developers felt that existing orchestration tools had "solved the problem," but were "usable only by experts." Orchestration consists of scheduling and managing deployment of containerized microservices across a group of servers.
![Solomon Hykes](https://static.lwn.net/images/2016/dcon-hykes-sm.jpg)
The goal in integrating more things into Docker was to make orchestration usable by non-experts. As such, in Docker 1.12, a full suite of orchestration tools based on Docker's previous generation of tools, primarily Swarm and Compose, will be integrated into the Docker Engine. These orchestration changes consist of four major features:
- Swarm mode
- Cryptographic node identity
- A new service API
- A built-in network routing mesh
Users can enable Swarm mode in Docker 1.12 to have each node join a named cluster of nodes. This causes the Docker Engine to start up a built-in distributed configuration store (DCS), which shares information among the nodes in the cluster using the Raft consensus algorithm. Other orchestration tools use external DCSes such as etcd or Consul to store cluster metadata. Hykes said that setting up a separate DCS was a significant barrier to deployment for many users.
The second feature, cryptographic node identity, actually encompasses a bunch of encryption features added to Swarm mode. This includes cryptographic keys identifying each node, built-in TLS-encrypted communication, and fully automated key rotation. All of that depends on an integrated public key infrastructure (PKI) feature that is now also part of Docker Engine. Hykes said that this creates a completely secure system by default.
Docker 1.12 also includes a new service API that allows developers and administrators to define applications as services, so that they can be deployed to a Swarm cluster. The services facility includes support for application health checks and auto-restart of failing containers. This seems to work very similarly to Deployments in Kubernetes.
![Andrea Luzzardi & Mike Goelzer](https://static.lwn.net/images/2016/dcon-luzzardi-goelzer-sm.jpg)
The last piece of the new orchestration stack is what Hykes called a "routing mesh." The project has added a built-in network overlay and DNS-based service discovery for containerized services, similar to CoreOS's Flannel. This new feature supports built-in load balancing and works with external load balancers. According to Hykes, this is implemented using Linux IP Virtual Server (IPVS) for performance and stability.
Simple orchestration demo
Andrea Luzzardi and Mike Goelzer of Docker Inc. demonstrated the new orchestration features by setting up a three-node Swarm and deploying services to it. Luzzardi started from a new machine running Docker 1.12, and initialized the first node:
# ssh node-1
node-1# docker swarm init
Swarm initialized: current node is now a manager
This creates a one-node "cluster." Adding more nodes to the cluster requires telling each of them to join the first node by connecting to it by DNS name on port 2377:
# ssh node-2
node-2# docker swarm join node-1:2377
This node joined a Swarm as a worker.
Deploying a containerized microservice to this cluster uses the new service command. Luzzardi showed deploying the Instavote Python container from Docker Hub, and had it listen on port 8080 in the cluster:
node-1# docker service create --name vote -p 8080:80 instavote/vote
He then showed that you could connect to the web service on any node on port 8080. The service can be "scaled" using the same service command. For example, the command below scales up to six containers by adding five more:
node-1# docker service scale vote=6
Luzzardi and Goelzer finished by showing automated redeployment of containers on node failure. They also demonstrated rolling updates of container versions.
Docker for Mac and Windows
"Native" Docker for Mac and Windows has been available since March in an invitation-only beta. Hykes introduced a new release of Docker for Mac that came about from the feedback, bugs, and test cases submitted by the beta testers. Tester reports were invaluable, especially for troubleshooting hardware compatibility.
According to Hykes, creating Docker for Mac and Windows required hiring new engineers with deep systems knowledge, which is why Docker Inc. acquired Unikernel Systems in January. The company also made use of hires out of the gaming industry for user-experience improvements. He promised a "seamless" developer experience.
Aanand Prasad, an engineer at Docker Inc., demonstrated the new Mac integration. He live-debugged the Instavote demo application, showing off being able to reload the application based on editing code in a desktop editor on the Mac. This gives Mac users a similar experience to programmers on Linux desktops.
As of DockerCon, Docker for Mac and Windows are now public betas.
Comparisons with other tools
The orchestration features in Docker 1.12 are quite similar to orchestration features offered by existing tools, such as Kubernetes and Mesos Marathon. For example, Kubernetes offers service deployment and auto-failover, encryption support, rolling updates, pluggable network overlays, and service discovery. The older version of Docker Swarm also has some of those.
This is in line with Hykes's keynote. He emphasized that Docker engineers haven't invented anything new; instead, they've made complex infrastructure that was already available easy to use. "We're making powerful things simple," he said.
Further, version 1.12 will enable Docker Inc.'s own tools to reach near-parity on orchestration with tools offered by other companies or externally governed open-source projects. Because Docker Swarm and Compose had previously lagged considerably behind competing solutions in features, this puts a lot of pressure on projects like Mesos and Kubernetes to add features and address ease-of-use issues. Kubernetes seems to be focusing on adding features; version 1.3 was released in early July and includes many new configuration options for microservices as well as enhancements to scalability.
Hykes also assured attendees that the older Swarm and Docker Compose APIs would continue to work and be supported.
Docker 1.12 is currently in its third release candidate. The Docker for Mac and Windows betas include version 1.12. Linux users will need to get the 1.12 RC by downloading the "experimental" Docker packages.
Public clouds and the future of Docker
Hykes finished up by announcing integrated public cloud tools: "Docker for AWS" and "Docker for Azure." These two offerings automate deployment of the new Docker Swarm on Amazon Web Services or Microsoft Azure, respectively, including integration with accounts, permissions, and network security. People can apply to test these by requesting an invitation on the Docker web site.
The tools and features announced at DockerCon 2016 once again change the landscape of container tools. The near-native Mac and Windows versions remove what was perhaps the largest barrier to wider developer adoption of Docker as their main deployment technology. It's possible that they also remove a strong reason for developers to move to Linux on the desktop.
The container ecosystem is still fast-moving and changing substantially every few months. While it's hard to know what to expect in the next three or four months, we know that we can expect it to be different.
[ Josh Berkus works on container technology for Red Hat. ]
Core improvements in digiKam 5.0
Version 5.0.0 of the digiKam image-management application was released on July 5. In many respects, the road from the 4.x series to the new 5.0 release consisted of patches and rewrites to internal components that users are not likely to notice at first glance. But the effort places digiKam in a better position for future development, and despite the lack of glamorous new features, some of the changes will make users' lives easier as well.
For context, digiKam 4.0 was released in May of 2014, meaning it has been over two full years since the last major version-number bump. While every free-software project is different, it was a long development cycle for digiKam, which (for example) had released 4.0 just one year after 3.0.
The big hurdle for the 5.0 development cycle was porting the code to Qt5. While migrating to a new release of a toolkit always poses challenges, the digiKam team decided to take the opportunity to move away from dependencies on KDE libraries. In many cases, that effort meant refactoring the code or changing internal APIs to directly use Qt interfaces rather than their KDE equivalents. But, in a few instances, it meant reimplementing functionality directly in digiKam.
A relatively simple example is found in what happens when the user deletes an image. In digiKam 4 and earlier, the deleted file would be moved to the KDE trash directory, removing it entirely from digiKam's internal library. In digiKam 5.0, the program now maintains an internal "trash" folder; deletions are simply staged there until the user empties the trash. For many users, this means that, for the first time, it is easy to undelete an image.
A bigger change was required for the database interface. The old digiKam releases used KDE's KIO library, primarily because early versions of SQLite (which was digiKam's database storage backend) were single-threaded and would slow digiKam to a crawl. Subsequently, however, SQLite has gained robust multi-threading support. digiKam 5.0 now talks to the database layer directly, removing another dependency. Quite a few of digiKam's image-manipulation and export plugins also used KIO; they, too, were ported away from that library, although they still follow the KDE Image Plugin Interface (KIPI) API.
Whether or not Linux users will notice any performance or resource-usage improvements as a result of the migration away from KDE libraries remains to be seen, but one major benefit of the migration work is that digiKam now runs on Mac and Windows systems with essentially full feature-parity to the Linux builds—and significantly better stability. The release announcement points out that the OS X builds still rely on some external packages provided by MacPorts, but the Windows builds can be compiled and run standalone.
In the long term, the plan is to make digiKam a pure-Qt application. That work is not yet complete, but the release notes estimate it as "at least 80%" finished.
Features
Porting aside, there are several other interesting new features in the 5.0 release. One is a tool to tweak image colors using 3D look-up tables (LUTs). This method of altering colors is an extension of how files are normally converted from one color space to another; it is most familiar to many users from the filter effects found in Instagram and other mobile-phone camera apps.
Another change is that digiKam's DNG (Digital Negative) conversion tool has been migrated into the batch-job manager. DNG was designed to be a catch-all superset of camera raw file formats, so it is usually used as a conversion target. Consequently, users are most likely to need it when importing sets of images, so not having it available for batch-processing jobs created a stumbling block.
Several of the other image-management tools, such as the metadata editor and the geolocation editor, have now been made available from every mode of the digiKam interface (that is, in the single-image-editor mode, the gallery mode, and in the lightweight "ShowFoto" editor mode). This, too, fixes an inconvenience rather than adding new functionality, but it will likely be appreciated by many.
A more substantial addition is the return of support for storing digiKam's databases in MySQL or MariaDB. The program uses several databases by default (storing preview data, for instance, separate from image metadata), but users can also configure multiple databases for different collections if they desire. More than five years ago, there was an initial implementation of MySQL support, but the developer maintaining that code departed and it began to bit-rot. That left digiKam with SQLite as its only supported database backend, which was not suitable for collections exceeding 100,000 items.
There is now a new database maintainer, who has cleaned up and modernized the code. Users can select MySQL or MariaDB as a database option from the very beginning (previously, one had to create a SQLite database first, then convert it). Remote database servers are supported in addition to local connections. Along the way, the digiKam database schemas were altered, although migration from the old schema to the new should happen automatically when upgrading.
Fundamentals
The updated database functionality may sound like a minor detail—after all, 100,000 images sounds like a lot. But that database limit actually applies to everything in digiKam's database, including tags, metadata, and even face-recognition information, so far fewer photos were needed to bump into the practical limits of SQLite. As a result, the change helps bridge the gap for high-end users—who are ostensibly digiKam's target audience. There are several other workflow improvements in the new release, such as a "lazy" resynchronization tool that will push metadata updates out to the database opportunistically rather than interrupting the user to wait for a resync, and revisions to the metadata settings panel.
The awkward truth of image management in free-software circles is that so few people actually use it. Ask any circle of active photographers in a conference or hackathon setting, and you are likely to hear that the majority simply keep track of their images in a directory hierarchy organized by date, perhaps with "star" ratings keeping track of the best pictures from within their image editor of choice.
DigiKam is, in theory, one of the best tools available for imposing more order on image collections than a filesystem alone can. But, to all appearances, factors like the limitations of SQLite and the slowness of KIO have, for several years, hindered its adoption by the users who take the most photographs. As a result, the changes in digiKam 5.0 are some of the most important that the project has implemented in some time—even if they appear low-key from the outside.
Security
Python's os.urandom() in the absence of entropy
Python applications, like those written in other languages, often need to obtain random data for purposes ranging from cryptographic key generation to initialization of scientific models. For years, the standard way of getting that data has been via a call to os.urandom(), which is documented to "return a string of n random bytes suitable for cryptographic use." An enhancement in Python 3.5 caused a subtle change in how os.urandom() behaves on Linux systems, leading to some long, heated discussions about how randomness should be obtained in Python programs. When the dust settles, Python benevolent dictator for life (BDFL) Guido van Rossum will have the unenviable task of choosing between two competing proposals.
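As a quick illustration of the interface in question (a minimal sketch, not drawn from the discussion itself), a program that needs cryptographic-quality random bytes simply calls os.urandom():

import os

# 16 random bytes suitable for cryptographic use, e.g. for a session token.
# On Linux, whether this reads /dev/urandom or calls getrandom() is exactly
# what the discussion below is about.
token = os.urandom(16)
print(token.hex())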
Blocking os.urandom()
Traditionally, os.urandom() has been implemented on Linux by opening /dev/urandom and reading the requested amount of data. This interface is non-blocking; it will not wait if the amount of entropy in the system's entropy pool is low. The implementation of /dev/urandom is such that the quality of the random data it returns will be high even if the entropy pool is depleted — with one possible exception. Immediately after the system boots, when the entropy pool will contain little or no entropy, /dev/urandom may return relatively predictable data. In most systems, this window of poor randomness is only open for a few seconds at most, but exceptions do exist.
In the Python 3.5.0 release, os.urandom() was changed to use the relatively new getrandom() system call on Linux. Unless it has been called with the GRND_NONBLOCK flag, getrandom() will wait, if need be, for the system entropy pool to be initialized. os.urandom() does not supply that flag, meaning that it can block if the entropy pool has not yet accumulated enough randomness. It seems like a relatively small change, with the prospect of being sure of living up to the "suitable for cryptographic use" promise in compensation. But one need only look at Python issue 26839 to see that the implications are not quite as simple as one would expect.
It turns out that, in some distribution configurations, Python scripts are run at the very beginning of the user-space bootstrap process. If the entropy pool is not yet ready, those scripts will block until entropy-pool initialization is complete. If there is nothing else going on, and especially if the system is booting as a virtualized guest, it may take a long time to accumulate enough entropy to proceed. In the incident that led to the bug report, the boot process simply hung for 90 seconds until systemd lost patience and killed the blocking process. That kind of behavior, created in the search for unpredictable random numbers, has quite predictable effects in the form of unhappy users.
From this sprung the bug-tracker entry referenced above, which turned into a fierce discussion on the wisdom of the API change and whether os.urandom() should return crypto-quality randomness at any cost. The discussion spilled over onto the python-dev list when 3.5 release manager Larry Hastings despaired of reaching any sort of consensus and asked Van Rossum to simply rule on the matter. The resulting thread led many participants to question whether they wanted to continue following the list at all but, in the end, it did come to some useful conclusions.
If one is doing cryptographically sensitive work early in the bootstrap process — generating an SSH host key, for example — then blocking the boot almost certainly makes sense. The consequences of the alternative — generating weak keys — can be severe. In this case, though, it turns out that such high-quality randomness was not needed. Nobody was generating keys; instead, Python was initializing its own internal random-number generator and setting up dictionary randomization to defend against hash-collision attacks. These internal calls were (inadvertently) changed when os.urandom() was changed, but there seems to be a rough consensus that they do not need blocking behavior.
So the proper fix for the observed boot hang is to do these internal initializations without blocking on the entropy pool. For Python 3.5, the os.urandom() change will also be partially reverted, in that the function will, once again, be non-blocking. It will call getrandom() with the GRND_NONBLOCK flag and, if that call fails, fall back to reading /dev/urandom as before. With these fixes in place, the blocking part of the change is effectively reverted and the immediate problem has been solved.
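For readers who want to see the shape of that fallback logic, here is a rough Python-level sketch. It is only an approximation of what CPython does internally in C, and it relies on os.getrandom() and os.GRND_NONBLOCK, which are only exposed to Python code starting with version 3.6:

import os

def urandom_nonblocking(n):
    # Try the non-blocking getrandom(); if the kernel's entropy pool is
    # not yet initialized, the call raises BlockingIOError (EAGAIN) and
    # we fall back to reading /dev/urandom, which never blocks.
    try:
        return os.getrandom(n, os.GRND_NONBLOCK)
    except BlockingIOError:
        with open("/dev/urandom", "rb") as f:
            return f.read(n)

print(len(urandom_nonblocking(16)))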
Blocking or exceptions?
That still leaves open the issue of how os.urandom() should behave; developers who are concerned about security are adamant that it should not return data when the entropy pool is not yet ready. So there is still pressure on Van Rossum (and the community as a whole) to specify blocking behavior starting with the upcoming 3.6 release. Python's benevolent dictator seems inclined to downplay the issue.
It became clear in the discussion, though, that opposition to returning questionable randomness from os.urandom() is strong. It seems likely that, in 3.6 and later releases, os.urandom() will no longer return data drawn from an uninitialized entropy pool. The question of how it will behave is, as yet, unresolved, though. In the end, Van Rossum asked the proponents of two different approaches to write up their ideas as Python enhancement proposals (PEPs); he will then choose between the two.
The first approach, favored by Victor Stinner, is to simply make os.urandom() blocking and be done with it — after ensuring that Python itself uses non-blocking behavior during its initialization. Changing to blocking behavior is arguably an incompatible change in a longstanding Python API but, as Stinner points out: "First of all, no user complained yet that 'os.urandom()' blocks. This point is currently theoretical." As long as the problems with starting Python itself are resolved, the thinking goes, there should not be problems for other users.
The alternative comes from Nick Coghlan. With this proposal, os.urandom() will raise a BlockingIOError exception if random data cannot be had without blocking. Adding a new exception to an established API has its own hazards; no existing code will be expecting that exception, so surprising explosions might result. But, for a problem that should only be possible during the bootstrap process, Coghlan believes that this is the best approach.
This proposal also envisions adding a function to the in-development "secrets" module — secrets.wait_for_system_rng() — that would simply block until the system's entropy pool is fully initialized and ready. The small (possibly nonexistent) body of code that breaks with unhandled BlockingIOError exceptions could call this function to ensure the availability of strong random data from os.urandom().
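To make the proposed behavior concrete, here is a speculative sketch of how a caller might handle it. Note that today's os.urandom() does not raise BlockingIOError, that secrets.wait_for_system_rng() had only been proposed at the time of writing, and that the secrets module itself requires Python 3.6, so the code probes for the helper before using it:

import os
import secrets

def strong_random_bytes(n=32):
    # Under Coghlan's proposal, os.urandom() would raise BlockingIOError
    # instead of returning weak data when the entropy pool is uninitialized.
    try:
        return os.urandom(n)
    except BlockingIOError:
        # Proposed (not yet existing) helper that blocks until the system
        # RNG is ready; only call it if this Python actually provides it.
        if hasattr(secrets, "wait_for_system_rng"):
            secrets.wait_for_system_rng()
            return os.urandom(n)
        raise

print(len(strong_random_bytes()))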
It is not clear when a decision between these two proposals will be made. It is worth noting, though, that Coghlan has indicated that he is happy enough with Stinner's proposal that he can support it should that be the one that is accepted in the end. So the discussion may have been long and painful, but the end result should be strong random data in Python in a way that the community as a whole is able to agree upon. Hopefully that means everybody can rest and prepare for the inevitable debate over whether this change should be backported to Python 2.
Brief items
Security quotes of the week
This attack was already understood as a theoretical problem for the Tor project, which had recently undertaken a rearchitecting of the hidden service system that would prevent it from taking place.
No one knows who is running the spying nodes: they could be run by criminals, governments, private suppliers of "infowar" weapons to governments, independent researchers, or other scholars (though scholarly research would not normally include attempts to hack the servers once they were discovered).
Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.
New vulnerabilities
community-mysql: unspecified
Package(s): community-mysql
Created: July 11, 2016
Updated: July 13, 2016
Description: Latest upstream release, 5.7.12, fixes unspecified vulnerabilities.
davfs2: unspecified
Package(s): davfs2
Created: July 11, 2016
Updated: July 13, 2016
Description: Update to the latest upstream release to fix unspecified vulnerabilities. See the Red Hat bugzilla for more information.
gnutls: certificate verification vulnerability
Package(s): gnutls
Created: July 12, 2016
Updated: July 25, 2016
Description: From the Red Hat bugzilla: A vulnerability was discovered in gnutls that affects certificate verification when GnuTLS is used in combination with the p11-kit trust module. This issue affects gnutls 3.3.23, 3.4.12 and later versions.
gsi-openssh: support GSI authentication
Package(s): gsi-openssh
Created: July 12, 2016
Updated: July 13, 2016
Description: From the Fedora advisory: This version of OpenSSH has been modified to support GSI authentication. This package includes the core files necessary for both the gsissh client and server. To make this package useful, you should also install gsi-openssh-clients, gsi-openssh-server, or both.
httpd: authentication bypass
Package(s): httpd
CVE #(s): CVE-2016-4979
Created: July 12, 2016
Updated: July 18, 2016
Description: From the CVE entry: The Apache HTTP Server 2.4.18 through 2.4.20, when mod_http2 and mod_ssl are enabled, does not properly recognize the "SSLVerifyClient require" directive for HTTP/2 request authorization, which allows remote attackers to bypass intended access restrictions by leveraging the ability to send multiple requests over a single connection and aborting a renegotiation.
libgd2: denial of service
Package(s): libgd2
CVE #(s): CVE-2016-6161
Created: July 12, 2016
Updated: July 27, 2016
Description: From the Ubuntu advisory: It was discovered that the GD library incorrectly handled memory when encoding a GIF image. A remote attacker could possibly use this issue to cause a denial of service.
nodejs-ws: denial of service
Package(s): nodejs-ws
Created: July 11, 2016
Updated: July 13, 2016
Description: From the Red Hat bugzilla: ws is a "simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455". By sending an overly long websocket payload to a ws server, it is possible to crash the node process.
php5: cross-site scripting
Package(s): php5
CVE #(s): CVE-2015-8935
Created: July 8, 2016
Updated: July 13, 2016
Description: From the openSUSE advisory: CVE-2015-8935: XSS in header() with Internet Explorer (bsc#986004)
samba: crypto downgrade
Package(s): samba
CVE #(s): CVE-2016-2119
Created: July 8, 2016
Updated: December 19, 2016
Description: From the Slackware advisory: This release fixes a security issue: Client side SMB2/3 required signing can be downgraded. It's possible for an attacker to downgrade the required signing for an SMB2/3 client connection, by injecting the SMB2_SESSION_FLAG_IS_GUEST or SMB2_SESSION_FLAG_IS_NULL flags. This means that the attacker can impersonate a server being connected to by Samba, and return malicious results.
tcpreplay: denial of service
Package(s): tcpreplay
CVE #(s): CVE-2016-6160
Created: July 8, 2016
Updated: December 6, 2016
Description: From the Debian-LTS advisory: The tcprewrite program, part of the tcpreplay suite, does not check the size of the frames it processes. Huge frames may trigger a segmentation fault, and such frames occur when capturing packets on interfaces with an MTU of or close to 65536. For example, the loopback interface lo of the Linux kernel has such a value.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.7-rc7, which was released by Linus Torvalds on July 10. "Anyway, there's a couple of regressions still being looked at, but unless anything odd happens, this is going to be the last rc. However, due to my travel schedule, I won't be doing the final 4.7 next weekend, and people will have two weeks to report (and fix) any remaining bugs.

Yeah, that's the ticket. My travel schedule isn't screwing anything up, instead think of it as you guys getting a BONUS WEEK! Yay!"
Another edition of the 4.7 regression list was released by Thorsten Leemhuis on July 10. It has ten current regressions, two of which are new.
Stable updates: The 4.6.4 and 4.4.15 stable kernels were released on July 11.
Kernel development news
Kernel documentation with Sphinx, part 2: how it works
The kernel's documentation tree is going through a fundamental transition toward the use of Sphinx and reStructuredText for the production of formatted documents. The first article in this series discussed the path the development community took as it made the decision to go with Sphinx. This article, which concludes the series, covers the mechanics of the new documentation system and how to add to it.

From the casual developer's perspective, building the documentation hasn't changed much. In the 4.8 kernel and beyond, the usual "make htmldocs" and "make pdfdocs" commands will invoke both Sphinx to build the documentation written in reStructuredText and the old toolchain to build documentation still in DocBook format. One will need to have Sphinx installed, obviously. For prettier HTML, the Read the Docs Sphinx theme (sphinx_rtd_theme) will be used if available. For PDF output, the rst2pdf package is also needed. All of them are readily available in stable distributions.
The documentation build for Sphinx uses a dedicated Documentation/Makefile.sphinx, with Documentation/conf.py for configuration. The generated files are placed under Documentation/output in format-specific subdirectories. Currently, there is not much documentation that is actually built from reStructuredText, but the graphics documentation as well as documentation about the Sphinx-based system itself will be ready in time for v4.8. Over time, the plan is that all DocBook documents will be converted to reStructuredText, and we can finally say goodbye to DocBook.
From the perspective of the build system, Sphinx is pleasantly simple compared to the DocBook toolchain. It handles dependencies within documents by itself, storing intermediate data in the output directory. This allows the build system to work without knowledge of how the input and output files map to each other.
Writing documentation
Adding new documentation to the Sphinx build can be as simple as following these steps:
- Add a new reStructuredText file somewhere under Documentation with a .rst extension.
- Refer to it from the main index file Documentation/index.rst.
For now, converting existing plain-text and DocBook files to reStructuredText is more likely to happen than adding new files altogether. Because the current plain-text files don't follow any markup, they need to be manually converted; happily, by design, plain text is not too far from lightweight markup. We expect that some of the thousands of plain-text files will be converted to reStructuredText over time, but there is no real pressure to do so, and not everything needs to be part of the documentation build.
The DocBook conversion is more interesting. There's a "cheesy conversion script" from Jonathan Corbet in Documentation/sphinx/tmplcvt that uses pandoc with some sed pre- and post-processing. Markus Heiser has been working on some more advanced conversion scripts. The DocBook templates should be converted primarily by their authors or maintainers to ensure they remain sensible and no errors creep in while converting. The conversion is a one-time effort anyway, so after a point, polishing the scripts is wasted effort. (Here's a sample of the results of some of the DocBook files converted using the cheesy script, with no manual editing on top.)
Once converted, the DocBook templates are to be placed alongside other documentation under Documentation instead of in a silo under Documentation/DocBook. That directory, along with the entire DocBook toolchain, is slated to be removed once all the documents therein have been converted. Even developers who couldn't care less about producing pretty documents can benefit from converting the DocBook templates to reStructuredText because grepping and reading reStructuredText is much easier than the angle-bracketed mess that is DocBook.
Eventually we'll need to have more structure than just shoving everything directly in the main index. In particular, the PDF output needs to be split into several documents. This can be done using a configuration option in Documentation/conf.py as more documents are added. For starters, however, keeping things simple seems like the way to go.
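For the curious, splitting Sphinx output into multiple documents is normally done with a per-document list in conf.py. The sketch below uses Sphinx's standard latex_documents option (rst2pdf reads a similar pdf_documents list); the entries shown are purely illustrative, not the kernel's actual configuration:

# Documentation/conf.py (hypothetical excerpt): each tuple maps a top-level
# .rst file to a separate output document.
# Fields: (start document, output file, title, author, document class)
latex_documents = [
    ('index', 'kernel-documentation.tex', 'The kernel documentation',
     'The kernel development community', 'manual'),
    ('gpu/index', 'gpu.tex', 'Linux GPU Driver Developer Guide',
     'The kernel development community', 'manual'),
]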
Formatted kernel-doc comments
When building documentation using Sphinx, the kernel-doc comments are now treated as reStructuredText. Some hiccups will inevitably follow, as the comments were not written with reStructuredText in mind, but mostly it just works.
The kernel-doc script parses the formatted comments at the high level (function and structure names, parameter and member descriptions, and so on), generates appropriate Sphinx C Domain anchors for them, filters the comments for highlights and cross-references, and otherwise passes the rest through as-is. The filters convert function_name() and references to structure types (using the &struct struct_name convention) to proper C Domain cross-references, and there are other highlights as well.
A dedicated Sphinx directive extension incorporates kernel-doc comments from source files into the document. Internally, the extension invokes kernel-doc to do the job and informs Sphinx about the document dependencies on source files. The extension makes it possible to include kernel-doc comments with any reStructuredText file under Documentation with no special handling or dependency tracking in the makefiles.
For example, to include the documentation for all the functions exported using EXPORT_SYMBOL() from bitmap.c, you'd write the following:
.. kernel-doc:: lib/bitmap.c
   :export:
To include an overview documentation section from intel_audio.c:
.. kernel-doc:: drivers/gpu/drm/i915/intel_audio.c
   :doc: High Definition Audio over HDMI and Display Port
The DOC: title given in the source code acts as an identifier for the section. There are also ways to include documentation for specific functions or types.
Daniel Vetter's contributions enable the kernel-doc extension to feed the source code file and line number of each documentation comment to Sphinx to enhance diagnostic messages on reStructuredText errors. This will come in handy when fixing the hiccups mentioned earlier.
Future work
There has been some talk (and even code from Markus) to convert the kernel-doc script from Perl to Python and perhaps to run it directly in the Sphinx extension. It is not clear, however, whether it's worth converting a homebrew C parser with two decades of field testing from one language to another just for the sake of it. Perhaps a compiler plugin would be a better idea.
As noted earlier, the media documentation in particular needs better syntax for tables. To this end, Markus has written a Sphinx extension to support row and column spans, among other things, in tables. This work, too, looks set to go into 4.8; it is a dependency for converting the media documents.
But, on a positive note, most of the work discussed in this article has been merged. We'll be seeing more documentation patches that convert files to reStructuredText, as well as fixing and improving kernel-doc comments in source. Hopefully the changes will improve the state of the kernel documentation as a whole, and will move us one step closer to the documentation maintainer's vision as expressed during a linux.conf.au talk, "If we do this, we end up with, some years from now, this beautiful, integrated documentation tree, that covers things in a comprehensive way, where you can find what you want, looks pretty when you look at it. It's a nice vision, I hear angels singing when I think about it and so on, it's where I want to go."
[Jani Nikula is employed by Intel to work on Linux graphics, and is also the author of most of the Sphinx work, with contributions from Daniel Vetter and Jonathan Corbet.]
Tracking resources and capabilities used
There are various types of limits and privileges that administrators can apply to processes or control groups (cgroups) in Linux, but it is sometimes difficult to determine what those values should be—except by trial and error. A patch set from Topi Miettinen targets making that easier by tracking resource and capability usage by processes in order to give users and administrators a starting point to use when setting those values. The idea is that the processes can be run under a normal load and the high-water values (as well as the capabilities used) will be recorded to provide a guide for future, more-restrictive deployments.
The 18-patch series is broken up into three groups: capabilities used (one patch), cgroup limits (three patches), and resource limits (14 patches). Capabilities used are reported in /proc/PID/status, while cgroup maximums are presented in files in the cgroup filesystem. Resource limits (i.e. rlimits), on the other hand, are reported in the /proc/PID/limits file. Those choices may change, since there are programs that parse the files in /proc and adding more information could potentially alter the kernel's user-space interface.
As Miettinen says in the cover letter for the patches, much of the information can already be gleaned from various /proc files and using tools like ps, but those methods only give a value at one point in time. In order to be sure that transient spikes are also recorded, so they can be taken into account, the kernel needs to be involved; thus these patches.
But Konstantin Khlebnikov objected to the overall goal.
He also suggested that tracepoints could be used (perhaps in conjunction with SystemTap or other kernel tracing infrastructure), rather than adding high-water recording to the kernel.
But both Miettinen and Austin S. Hemmelgarn disagreed with that analysis. Miettinen noted that there are always risks when setting limits, but that the patches are just meant to help provide some guidance. Hemmelgarn essentially agreed.
Rlimits could be handled similarly, he said. Beyond that, though, there are different types of failure modes for processes that cannot get the resources they need (e.g. can't start a thread or process), which may not manifest as application errors or crashes. In addition, getting the information about the maximum usage from user space will be difficult or impossible, he said. In a follow-up post, he also noted that tracing can't supply any better answers for the upper bound of these values than internal kernel tracking can: "You can't get a perfectly reliable upper bound for any type of resource usage with just black box observations, period."
There were also comments on many of the individual patches. The capabilities-tracking patch simply adds a cap_used bit array to struct task_struct and sets the bit corresponding to a capability whenever that capability is checked (and passes the check). But as Andy Lutomirski pointed out, simply tracking the capabilities used by a process won't work well in the presence of ambient capabilities. If a process runs a program with ambient capabilities, which uses some capabilities beyond what the main process uses, those will be missed in the set of capabilities collected. He suggested tracking capabilities used for an entire process tree or cgroup.
The cgroup patches track values for three specific controllers: the maximum PIDs used in a PID cgroup, maximum memory used in a memory cgroup, and the devices accessed in a device cgroup. The PID cgroup patch uses an atomic variable to track the highest number of PIDs that have been active in the cgroup at any point. It makes that number available in the pids.current_max file. Cgroup maintainer Tejun Heo didn't like the name (he suggested a high_watermark field in the pids.stats file) and was concerned that some of the atomic-variable handling could lead to races.
The patch for the memory cgroup simply presents the existing watermark value in the memory.current_max file. But, as Johannes Weiner noted, that generally won't provide much useful information. The page cache is counted in that watermark and is not reduced in size unless there is memory pressure, "so in all but very few cases the high watermark you are introducing will be pegged to the configured limit".
The last of the cgroup patches keeps a list of devices that are accessed in a device cgroup. That list, which contains the device type (character or block), major and minor numbers, and access type (read, write, or mknod), can be read from the devices.accessed file.
The rlimit patches drew fewer comments in general (or, perhaps, the comments were outweighed by the sheer number of patches). There was some general confusion because Miettinen did not send a copy of the cover letter (or the first rlimit patch that added some infrastructure used by the rest) to everyone who got copies of the individual patches. In addition, the function name used to update the current maximum value, bump_rlimit(), was confusing to some, since it seems to imply that the actual rlimit is being increased (bumped).
There are individual patches to record (and sometimes report) the maximum use of different resources that are tied to rlimits. That includes the number of open files (RLIMIT_NOFILE), CPU usage (RLIMIT_CPU), file sizes created (RLIMIT_FSIZE), number of processes (RLIMIT_NPROC), and so on. There were some complaints about race conditions and using read-copy-update (RCU) incorrectly, along with some suggestions for better comments to make the intent of the code clearer. Aside from the final patch in the series, which Kees Cook pointed out was unneeded, the series as a whole got a fairly warm response.
There is clearly some work to be done, but maximum resource usage tracking seems like a feature that might make its way into the kernel in, say, 4.9 or 4.10 unless some major opposition appears. It will provide users with a way to gauge what their processes are doing so that limits and privileges can be tightened down appropriately. It certainly won't provide all the answers, but may give the starting point that Miettinen is seeking.
USB charging, part 2: implementation
In the first part of this series we explored the complexities of charging a battery in a portable Linux-driven device from a USB connection, and in particular looked at how the maximum allowed current can be determined. This resulted in five tasks that Linux would need to complete in order to charge batteries in a compliant manner. It is now time to look inside Linux to see how well it achieves these tasks and, as we will find, the answer is "not very well", or at least "not very uniformly". There is some reason for hope on the horizon, however, as a patch set described as providing a "usb charger framework" is under development and should close at least some of the gaps.
The five tasks we identified, and that we will address in order, are:
- find out from the USB PHY what type of cable is attached and report this to the battery charger
- advertise USB gadget configurations with appropriate power demands
- determine which gadget configuration was chosen and report the available power to the battery charger
- adjust current within the given range to maintain suitable voltage
- detect when the power supply is questionable during boot, and limit activation of components until that is resolved
The EXTernal CONnector in your USB PHYsical interface
When a cable is plugged into the B-series USB receptacle on your device, it is the task for the PHY, and the Linux driver for the PHY, to measure voltage levels and resistances to determine what sort of cable has been plugged in. The PHY driver must then tell the USB core code if it should start negotiations as a USB host or a USB gadget; it must also report the cable type to whatever driver is responsible for charging the battery. How these reports are sent could best be described as ad hoc, though a less kind commentator might say it is a total mess. There are two approaches that are fairly generic: one legacy and one newer. And then there are non-generic approaches like musb_mailbox().
The legacy approach requires that the charger call usb_register_notifier(), as eight charger drivers do. The notifier mechanism allows a pointer to an arbitrary data structure to be passed along with the notification. Some PHY drivers pass a pointer to an integer giving the available current in mA, some pass a pointer to the usb_gadget structure, which doesn't contain any information about available current, and some just pass NULL. Even without any data passed, the notification can be useful since the charger driver may be able to query the PHY directly, and can almost certainly turn the charging circuit on or off depending on whether there is any voltage. So, while this is not a coherent interface, it does provide some value.
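To make the legacy interface more concrete, here is a minimal sketch, not taken from any in-tree driver, of how a charger driver might hook into it. The my_charger structure is invented for the example, and the assumption that the notifier payload points to a current value in mA holds only for some PHY drivers, as explained above.

#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/notifier.h>
#include <linux/usb/phy.h>

/* Hypothetical charger-driver state. */
struct my_charger {
    struct usb_phy *phy;
    struct notifier_block nb;
};

/*
 * Called by the PHY driver on cable/VBUS events.  The "data" argument is
 * not standardized: it may point to the available current in mA, to a
 * struct usb_gadget, or be NULL, depending on the PHY driver.
 */
static int my_charger_usb_notify(struct notifier_block *nb,
                                 unsigned long event, void *data)
{
    struct my_charger *chg = container_of(nb, struct my_charger, nb);
    unsigned int *mA = data;    /* valid for only some PHY drivers */

    if (mA)
        pr_info("my_charger: PHY reports %u mA available\n", *mA);
    /* A real driver would query the PHY and enable or disable charging. */
    return NOTIFY_OK;
}

static int my_charger_hook_phy(struct my_charger *chg)
{
    chg->phy = usb_get_phy(USB_PHY_TYPE_USB2);
    if (IS_ERR(chg->phy))
        return PTR_ERR(chg->phy);

    chg->nb.notifier_call = my_charger_usb_notify;
    return usb_register_notifier(chg->phy, &chg->nb);
}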
The newer approach is to use "extcon", which is a driver class for monitoring external connectors, whether for audio jacks, video ports, USB receptacles, or anything else. An extcon device maintains a record of what type of cable (or what collection of cables) is currently plugged in and will generate a notification whenever a cable is plugged or unplugged. Other drivers can register interest in a particular cable type being attached to a particular connector or in a particular cable type being attached to any connector. Strangely, there is no way to register interest in a particular connector regardless of cable type.
Among the cable types known to extcon are:
/* USB external connector */
#define EXTCON_USB              1
#define EXTCON_USB_HOST         2

/* Charging external connector */
#define EXTCON_CHG_USB_SDP      5   /* Standard Downstream Port */
#define EXTCON_CHG_USB_DCP      6   /* Dedicated Charging Port */
#define EXTCON_CHG_USB_CDP      7   /* Charging Downstream Port */
#define EXTCON_CHG_USB_ACA      8   /* Accessory Charger Adapter */
#define EXTCON_CHG_USB_FAST     9
#define EXTCON_CHG_USB_SLOW     10
Unfortunately, there is no documentation beyond what is given above and the implicit documentation of how various drivers use the cable types. EXTCON_CHG_USB_SLOW seems to suggest a cable that can provide 500mA. EXTCON_CHG_USB_FAST is used by axp288_charger.c to indicate a charger capable of 2000mA. The relationship between the EXTCON_USB* and EXTCON_CHG_USB_* cable types seems confused.
A possible interpretation is that the EXTCON_USB* cable types indicate whether a cable can carry data, either in gadget or host mode, independent of any charging capabilities. The EXTCON_CHG_USB_* types would then indicate the power that can be expected of the cable, independent of any data. Thus a single USB cable might be reported as both a data cable and a power cable, which certainly makes it easier for any client that is only interested in one or the other.
A few drivers, such as extcon-max14577.c, report a standard downstream port as both EXTCON_USB and EXTCON_CHG_USB_SDP, which supports this hypothesis, but, since they don't report EXTCON_USB together with EXTCON_CHG_USB_CDP or EXTCON_USB_HOST together with EXTCON_CHG_USB_ACA, this is not an interpretation that can safely be relied upon.
Even though these cable definitions do not seem to be implemented consistently, there is infrastructure available that carries all the information we need. Updating some drivers to use existing infrastructure properly is a trivial task compared to trying to work out what infrastructure is needed to allow the drivers to communicate at all.
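To illustrate what using that infrastructure looks like from the charger side, here is a rough sketch, again not taken from any existing driver, of registering for notifications when a standard downstream port is attached; how the extcon device itself is obtained depends on the platform and is left out.

#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/extcon.h>

static struct notifier_block sdp_cable_nb;

/* "event" is non-zero when the cable is attached and zero when detached. */
static int sdp_cable_event(struct notifier_block *nb,
                           unsigned long event, void *ptr)
{
    if (event)
        pr_info("charger: standard downstream port attached\n");
    else
        pr_info("charger: standard downstream port removed\n");
    return NOTIFY_OK;
}

static int charger_watch_cable(struct extcon_dev *edev)
{
    sdp_cable_nb.notifier_call = sdp_cable_event;

    /* Be told whenever an EXTCON_CHG_USB_SDP cable changes state on
     * this particular connector. */
    return extcon_register_notifier(edev, EXTCON_CHG_USB_SDP,
                                    &sdp_cable_nb);
}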
And, indeed, drivers would need to be updated. There are precisely two charger drivers that listen for extcon notifications. Quite a few USB drivers listen for EXTCON_USB or EXTCON_USB_HOST so they can configure as a gadget or a host, but the only chargers that do are axp288_charger.c and charger_manager.c.
It is from axp288_charger.c that we can discover the one interpretation of EXTCON_CHG_USB_FAST and EXTCON_CHG_USB_SLOW that was mentioned above, but otherwise it isn't particularly helpful as the code doesn't appear to work. The API for extcon was updated last year and when axp288_charger.c was adjusted to match, the only improvement provided was the removal of compiler errors.
charger_manager.c is a software battery-charge monitor that checks the temperature and voltage on a battery and decides when to try to charge it. It can be configured to expect a list of different cable types along with the current to try to use from each cable. This seems to be the closest thing to a working charger manager that uses an extcon device to be notified of cables.
This poor state of the code doesn't necessarily mean that no Linux device charges properly over USB. The USB PHY and the charging controller in a particular device are often from the same manufacturer and even in the same integrated circuit. In these cases, a driver for one half can have intimate knowledge of the other half and thus achieve reasonable results. An example of such a driver is isp1704_charger.c. This driver is ostensibly a driver for battery charging, but it reaches over into the territory of the PHY driver to directly access "ULPI" registers (the UTMI+ low pin interface). It uses usb_register_notifier() to find out when something changes, then pokes around on its own to see the specifics of the change.
Where I have mentioned "charger drivers" above I have been a little loose with terminology. Linux doesn't have a "battery charger" class for drivers; it only has a "power_supply" class. The unifying feature of this class is that it allows drivers to report various details of a power source, such as voltage (both present and maximum), current, capacity (for batteries), technology used, and so on. Since the most important aspect of charging a battery is managing the source of power, and possibly turning it off when temperature or voltage monitors indicate a problem, it is quite reasonable for battery charging to be managed by power-supply devices, and this is how it happens in Linux.
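For readers unfamiliar with the class, a bare-bones registration might look something like the following sketch; the driver name, property list, and hard-coded values are invented for illustration and are not from any mainline driver.

#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/power_supply.h>

static enum power_supply_property my_chg_props[] = {
    POWER_SUPPLY_PROP_ONLINE,
    POWER_SUPPLY_PROP_CURRENT_MAX,
    POWER_SUPPLY_PROP_VOLTAGE_NOW,
};

static int my_chg_get_property(struct power_supply *psy,
                               enum power_supply_property psp,
                               union power_supply_propval *val)
{
    switch (psp) {
    case POWER_SUPPLY_PROP_ONLINE:
        val->intval = 1;        /* VBUS is present */
        return 0;
    case POWER_SUPPLY_PROP_CURRENT_MAX:
        val->intval = 500000;   /* 500mA, reported in microamps */
        return 0;
    case POWER_SUPPLY_PROP_VOLTAGE_NOW:
        val->intval = 5000000;  /* 5V, reported in microvolts */
        return 0;
    default:
        return -EINVAL;
    }
}

static const struct power_supply_desc my_chg_desc = {
    .name           = "my-usb-charger",
    .type           = POWER_SUPPLY_TYPE_USB,  /* a charger, not a battery */
    .properties     = my_chg_props,
    .num_properties = ARRAY_SIZE(my_chg_props),
    .get_property   = my_chg_get_property,
};

static int my_chg_probe(struct platform_device *pdev)
{
    struct power_supply *psy;

    psy = devm_power_supply_register(&pdev->dev, &my_chg_desc, NULL);
    return PTR_ERR_OR_ZERO(psy);
}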
One of the properties a power supply can present is the supply type, and until 2010 it was one of battery, UPS, mains, or USB. At that time USB DCP, USB CDP and USB ACA were added. More recently, some more types specific to USB 3.0 have been added. This means we have two subsystems vying for ownership of the USB-charger-type concept. Is the type of charger plugged into a USB receptacle a property of the power supply, or a property of the cable (or external connection)? Or both? The "technology" property mentioned previously is currently used only for batteries, allowing NiMH, LION, NiCd, etc. If the power supply needs to know about the attached charger, rather than just being told the available current, should the various types be treated in the same way as battery technology? While it is doubtless possible to argue for various different options, it is hard to argue against having unified coherent usage and that is certainly missing.
The various USB power-supply subtypes are currently used in three different drivers. The axp288_charger.c that we have already met uses some of the values, but just uses them internally. It doesn't use them to report the type of the power supply (that is always POWER_SUPPLY_TYPE_USB) but stores them in a data structure called cable. It finds out the type of charger by registering with an extcon device but, as already noted, that doesn't work correctly. So that driver isn't a good example to learn from.
Then there is a gpio-charger.c, which is designed to work with power-supply hardware with limited monitoring options: a GPIO input can detect if the charger is active, but that is all. In order to provide the other properties that a power supply should have, gpio-charger.c reads some configuration information from a device-tree description of the hardware. It allows that description to declare that the power supply is some particular subtype of USB. But this type is not changed dynamically, so it could only be meaningful for a USB charger that was hardwired to the device, which seems a little pointless.
Finally there is the isp1704_charger.c. As mentioned, it is a power-supply driver that pokes in the USB registers to determine the power-supply type, which is a bit of a layering violation. So it seems that no power-supply driver in mainline actually uses the USB power-supply subtypes in a particularly useful way.
So let's move on to determining current usage during bus enumeration.
Tracking gadget configuration
When a Standard Downstream Port (SDP) connection is detected, the PHY driver notifies the USB gadget controller, which then proceeds with the enumeration process. The parts of this that interest us are how MaxPower values are chosen and how the MaxPower from the chosen configuration is communicated. MaxPower is the field in a USB configuration table that lists the current requirement, which can be seen using:
lsusb -v | grep MaxPower
Linux provides a "composite" gadget design where a number of different drivers can each register their own configuration and the composite driver will provide a list of all of those configurations to the host for it to choose from. There is a serial driver, an ether driver for networking, a mass_storage driver, and several others. Each of these just provides a single configuration and, while a few do set the MaxPower field in that configuration, most just leave it as the default. This default can be set using the compile-time configuration option CONFIG_USB_GADGET_VBUS_DRAW. This option defaults to 2mA, which is the smallest non-zero number that can be represented; zero is technically legal but apparently confuses some hosts. CONFIG_USB_GADGET_VBUS_DRAW is the sort of number that doesn't really make sense as a configuration option, but was probably implemented that way because it was easier than finding a real solution. No attempt is made to offer multiple versions of each configuration with different power requirements as was suggested in the previous article.
It may be possible to offer multiple configurations by a different route. The composite USB gadget can be configured at runtime using configfs. As these slides [PDF] describe, it is possible to create multiple configurations and set the MaxPower for each. This interface could be used to create multiple configurations for each driver, but that does feel a little roundabout and clumsy.
Whatever configuration is created, once it has been chosen by the host, the core USB gadget driver will report this information to the hardware-specific gadget driver by calling the vbus_draw() method on that driver. Of those gadget drivers that actually provide a vbus_draw() method (some don't) and don't simply ignore the value (several do), most just call usb_phy_set_power() to tell the PHY driver what power is available. If that sounds like passing the buck to you, I would agree. Most PHY drivers just ignore the number too.
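The hand-off itself is typically only a few lines in the UDC driver; the following generic sketch, not taken from any particular controller driver, shows the shape of a vbus_draw() implementation that forwards the value to the PHY.

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/usb/gadget.h>
#include <linux/usb/phy.h>

/* Hypothetical controller state embedding the gadget it exposes. */
struct my_udc {
    struct usb_gadget gadget;
    struct usb_phy *phy;
};

/*
 * Called by the composite core once the host has chosen a configuration;
 * "mA" is the MaxPower value from that configuration.
 */
static int my_udc_vbus_draw(struct usb_gadget *gadget, unsigned mA)
{
    struct my_udc *udc = container_of(gadget, struct my_udc, gadget);

    if (!udc->phy)
        return -EOPNOTSUPP;

    /* Pass the buck: tell the PHY driver how much current is on offer. */
    return usb_phy_set_power(udc->phy, mA);
}

static const struct usb_gadget_ops my_udc_ops = {
    .vbus_draw  = my_udc_vbus_draw,
    /* ... the other gadget operations are omitted here ... */
};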
One exception is the s3c2410_udc.c USB gadget driver used in the GTA02, which is the original OpenMoko phone. It calls a function provided by the "board" file that contains specifics of the particular platform. The GTA02 board file uses a private mechanism to pass the number to the power manager. It is probable that out-of-tree drivers in vendor kernels use a similar approach.
Setting the right current
Once the current that might be available has been determined and communicated to the charging manager, it is necessary to configure the charging power supply with an appropriate current, preferably the highest permitted current that doesn't cause the voltage to drop too low.
As far as I can tell from exploring the code, there is only one driver that tries anything more sophisticated than setting a fixed current level, possibly dependent on the type of cable or vbus_draw() setting. That driver is the twl4030_charger.c that drives the battery charger in the OpenPhoenux GTA04; I know about that driver, and its imperfections, because I wrote the code to control the current.
The code in this driver increases the current requested in steps of 20mA until the voltage drops to 4.75V or until the maximum permitted is reached. This process mostly works, but subsequent reflections revealed a problem. If the battery is fully charged, then the phone as a whole cannot make use of more than a few hundred mA, so increasing the current setting won't actually put more load on the power supply, and thus won't cause the voltage to drop. This could lead to the current request being set to the maximum permitted even if it exceeds the maximum available. The charging hardware stops feeding current to the battery when the battery voltage reaches a certain level and the battery will be allowed to power at least some of the hardware. After the voltage drops a little, the charging turns back on, and at this point the battery may be able to accept more current than it could when the available current was being measured. This current might overload the charger.
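In outline, the ramping logic looks something like the following userspace-style sketch; the helper functions are hypothetical stand-ins for the register accesses a real charger driver would provide, and the actual twl4030_charger.c code is rather more involved.

#include <stdio.h>

/* Hypothetical stand-ins for hardware-specific accessors. */
static unsigned int requested_mA;

static void set_input_current_limit_mA(unsigned int mA)
{
    requested_mA = mA;  /* a real driver would program a charger register */
}

static unsigned int read_vbus_voltage_mV(void)
{
    /* Crude model of a supply good for roughly 900mA: the voltage sags
     * once the requested current exceeds that. */
    return requested_mA < 900 ? 5000 : 4600;
}

#define VBUS_MIN_mV  4750   /* don't let VBUS sag below this */
#define RAMP_STEP_mA 20     /* increase the limit in small steps */

/* Ramp the requested input current up towards max_mA, backing off as
 * soon as the supply voltage starts to sag. */
static unsigned int ramp_input_current(unsigned int max_mA)
{
    unsigned int mA = 0;

    while (mA + RAMP_STEP_mA <= max_mA) {
        set_input_current_limit_mA(mA + RAMP_STEP_mA);
        if (read_vbus_voltage_mV() < VBUS_MIN_mV) {
            set_input_current_limit_mA(mA); /* step back to last good value */
            break;
        }
        mA += RAMP_STEP_mA;
    }
    return mA;
}

int main(void)
{
    printf("settled at %u mA\n", ramp_input_current(1500));
    return 0;
}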
The main point about this code is that it is easy to get wrong, but in principle should be common to all chargers that can limit current and measure voltage. So it really belongs in a common location — but where? There do seem to be a number of different elements of functionality needed for USB charging and they are currently implemented in an ad hoc manner. Bringing it all under a common umbrella appears to be the goal of the USB charger framework that is currently being developed by Baolin Wang; it was recently posted in its 15th revision.
The USB charger framework
The framework attaches a "usb charger" object to every USB gadget device that is created and intercepts the vbus_draw() calls so that it knows when an SDP has been configured. If the USB gadget device is described in the device-tree as having an "extcon" connector attached, it will register for notifications of cable-change events.
Other drivers, such as a charger driver, can register to receive notifications from a USB charger if they know the name of the charger. The name will always be usb-charger.0 unless there are multiple chargers. When any change happens to the charger, it will notify all registered listeners to tell them the new current limit. This limit is a single number, not a range, so it needs to be handled carefully.
If charger managers were required to increase the current gradually up to this level, then sending the maximum would be appropriate. If they were expected to always enable exactly this number, then sending the minimum is the only safe approach. In the default configuration, the framework advises a current limit of 1500mA for the various types of chargers. This is the maximum for some, but not all, cable types. The only example of a charger driver that has been modified to use this information simply sets the limit rather than carefully ramping up to the limit. This may be safe, but only if that hardware has its own built-in current ramping.
When the framework registers interest in an extcon, it only requests notification of EXTCON_USB cables, not the various charger cables. When that notification arrives, it checks with the configured power supply to see what USB subtype it is and reports available current based on that. So this framework seems to have sided with USB cable types being the property of the power supply rather than the property of the cable.
Conclusion
While most of the parts needed for compliant USB charging are present, they are not implemented consistently, and it isn't entirely clear what the right approach should be, even if the USB charging framework does get merged. That wasn't the answer I was hoping for when I started examining this issue, but it does at least clarify the current situation. Knowing where we stand makes moving forward a lot easier.
The one question I haven't yet covered is the need to keep most devices off until a stable source of power is assured. The reason for keeping this separate is that it is unlikely to ever be a part of Linux. There are enough interdependencies between discovery of different devices in Linux that trying to delay some in unusual circumstances is likely to lead to hard-to-diagnose problems.
Since the device is encouraged to avoid any unnecessary tasks until power is stable, it makes sense to not even boot Linux straight away. U-Boot, a common boot loader for mobile devices, is sufficiently powerful to be able to handle all the necessary negotiation to enable the maximum current possible. It should then be able to enter a low-power state until the battery has reached a sufficient charge to carry all the way through the Linux boot process. Linux will likely turn off battery charging during boot and renegotiate from scratch, so the battery needs enough charge to get all the way through the boot on its own.
There is clearly plenty to do to get USB charging into a healthy state. A particularly valuable first step would be to get clarity on how extcon and power_supply devices should work with the different cable types, and then to provide a standard way for charging power supply devices to ramp up the load while maintaining adequate voltage. With these in place, individual drivers could be updated to use these newly clarified interfaces on an as-needed basis. It's just a small matter of programming.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Memory management
Security-related
Virtualization and containers
Page editor: Jake Edge
Distributions
Declassifying debian-private?
The debian-private mailing list is, as its name would imply, for Debian developers to have private discussions that do not belong on the public lists. Back in 2005, the project held a (public) discussion on whether the debian-private messages should eventually be "declassified" and made public; after that, a general resolution (GR) was passed to do so. It was (and is) believed that some of the topics discussed are of historical interest. Since that time, there has been no real progress made to do so, though, which has led to calls to readdress the GR.
Nicolas Dandrimont brought up the issue on the debian-vote mailing list with a proposed GR that would acknowledge that the 2005 GR has never been implemented and that it never will be, so it should be repealed. He noted that nearly five years after the GR, then Debian Project Leader Stefano Zacchiroli had asked for volunteers to work on the declassification process for messages posted after the GR had passed—as the specific wording of the choice that passed required. But nothing came of it, so:
There were many who seconded the GR, which led Debian Secretary Kurt Roeckx to set up a vote page for it. But there were also some who had questions or thoughts about the proposal. As Jakub Wilk pointed out, the title of the GR ("Acknowledge that the debian-private list will remain private") doesn't actually match the text, which simply repeals the earlier GR and encourages minimizing the use of debian-private. "If you want -private remain private forever, say so explicitly."
But Dandrimont disagreed, saying that whether debian-private messages stayed private forever is not really the point. "The acknowledgement is about failing to implement the existing GR, being honest about it, and to let us move on from it." Don Armstrong generally agreed with that sentiment, but did not "see the utility of voting to remove the possibility of ever implementing it".
Armstrong mentioned that he had tried to work on declassification a time or two, "but the process required was far too clunky". He is considering proposing an amendment to Dandrimont's GR that "either acknowledges that we have failed to declassify -private for the time being, or gives listmaster@ the ability to define a published procedure for the automated declassification of private". MJ Ray, though, said that Armstrong was "being far too nice" when he called the process clunky:
I don't think it's implementable in any sensible manner. At the very least, the requirement for the declassification to be automatic needs to be removed because no automatic system is going to adhere to those constraints perfectly.
Russ Allbery would like to see the possibility of declassification removed until someone comes up with a workable scheme to do so and another vote is taken. Right now, the specter of declassification hangs over posts made to debian-private:
The only reference to the mailing list in Debian's governing documents is in the Debian Developer's Reference, which doesn't really address declassification one way or another—though it would seem to strongly imply that messages to the list are ... well ... private. However, Didier 'OdyX' Raboud argued that it is up to the ListMaster team to determine what to do about the archives in the absence of a GR to the contrary. So if the 2005 GR is repealed, that's where the responsibility would lie:
We should now acknowledge that the work to declassify d-private archives would be very sensitive, complex and would need quite a load of good judgment calls. Given the assumption that the most interesting part is the early days (aka pre-2005 GR), we have no process anyhow.
That seems to be in keeping with Dandrimont's thinking as well. In response to a question from Roeckx about the pre-2005 messages, Dandrimont also suggested that the ListMasters are the proper authority:
In other words, if we remove the 2005 GR, debian-private is not a special list anymore, and we trust the listmasters judgement on its archive.
And I'm fine with that.
The hope of finding some kind of automated process to handle declassification seems forlorn. That means humans would need to wade through the messages to try to determine which posts were sensitive and thus shouldn't be made public. That too seems rather daunting. The most likely outcome would seem to be the status quo—whether a GR ever comes to a vote on the question or not.
Brief items
Distribution quote of the week
Distribution News
Debian GNU/Linux
DebConf16 closes in Cape Town and DebConf17 dates announced
The Debian Project wraps up a successful DebConf in South Africa. Next year DebConf will be held in Montreal, Canada.
Debian technical committee appointment
Margarita Manterola has been appointed to a seat on Debian's technical committee.
Fedora
Fedora Elections July 2016 - Campaign period has started
This year's Fedora elections have reached the campaigning period. Justin Flory and Langdon White are running for 1 open seat on the Council. Josh Boyer, Stephen Gallagher, Haikel Guemar, Dennis Gilmore, and Dominik Mierzejewski are running for FESCo, where there are 4 open seats.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 669 (July 11)
- Lunar Linux weekly news (July 8)
- openSUSE Tumbleweed – Review of the Week (July 8)
- Tails report (June)
- Ubuntu Weekly Newsletter, Issue 473 (July 10)
Linux Lite 3: The Ideal Platform for Old Hardware and New Users (Linux.com)
Linux.com has a review of Linux Lite. "Linux Lite falls into that ubiquitous category of Linux distributions perfectly suited for low-end hardware. However, this particular take on the lightweight operating system achieves something few in this category can manage. It delivers all the tools you need to get the job done, all the while making Linux a no-brainer for any level of user."
HandyLinux Is a Great Toolbox for Linux Newbies (LinuxInsider)
LinuxInsider takes a look at HandyLinux. "The developers make it easy to peel off the "Handy" layers to reveal a more standard Linux environment as users learn the system. Those who no longer need the IT tools included with the initial installation can remove them easily using the Handy2Debian application from the main menu. That turns HandyLinux into a relatively standard Debian-based distribution running the lightweight and slightly remixed Xfce desktop environment. The remixed desktop is the distinguishing feature of this distro. It is built around HandyMenu, a custom start menu with applications and Internet bookmarks grouped in tabs."
Page editor: Rebecca Sobol
Development
An initial release of Flatpak portals for GNOME
A framework for sandboxing desktop applications in the GNOME environment has been in development ever since 2013, when Lennart Poettering proposed the idea in a GUADEC session. Subsequently, Alexander Larsson developed the xdg-app system, which relied on lightweight per-application containers linked to more substantial collections of system tools and libraries called "runtimes." The xdg-app container format was recently renamed Flatpak.
To confine an application in such a way that it cannot adversely affect the host, Flatpak needs a sandbox. Currently, it uses a combination of namespaces, control groups, bind mounts, and SELinux. Eventually, secure system services like Wayland are also expected to be part of the sandbox, replacing insecure alternatives like X.
Up until now, however, xdg-app/Flatpak releases have always shipped with the sandboxing feature disabled by default, because there were no mechanisms in place to allow sandboxed applications to access the system resources they would need to function. For instance, if the user clicks on a hyperlink in a sandboxed office application, there would need to be some interprocess communication (IPC) interface in place to open up that link in a web browser. Applications that need to play or record sound need access to the sound card, and so on.
Poettering proposed defining a set of these interfaces in his original plan (modeled after Android's Intents), referring to them as "portals." Specifying and implementing the portals has always been part of the plan, of course, but work on the container format preceded it by necessity.
But, as Matthias Clasen announced on July 8, the first draft of GNOME's portal interfaces has now been unveiled. Along with it, the project has released a set of GNOME components that can be used to test compatible Flatpak application packages.
The portal package itself is named xdg-desktop-portal and implements a D-Bus service running under the name org.freedesktop.portal.Desktop. That service listens for specific requests from applications. In this initial 0.1 release, there are ten portals defined:
- org.freedesktop.portal.Request: a generic interface that is used by the portal service to keep the current portal request alive until it has been completed
- org.freedesktop.portal.FileChooser: an interface applications can use to trigger a file-open dialog
- org.freedesktop.portal.OpenURI: an interface applications can use to initiate opening a URI in a web browser
- org.freedesktop.portal.Print: an interface applications can use to trigger the system print dialog
- org.freedesktop.portal.Screenshot: an interface applications can use to request a screenshot
- org.freedesktop.portal.Notification: an interface applications can use to send a desktop notification to the system or to withdraw a notification
- org.freedesktop.portal.Inhibit: an interface applications can use to inhibit the current desktop session from ending, idling, or suspending (such as might be wanted when the user is running a presentation)
- org.freedesktop.portal.NetworkMonitor: an interface that provides network status information to applications
- org.freedesktop.portal.ProxyResolver: an interface that provides network proxy information to applications
- org.freedesktop.portal.Documents: an interface that provides mediated access to a filesystem to an application
From that list, the FileChooser, OpenURI, Print, Screenshot, and Inhibit portals follow the traditional guidelines used by Android Intents and originally described by Poettering. User interaction is required before the sandboxed application can complete any of the desired actions.
First, the sandboxed application sends a request to the D-Bus portal (such as "let me open a file" to the FileChooser portal) and the portal backend (which we will see in a moment) responds by asking the user to approve the request. If the user agrees, the backend presents the file-open dialog, lets the user select some file, then hands the selected file descriptor back to the sandboxed application. If the user cancels the request at any stage, no file will be opened. By mediating the interaction, the portal mechanism should make it impossible for applications to take actions without the user's knowledge.
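To give a sense of what this looks like from the application side, here is a small sketch of a program asking the portal service to open a link using GDBus; the object path and the OpenURI argument layout (parent window, URI, options dictionary) reflect my reading of the 0.1 interface descriptions and should be checked against the documentation before being relied upon.

/* Build with: gcc open-uri.c $(pkg-config --cflags --libs gio-2.0) */
#include <gio/gio.h>

int main(void)
{
    GError *error = NULL;
    GDBusConnection *bus;
    GVariantBuilder options;
    GVariant *reply;

    bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
    if (bus == NULL) {
        g_printerr("no session bus: %s\n", error->message);
        return 1;
    }

    /* An empty a{sv} dictionary of options. */
    g_variant_builder_init(&options, G_VARIANT_TYPE("a{sv}"));

    /* Ask the portal service to open the URI on the user's behalf. */
    reply = g_dbus_connection_call_sync(bus,
            "org.freedesktop.portal.Desktop",
            "/org/freedesktop/portal/desktop",
            "org.freedesktop.portal.OpenURI",
            "OpenURI",
            g_variant_new("(ssa{sv})", "", "https://lwn.net/", &options),
            NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
    if (reply == NULL) {
        g_printerr("portal call failed: %s\n", error->message);
        return 1;
    }

    g_variant_unref(reply);
    g_object_unref(bus);
    return 0;
}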
The Documents portal is somewhat more complex; its primary purpose appears to be allowing applications to save documents to the filesystem, which in turn can make it necessary to provide some view of the filesystem to the application, as well as features to specify some file attributes. Documents created by the application are stored in a FUSE filesystem mounted inside the sandbox at /run/user/$UID/doc/ so that the application can continue to access them. The FUSE filesystem will also be mounted outside the sandbox, making it accessible with other, non-sandboxed programs.
The NetworkMonitor and ProxyResolver portals are a bit different, in that they do not involve user interaction at all; rather, sandboxed applications are intended to ask for network information and get a reply directly from the backend. Users would, presumably, be informed at install time that the application requires network access. Similarly, the Notification portal does not ask the user's permission for each request; users must place a small amount of trust in the system that desktop notifications are relatively safe. There is more detailed documentation about the D-Bus interfaces available at the Flatpak site.
Accompanying the xdg-desktop-portal release is a corresponding backend implementation for GNOME called xdg-desktop-portal-gtk. It connects to existing GTK+ APIs to respond to the FileChooser, Documents, Print, and OpenURI portals, uses the GNOME Session Manager to respond to the Inhibit portal, and uses GNOME Shell's screenshot and notification facilities to respond to those portals. The NetworkMonitor and ProxyResolver portals are handled by GIO.
In his blog post, Clasen noted that the necessary support for the portals has already been merged and will be released with the 3.22 releases of GTK+ and GNOME. While there are a few existing functions that will stop working for sandboxed applications using the portals (for example, sandboxed applications will not be able to see file-preview images), accessing the core portal functionality should not require any changes on the application's side.
Clasen also pointed out that the initial portal set was chosen because it covers basic, toolkit-level functionality. The next step is defining portals needed by audio/video-capable applications, including access to camera, microphone, and audio-playback hardware. Several other portal proposals are pending as open issues on the repository, such as portals for geolocation information, access to address book contacts, and access to cryptographic keys.
He also said the portals are defined to be desktop-environment neutral, and that work is underway to write backends for KDE/Qt applications as well. Little information is available about that effort so far, but it is early in the process. No doubt the design and details of portals will need to change as developers work with the system in real application projects. Nevertheless, it will be particularly informative to see how the Flatpak project's designs are received by developers working outside of GNOME and GTK+. The Flatpak developers would like the framework to be adopted as widely as possible—a process that will, naturally, require dealing with a fair amount of feedback from projects with their own plans and agendas.
Brief items
Quotes of the week
Pylint 1.6.0 released
Pylint 1.6.0 is now available. Several new checkers are included, including consider-iterating-dictionary, which emits a warning when a dictionary is iterated by using .keys(), and invalid-length-returned, which warns when the __len__ special method returns anything other than a non-negative number. Several other new features are available; check the release notes for details.
Mesa 12.0 is available
Version 12.0 of the Mesa graphics library has been released. Included among the many updates are a Vulkan driver for suitably recent Intel GPUs, DRI3 enablement for VDPAU, OMX and VAAPI, and updated OpenGL and OpenGL ES support for nvc0 and radeonsi.
Rust 1.10 released
Version 1.10 of the Rust programming language has been released. New in this update is a compilation flag that will cause a task to abort rather than unwind when a panic happens. This is an oft-requested feature that many applications are expected to use for the smaller binary sizes and increased speed that are expected to result. Bootstrapping builds of Rust itself has also changed; Rust 1.10 needs to be compiled by Rust 1.9—as opposed to using a potentially unstable nightly binary snapshot, as was used in previous releases.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (July 8)
- What's cooking in git.git (July 11)
- What's cooking in git.git (July 13)
- This Week in GTK+ (July 11)
- OCaml Weekly News (July 12)
- Perl Weekly (July 11)
- This Week in Rust (July 12)
- Wikimedia Tech News (July 11)
Gräßlin: Multi-screen woes in Plasma 5.7
On his blog, Martin Gräßlin describes some of the multi-screen problems that users have been running into on KDE Plasma 5.7, what the causes are, and why multi-screen is a difficult problem to solve. "Many users expect that new windows open on the primary screen. Unfortunately primary screen does not imply that, it’s only a hint for the desktop shell where to put it’s panels, but does not have any meaning for normal windows. Of course windows should be placed on a proper location. If a window opens on a turned off external TV something is broken. And KWin wouldn’t do so. KWin places new windows on the “active screen”. The active screen is the one having the active window or the mouse cursor (depending on configuration setting). Unless, unless the window adds a positioning hint. Unfortunately it looks like windows started to position themselves to incorrect values and I started to think about ignoring these hints in future. If applications are not able to place themselves correctly, we might need to do something about it. Of course KWin allows the user to override it. With windowing specific rules one can ignore the requested geometry."
Portals: Using GTK+ in a Flatpak
On his blog, Matthias Clasen announces the availability of some of the infrastructure for Portals, which are a way for Flatpak applications to reach outside of their sandbox. "Most of these projects involve some notion of sandboxing: isolating the application from the rest of the system. Snappy does this by setting environment variables like XDG_DATA_DIRS, PATH, etc, to tell apps where to find their ‘stuff’ and using app-armor to not let them access things they shouldn’t. Flatpak takes a somewhat different approach: it uses bind mounts and namespaces to construct a separate view of the world for the app in which it can only see what it is supposed to access. Regardless which approach you take to sandboxing, desktop applications are not very useful without access to the rest of the system. So, clearly, we need to poke some holes in the walls of the sandbox, since we want apps to interact with the rest of the system. The important thing to keep in mind is that we always want to give the user control over these interactions and in particular, control over the data that goes in and out of the sandbox."
Herman: Shipping Rust in Firefox
Dave Herman reports that with Firefox 48, Mozilla will ship its first Rust component to all desktop platforms. "One of the first groups at Mozilla to make use of Rust was the Media Playback team. Now, it’s certainly easy to see that media is at the heart of the modern Web experience. What may be less obvious to the non-paranoid is that every time a browser plays a seemingly innocuous video (say, a chameleon popping bubbles), it’s reading data delivered in a complex format and created by someone you don’t know and don’t trust. And as it turns out, media formats are known to have been used to trick decoders into exposing nasty security vulnerabilities that exploit memory management bugs in Web browsers’ implementation code. This makes a memory-safe programming language like Rust a compelling addition to Mozilla’s tool-chest for protecting against potentially malicious media content on the Web."
Page editor: Nathan Willis
Announcements
Brief items
SPI 2015 Annual Report
Software in the Public Interest has announced its 2015 Annual Report (PDF), covering the 2015 calendar year. The annual report covers SPI's finances, elections, board members, committees, associated projects, and other significant changes throughout the year.
Tor Project Elects All-New Board of Directors
The Tor Project has announced a new board of directors. "As Tor's board of directors, we consider it our duty to ensure that the Tor Project has the best possible leadership. The importance of Tor's mission requires it; the public standing of the organization makes it possible; and we are committed to achieve it. We had that duty in mind when we conducted an Executive Director search last year, and appreciate the leadership Shari Steele has brought. To support her, we further believe that it is time that we pass the baton of board oversight as the Tor Project moves into its second decade of operations."
Articles of interest
FSFE Newsletter - July 2016
The Free Software Foundation Europe's newsletter for July covers European Interoperability Framework, FSFE summit, news from the community, and more.
New Books
Wiley’s Latest Raspberry Pi Books
Wiley announced three Raspberry Pi books; "Raspberry Pi User Guide, 4th Edition" by Eben Upton and Gareth Halfacree, "Learning Computer Architecture with Raspberry Pi" by Eben Upton and Jeffrey Duntemann, and "Exploring Raspberry Pi: Interfacing to the Real World with Embedded Linux" by Derek Molloy.
Calls for Presentations
ownCloud Conference 2016 - Call for Papers
The call for papers for the ownCloud Contributors conference closes August 21. The conference will take place September 9-15 in Berlin, Germany. "Presentations can address all different areas of ownCloud, however we encourage, first and foremost, technical topics around code, but we also seek contributions about the social aspects of the project, community affairs or free software in general."
CFP Deadlines: July 14, 2016 to September 12, 2016
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
July 15 | October 12 | Tracing Summit | Berlin, Germany |
July 15 | September 7-September 9 | LibreOffice Conference | Brno, Czech Republic |
July 15 | October 11 | Real-Time Summit 2016 | Berlin, Germany |
July 22 | October 7-October 8 | Ohio LinuxFest 2016 | Columbus, OH, USA |
July 24 | September 20-September 21 | Lustre Administrator and Developer Workshop | Paris, France |
July 30 | August 25-August 28 | Linux Vacation / Eastern Europe 2016 | Grodno, Belarus |
July 31 | September 9-September 11 | GNU Tools Cauldron 2016 | Hebden Bridge, UK |
July 31 | October 29-October 30 | PyCon HK 2016 | Hong Kong, Hong Kong |
August 1 | October 6-October 7 | PyConZA 2016 | Cape Town, South Africa |
August 1 | September 28-October 1 | systemd.conf 2016 | Berlin, Germany |
August 1 | October 8-October 9 | Gentoo Miniconf 2016 | Prague, Czech Republic |
August 1 | November 11-November 12 | Seattle GNU/Linux Conference | Seattle, WA, USA |
August 3 | October 1-October 2 | openSUSE.Asia Summit | Yogyakarta, Indonesia |
August 5 | January 16-January 20 | linux.conf.au 2017 | Hobart, Australia |
August 7 | November 1-November 4 | PostgreSQL Conference Europe 2016 | Tallin, Estonia |
August 7 | October 10-October 11 | GStreamer Conference | Berlin, Germany |
August 8 | September 8 | LLVM Cauldron | Hebden Bridge, UK |
August 15 | October 5-October 7 | Netdev 1.2 | Tokyo, Japan |
August 17 | September 21-September 23 | X Developers Conference | Helsinki, Finland |
August 19 | October 13 | OpenWrt Summit | Berlin, Germany |
August 20 | August 27-September 2 | Bornhack | Aakirkeby, Denmark |
August 20 | August 22-August 24 | 7th African Summit on FOSS | Kampala, Uganda |
August 21 | October 22-October 23 | Datenspuren 2016 | Dresden, Germany |
August 24 | September 9-September 15 | ownCloud Contributors Conference | Berlin, Germany |
August 31 | November 12-November 13 | PyCon Canada 2016 | Toronto, Canada |
August 31 | October 31 | PyCon Finland 2016 | Helsinki, Finland |
September 1 | November 1-November 4 | Linux Plumbers Conference | Santa Fe, NM, USA |
September 1 | November 14 | The Third Workshop on the LLVM Compiler Infrastructure in HPC | Salt Lake City, UT, USA |
September 5 | November 17 | NLUUG (Fall conference) | Bunnik, The Netherlands |
September 9 | November 16-November 18 | ApacheCon Europe | Seville, Spain |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: July 14, 2016 to September 12, 2016
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 11-July 17 | SciPy 2016 | Austin, TX, USA |
July 13-July 14 | Automotive Linux Summit | Tokyo, Japan |
July 13-July 15 | ContainerCon Japan | Tokyo, Japan |
July 13-July 15 | LinuxCon Japan | Tokyo, Japan |
July 14-July 16 | REST Fest UK 2016 | Edinburgh, UK |
July 17-July 24 | EuroPython 2016 | Bilbao, Spain |
July 30-July 31 | PyOhio | Columbus, OH, USA |
August 2-August 5 | Flock to Fedora | Krakow, Poland |
August 10-August 12 | MonadLibre 2016 | Havana, Cuba |
August 12-August 16 | PyCon Australia 2016 | Melbourne, Australia |
August 12-August 14 | GNOME Users and Developers European Conference | Karlsruhe, Germany |
August 18-August 20 | GNU Hackers' Meeting | Rennes, France |
August 18-August 21 | Camp++ 0x7e0 | Komárom, Hungary |
August 20-August 21 | FrOSCon - Free and Open Source Software Conference | Sankt-Augustin, Germany |
August 20-August 21 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
August 22-August 24 | ContainerCon | Toronto, Canada |
August 22-August 24 | LinuxCon NA | Toronto, Canada |
August 22-August 24 | 7th African Summit on FOSS | Kampala, Uganda |
August 24-August 26 | KVM Forum 2016 | Toronto, Canada |
August 24-August 26 | YAPC::Europe Cluj 2016 | Cluj-Napoca, Romania |
August 25-August 26 | Xen Project Developer Summit | Toronto, Canada |
August 25-August 26 | Linux Security Summit 2016 | Toronto, Canada |
August 25-August 26 | The Prometheus conference | Berlin, Germany |
August 25-August 28 | Linux Vacation / Eastern Europe 2016 | Grodno, Belarus |
August 27-September 2 | Bornhack | Aakirkeby, Denmark |
August 31-September 1 | Hadoop Summit Melbourne | Melbourne, Australia |
September 1-September 7 | Nextcloud Conference | Berlin, Germany |
September 1-September 8 | QtCon 2016 | Berlin, Germany |
September 2-September 4 | FSFE summit 2016 | Berlin, Germany |
September 7-September 9 | LibreOffice Conference | Brno, Czech Republic |
September 8-September 9 | First OpenPGP conference | Cologne, Germany |
September 8 | LLVM Cauldron | Hebden Bridge, UK |
September 9-September 10 | RustConf 2016 | Portland, OR, USA |
September 9-September 11 | GNU Tools Cauldron 2016 | Hebden Bridge, UK |
September 9-September 11 | Kiwi PyCon 2016 | Dunedin, New Zealand |
September 9-September 15 | ownCloud Contributors Conference | Berlin, Germany |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol