Weekly Edition for March 6, 2014

Progress toward free GPU drivers

By Jonathan Corbet
March 5, 2014
The fact that a free operating system needs free drivers for the hardware it runs on would seem to be relatively easy to understand, but the history of Linux makes it clear that this is a point that must be made over and over. In recent times, one of the toughest nuts to crack in the fight for free drivers has been graphics processing units (GPUs), and mobile GPUs in particular. Recent events suggest that this fight is being won (albeit slowly), and that the way to prevail in this area has changed little in the last decade or so.

Last September, longtime holdout NVIDIA announced that it would start providing documentation to the Nouveau project, which has worked for many years to provide high-quality, reverse-engineered drivers for NVIDIA's video chipsets. That was a major step in the right direction, but things got even better in February, when NVIDIA made some initial, tentative contributions to Nouveau directly. Given that NVIDIA has been seen as the epitome of an uncooperative vendor in this area for many years, these steps marked a major change. NVIDIA is far from fully open, but there appears to be a will, finally, to move in that direction.

Another longtime purveyor of closed-source GPU drivers has been Broadcom, which, to all appearances, had no interest in cooperating with the development community in this area. So it came as a surprise to many when, on February 28, Broadcom announced the immediate release of its VideoCore driver stack under a three-clause BSD license. By all appearances, this code (and the documentation released with it) is sufficient to implement a fully functional graphics driver for a number of Broadcom's system-on-chip products, including the system found in the Raspberry Pi. Closed-source graphics drivers should soon be a thing of the past on such platforms.

Along with releasing the source and documentation, Broadcom appears to be saying that it understands the problem:

Binary drivers prevent users from fixing bugs or otherwise improving the graphics stack, and complicate the task of porting new operating systems to a device without vendor assistance. But that’s changing, and Broadcom is taking up the cause.

That said, Broadcom has not contributed a driver that will make its way upstream in the near future. What we have instead is a code (and documentation) dump that will make writing that driver possible. It is unlikely that the existing code is suitable for the mainline kernel or the user-space parts of the graphics stack; code that has lived for years behind a corporate firewall is rarely even close to meeting the community's standards. But the important part — the information on how the GPU works — is there; the community should be able to do the rest.

Of course, there are many other manufacturers of mobile GPUs, and few of them have been forthcoming about programming information for those GPUs. So the fight against binary blobs in this area will continue. In a number of cases, there are projects working toward the development of free drivers for these GPUs; see Freedreno, Etnaviv, and Lima, for example. None of those projects is yet ready to replace the vendor's binary-only drivers, but progress is being made in that direction.

These projects demonstrate one facet of what has proved to be a successful strategy against closed hardware. This is, after all, a discussion that the community has had to undertake many times with different groups of vendors. Each time around, what has appeared to work is a combination of these techniques:

  • First and foremost: make it clear that proprietary drivers are simply not acceptable to the community. These drivers receive little cooperation from developers or development projects and little love from users. The displeasure expressed by the Raspberry Pi community may have had a lot to do with Broadcom's change of heart; note that Broadcom's announcement was written by Eben Upton, co-founder of the Raspberry Pi Foundation.

  • Address vendors' excuses for not releasing their drivers. Wireless network drivers were held proprietary for years out of fears of legal problems with spectrum-regulatory agencies; over time, kernel developers made it clear that they had no interest in operating wireless devices in non-compliant ways, and those fears eventually went away. NVIDIA once claimed that the community couldn't possibly handle the complexity of a driver for its hardware; the community has proved otherwise, even when handicapped by a lack of documentation.

  • Emphasize the costs of maintaining out-of-tree code and the benefits that come from merging code upstream. A responsible company cannot shed all of its maintenance costs by upstreaming code, but it can certainly reduce them and, often, drop the maintenance of in-house infrastructure that duplicates upstream kernel mechanisms entirely.

  • When vendors refuse to cooperate, reverse engineer their hardware and write drivers of our own. As the Nouveau experience shows, these projects can be successful in creating highly functional drivers; they also often end up with the original vendor deciding to contribute to the community's driver rather than maintain its own.

  • Demand hardware with free-driver support from equipment manufacturers who, in turn, will apply pressure to their suppliers. This technique worked well in the server market; there is no real evidence that companies like NVIDIA and Broadcom are responding to similar incentives in the mobile market, but certainly that kind of pressure can only help.

In the GPU market, it has long been speculated that vendors have insisted on holding onto their source as a result of patent worries. It would be hard to argue that these concerns are entirely out of line; the patent troll situation does not appear to be getting any better. But, perhaps, some recent high-profile victories against patent trolls, combined with increasing membership in organizations like the Open Invention Network, have helped to reduce those fears slightly. And, if nothing else, patent trolls, too, are able to perform reverse engineering; a lack of source seems unlikely to deter a troll with multi-million-dollar settlements in its eyes.

So we might, just maybe, be moving toward a new era where open GPU drivers will be the rule, rather than the exception. That can only lead to better hardware support, more freedom, and increased innovation around mobile devices. One thing it will not do is signal an end to problems with closed-source drivers; that particular fight seems to go on forever. But the free software community has shown many times that it has the techniques and the patience to overcome such obstacles.

Comments (7 posted)

Using git and make for tasks beyond coding

By Nathan Willis
March 5, 2014
SCALE 2014

One of the best aspects of community-driven conferences like SCALE is the greater proliferation of talks about personal side-ventures, pet projects, and activities outside the confines of the standard enterprise/cloud/mobile topics. Don Marti presented such a talk at SCALE 12x in Los Angeles, describing how he uses make, git, and other tools for work other than software development. His primary example use case was document processing, but he noted the applicability of the tools to other scenarios.

Make-ing trouble

The story started when Marti discovered Pandoc, he said, which "changed his life." Pandoc is a document format converter that can translate back and forth between HTML, Markdown, EPUB, PDF, DOCX, and many other formats. It is a command-line tool with a rich syntax, he said, but that has good and bad sides. Commands are compact, so they can fit into an email "and put an end to mailing-list arguments," but it can be difficult to remember the lengthy or complex invocations. There are other command-line tools with the same issue, he added, like ImageMagick, aspell, and Ledger.

[Don Marti]

If one frequently switches between projects (such as between a gritty cyberpunk noir novel and a marketing whitepaper), things can get still worse, as it is difficult to keep multiple command syntax options straight—it would be an embarrassing mistake to use the cyberpunk-specific spelling dictionary on the professional paper by mixing up the relevant aspell commands, after all. "Wouldn't it be nice," he asked, "to have an executable dictionary to keep your commands in?"

Fortunately, a notebook for commands to process stuff is just what a makefile is. Makefiles are basically sets of rules for making one thing from another, he said, so they can be used for any repetitive task, not just compilation. After a brief overview of makefile syntax, he then went into several makefile examples for use with document processing. For example, by writing the following make target, he could create his presentation slides in Markdown format, then use Pandoc (with an appropriate template) to translate them into HTML:

    index.html : index.md template.kittenslides
        pandoc --section-divs -t html5 \
        --template template.kittenslides -s -o $@ $<

Of course, a good makefile would not be limited to one hard-coded filename, so Marti explained how to use % pattern substitution to turn the preceding example into a reusable one. He also noted some helpful tricks he has picked up along the way, such as using touch to create empty files that are used only to keep track of the state of the process—touch deploy to denote pushing changes out to a web site, touch print to denote that print-ready output was generated, and so on.
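The reusable version he described might look something like the following sketch; the file names, and the deployment host in the marker-file example, are invented for illustration rather than taken from the talk:

```make
# Build any .html file from the matching .md file; the shared template is
# also listed as a prerequisite, so editing it triggers a rebuild too.
%.html : %.md template.kittenslides
	pandoc --section-divs -t html5 \
	--template template.kittenslides -s -o $@ $<

# Marker-file trick: "make deploy" pushes the output somewhere, then
# records that fact in an empty "deploy" file that later rules can test.
deploy : index.html
	scp index.html user@example.com:public_html/
	touch deploy
```

With the pattern rule in place, `make slides.html` builds from `slides.md`, `make talk.html` from `talk.md`, and so on, with no per-file rules to maintain.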

He also explained how to call shell commands—like

   find . -name '??-*' | sort

from a makefile and iterate through the results, which allowed him to automatically process novel chapters he was writing as separate files simply by adhering to the naming scheme that matched the search pattern. Similarly, he explained how he solved the novel-versus-whitepaper aspell problem by creating one makefile target that set the appropriate dictionary in a variable, and another target that referenced the first when calling aspell.

Git going

Makefiles can automate repetitive tasks, Marti said, but there are even more things one can do by pairing them with git. The most obvious example is accessing the same project files from multiple machines, followed by using git to manage projects for teams. There are, however, some differences that come into play when using git for working with document processing.

For those entirely new to git, Marti advised staying away from git tutorials, which are "pure evil," because they attempt to explain git through examples. That, he said, is like learning how to escape from Death Valley by memorizing the number of steps in each compass direction: getting confused and lost is easy, because it ignores the low-level reality that one has to understand to get by. Git is actually easier to learn from diagrams and low-level commands, he said, working from the core concepts up.

Part of this way of learning git applies to adapting it for use with non-coding tasks—for instance, a branch in git is really a pointer to a specific commit, and certain operations just move that pointer. With that educational aside completed, he went on to describe how he further automates document-processing tasks with git hooks. The most important, he explained, are the pre-commit hook, which determines whether one can make a commit, the update hook, which runs whenever the branch pointer moves, and post-commit, which runs after the branch pointer is moved. They are therefore ripe for use as a way to automatically execute spell checking, format conversion, and other document-processing tasks.

Although it might be tempting to use hooks to execute make against the targets described in the first part of the talk, he recommended against it—primarily because git hooks are not centrally managed. They live in personal repositories, so over time, different users would get their hook makefiles out of sync, or simply break them—and break them in a manner that is invisible to other users on other machines, who can only see the contents of the git repository.

Instead, Marti's solution is to store the commands that need to run at pre-commit and update in makefile targets, then store the makefile itself in the git repository. In other words, the makefile contains a target named update that executes the necessary commands. Each user's .git/hooks/update only has to call make update, so it is nearly impossible for two machines to get out of sync.
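A hook written that way is a one-liner; this sketch assumes the repository's makefile defines a matching target:

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): all of the real logic lives in the
# version-controlled makefile; a nonzero exit from make aborts the commit.
exec make pre-commit
```

Since `.git/hooks` itself is not versioned, each clone installs this stub once, but any changes to the checks themselves arrive through the repository like any other commit.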

Additional tricks can be performed with git's "smudge" and "clean" filters, he said. Although they are often used for transforming plain text (such as cleaning up Unix-style line endings), there is no requirement that they do so. He explained how he uses Pandoc to transform a shared file to and from Microsoft's DOCX format: he edits the file himself in Markdown format (although the file uses the .docx extension), and uses the clean filter to call Pandoc to convert it to genuine DOCX and copy it to a shared Dropbox folder. There, other users can edit it in Microsoft Office and save their changes; git's smudge filter converts the document back to Markdown whenever Marti does a checkout, and none of the Dropbox users are any wiser about the true format of the original file.
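Such a filter might be wired up roughly as follows; the filter name is invented, and the exact pandoc invocations are approximations rather than the commands from the talk:

```
# .gitattributes (committed): route .docx files through the filter
*.docx filter=pandocx

# Per-clone configuration; clean runs on "git add" (working tree to
# repository), smudge runs on checkout (repository to working tree):
$ git config filter.pandocx.clean  'pandoc -f markdown -t docx -o -'
$ git config filter.pandocx.smudge 'pandoc -f docx -t markdown'
```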

Finally, he described how he automates even more of the work with a few additional utilities. By using inotifywait, it is possible to monitor a directory for updated files and automatically run make against them whenever there are changes. By using etcd, one can replicate the relevant client-side git configuration between multiple machines (in "almost-meteor-proof" fashion). There are a few possible failure modes to the etcd replication setup, he conceded, but they are no worse than the existing failure modes of git (such as a user trying to push an out-of-date branch), "so users can just deal."

In conclusion, Marti encouraged the audience to do their own further reading; the workflows he described were specific to his own experience, but the same general principles could be helpful for anyone: makefiles can remember any command for you (not just compilation), git's hooks and filters can further automate away repetition, and there are plenty of other tools that save work on typing, replication, and other drudgery. Just because many of those tools were created for software development does not limit the creative uses they can be put to if some thought is put into the process.

Comments (46 posted)

A thumbnail sketch of Krita 2.8

By Nathan Willis
March 5, 2014

Version 2.8 of the digital-painting application Krita has been released. The project recently formed its own backing foundation and has undertaken a concerted effort to fund development through (among other things) the sale of training materials, so a natural question might be whether or not this new release shows any substantial gains that could be attributed to the more formal project management. It is hard to say for sure, of course, but the change does look like a win—the new release includes a series of technical improvements as well as practical contributions from Krita-using artists.

Krita 2.8 was officially released on March 5. Developer Dmitry Kazakov has been publishing the releases to the "Krita Lime" personal package archive (PPA) for Ubuntu and its derivatives. Beta builds have been released steadily for the past few months; a release candidate appeared on February 27 that was feature-complete with respect to the final 2.8 release.


The 2.8 release introduces several under-the-hood changes in Krita. The first is a major refactoring of the application's OpenGL canvas code. Krita has used OpenGL to render the drawing surface on Linux since version 2.0 in 2009; the OpenGL canvas was instrumental in some of Krita's best-loved features, such as the ability to freely rotate the canvas on screen (that is, rotate the orientation of the canvas as it is shown in the window, just like one would a piece of paper, as opposed to rotating the image contents). For 2.8, the OpenGL support was brought up to OpenGL 3.1 and OpenGL ES 2.0 compliance (the latter of which enables the tablet-centric "Krita Sketch" variant to run on embedded hardware).

Along the way, Krita's Windows builds gained OpenGL support as well; 2.8 marks the first version of Krita to be declared stable on Windows. But the more interesting improvement for Linux users is an entirely new OpenGL scaling algorithm that offers better quality than the default OpenGL scaling options. The upshot is smoother rendering, especially when zooming in on the canvas.

The new rendering code was written by Kazakov, whose time on the project is funded by the Krita Foundation. Kazakov also undertook the other major piece of plumbing to debut in version 2.8: native support for pressure-sensitive graphics tablets. Pressure-sensitive tablets are vital to Krita's primary mission of implementing natural-media simulation. In the past, Krita had used the tablet support implemented by Qt, but the Qt library was plagued by problems like poor Windows performance and limited hardware support—Qt supported Wacom, the most famous vendor of pressure-sensitive tablets, but no other manufacturers. So "with leaden shoes," as project lead Boudewijn Rempt described it, the team wrote its own tablet support subsystem.

[G'MIC in Krita 2.8]

Another significant contribution to 2.8 is Lukáš Tvrdý's G'MIC plugin. G'MIC is an image-processing framework with capabilities often described as "weird and wild" or words to that effect; in a sense it is a competitor to an image-processing library like ImageMagick, but it has a long history of supporting complex processing pipelines that need to be seen to be believed. The most well-known effect is "inpainting," which intelligently synthesizes pixels to fill in regions of an image with far more realism than the traditional "cloning brush" method. GIMP has had G'MIC support for quite some time; Tvrdý's plugin brings that feature set to Krita.

It will, of course, take some time for Krita users to discover all of the interesting things that G'MIC can do. No doubt it will be informative to see how G'MIC filters perform on natural-media artwork as opposed to its use with photographs in GIMP.


In addition to the various under-the-hood changes, Krita 2.8 also sports a handful of new painting tools. The first is a tool for creating arrays of clones—essentially, mass-duplication of picture elements, so that rather than repeatedly cutting and pasting, the user can paint an image that is tiled in as many rows and columns as are needed.

[Krita 2.8 cloning]

The tool works by cloning the current image layer, optionally adding a horizontal and vertical offset to each clone so that they are arranged in a 2D grid. But these clone layers are not mere static copies of the original; they are updated in real time as the user continues to paint on the original. This makes it possible to draw all of the clones simultaneously, which is a lot easier than drawing one, duplicating it, then undoing the duplication if things look wrong. Back in September 2013, when the feature was first previewed, it was hailed as a tool of particular utility in video game design, but there are certainly many more potential uses.

[Krita 2.8 clones]

Another new painting feature is "wraparound mode," which helps the user create tileable images. There are two parts to the feature: one repeats the canvas in every direction so that the tiled view is visible, while the other wraps any brushstrokes that go past the edge of the canvas back around to the other side. Again, the simplest use cases to describe for the feature might be video-game-related (e.g., repeating background textures that blend smoothly at all of the edges), but that is by no means the only possibility.

[Wrap-around mode in Krita 2.8]

There is an assortment of smaller feature additions, too. One is the "pseudo-infinite canvas," which lets the user extend the edge of the image on the fly by clicking on a button at the boundary. Krita can also load external files as display-only layers that are automatically updated whenever the external file is changed. This is an alternative to importing the external file's contents as a native image layer, which (naturally) can no longer be updated in another program.

As interesting as entirely new functionality is, one could also make a case that the addition of several new brush sets is more significant. Krita supports the creation of custom painting brushes that model all sorts of physical properties—including changes to the size, shape, opacity, and behavior of the brush in response to various graphics-tablet dynamics (pressure, tilt, speed, etc.). But with all of those available variables, it can take some work to tweak the parameters into settings that reflect something useful for daily work.

The project's response has been to promote brush sets that have been finely honed by artists who know what they are doing, and several new ones have recently made their debut in time for 2.8. One of the most interesting is Vasco Basquéhas's watercolor set, which simulates wet-paint behavior like mixing and diffusion. Krita has had support for modeling the physics of liquid paint for quite some time, but has lacked a brush set tuned to recreate watercolor effects. Despite the fact that watercolors are one of the first media that kids are exposed to, they are a difficult form of painting to master. Basquéhas's watercolor brushes may not simplify that learning process, but at least they simplify cleanup.

Basquéhas has also created a "modular" mixed-media brush set that emulates a range of different tools (pencils, pens, erasers, smudge tools, and so forth). So has David Revoy, whose set features entirely different implementations of some tools, in addition to completely different brushes.

Artist-created brush sets are a tremendous addition to Krita's toolbox, but if there is any criticism to be leveled at the feature it would be that it can become difficult to navigate the ever-ballooning library of brushes available. But there is work underway to improve the brush-management experience. Revoy, Timothée Giet, and Ramón Miranda have worked together on a standard approach to the thumbnail icons each brush displays in Krita's tool palette, and Basquéhas's modular set includes a naming convention to help the user keep the dozens of options straight. Krita itself has also implemented features to help users organize their brushes more easily, such as the ability to assign tags from the right-click context menu.

Critical eye

Based on the 2.8 release candidate build, the new Krita release is another solid one. One might have worried that the new implementation of graphics tablet support or the refactored OpenGL canvas could cause problems—in fact, Krita pops up a warning when one enables OpenGL rendering—but in my own testing I encountered no crashes or mysterious bugginess. Undoubtedly a "YMMV" caveat is warranted, particularly in light of all the various GPU and driver combinations available, but the release candidate announcement only cited one known OpenGL issue, which is a good sign.

Whether the internal graphics tablet support is better than Qt's built-in support is a bit difficult to gauge from a single test. Krita 2.8's tablet support is certainly less difficult to configure, which is a factor not to overlook. For people who own non-Wacom hardware, of course, any support at all is a big improvement. Hopefully, in the long run, Krita's improved tablet support will eventually make its way into other applications (or toolkits).

As for the new features, array cloning and wraparound mode are both simple enough to learn one's way around without outside assistance. The new brush sets can take some getting used to in order to see the particular advantages each individual brush offers, but the same would be said of media in real life, too.

If one does decide that some expert guidance is wise, however, the Krita Foundation recently released its second training DVD, Muses, which showcases Miranda's work. This (and the Drawing Comics with Krita DVD that preceded it) is a welcome new approach to raising funds for further Krita development. Considering that much of Kazakov's 2.8 work was funded development, it would appear that Krita has managed to find a sustainable fundraising model—which, as most people know, is a bit of a rarity among free software desktop applications.

Of course, making a push into new territory helps attract new users in need of training, so Krita 2.8's stable, OpenGL-rendered Windows builds will probably be a boon as well. Speaking of new territory, yet another interesting approach being taken by the project is making Krita available for distribution through Valve's Steam software-delivery service. Krita recently got approved by Steam, although the actual release has not yet happened. It will be interesting to watch that release happen and see how the numbers impact the Krita project. Regardless of how smoothly it goes, though, it is nice to see a free software project pursue an out-of-the-box distribution method like Steam. No doubt it will not be the last to do so—and it looks like 2.8 is a release many new users will be happy with.

Comments (2 posted)

Page editor: Jonathan Corbet


A longstanding GnuTLS certificate validation botch

By Jake Edge
March 5, 2014

Something rather reminiscent of Apple's "goto fail;" bug has been found, but this time it hits rather closer to home for the free software community since it lives in GnuTLS. Certificate validation for SSL/TLS has been under some scrutiny lately, evidently to good effect. But this bug is arguably much worse than Apple's, as it has allowed crafted certificates to evade validation checks for all versions of GnuTLS ever released since that project got started in late 2000.

Perhaps the biggest irony is that the fix changes a handful of "goto cleanup;" lines to "goto fail;". It also made other changes to the code (including adding a "fail" label), but the resemblance to the Apple bug is too obvious to ignore. While the two bugs are actually not that similar, other than both being in the certificate validation logic, the timing and look of the new bug does give one pause.

The problem boils down to incorrect return values from a function when there are errors in the certificate. The check_if_ca() function is supposed to return true (any non-zero value in C) or false (zero) depending on whether the issuer of the certificate is a certificate authority (CA). A true return should mean that the certificate passed muster and can be used further, but the bug meant that error returns were misinterpreted as certificate validations.

Prior to the fix, check_if_ca() would return error codes (which are negative numbers) when it encountered a problem, which would be interpreted as a true value by the caller. The fix was made in two places. First, ensuring that check_if_ca() returned zero (false) when there were errors, and second, also testing the return value in verify_crt() for != 1 rather than == 0.

It is hard to say how far back this bug goes, as the code has been restructured several times over the years, but the GnuTLS advisory warns that all versions are affected. There are a lot of applications that use GnuTLS for their SSL/TLS secure communication needs. This thread at Hacker News mentions a few, including Emacs, wget, NetworkManager, VLC, Git, and others. On my Fedora 20 system, attempting to remove GnuTLS results in Yum wanting to remove 309 dependent packages, including all of KDE, Gnucash, Calligra, LibreOffice, libvirt, QEMU, Wine, and more.

GnuTLS came about partly because the OpenSSL license is problematic for GPL-licensed programs. OpenSSL has a BSD-style license, but still includes the (in)famous "advertising clause". The license has been a source of problems before, so GPL programs often avoid it. One would hope that the OpenSSL developers are diligently auditing their code for problems similar to what we have seen from Apple and GnuTLS.

It was a code audit done by GnuTLS founder Nikos Mavrogiannopoulos (at the request of Red Hat, his employer) that discovered the bug. He may well have been the one to introduce it long ago, as he has done much of the work on the project—and the file in question (lib/x509/verify.c). He described it as "an important (and at the same time embarrassing) bug". It is clearly that, but it is certainly a good thing that it has at last been found and fixed.

Several commenters in various places have focused on the "goto" statement as somehow being a part of the problem for both Apple and GnuTLS. That concern seems misplaced. While, in both cases, a goto statement was located at the point where the bug was fixed, the real problem was twofold: botched error handling and incomplete testing. While Edsger Dijkstra's advice on goto and its harmful effects on the structure of programs is cogent, it isn't completely applicable here. Handling error conditions in C functions is commonly done using goto and, if it is done right, goto actually adds to the readability of the code. Neither Apple nor GnuTLS's flaw can really be laid at the feet of goto.

In something of a replay of the admonishments in last week's article on the Apple flaw: all security software needs to be better tested. We are telling our users that we are protecting their communications with the latest and greatest encryption, but we are far too often failing them with implementation errors. Testing with bad certificates would seem to be a must; some presumably was done for both code bases, but obviously some possibilities of badly formed or signed certificates were skipped. More (and better) testing is indicated.

[ Thanks to Paul Sladen for the heads-up about this bug. ]

Comments (117 posted)

Brief items

Security quotes of the week

I am still trying to get my head around the implications that the British government's equivalent of the NSA probably holds the world's largest collection of pornographic videos, that the stash is probably contaminated with seriously illegal material, and their own personnel can in principle be charged and convicted of a strict liability offence if they try to do their job. It does, however, suggest to me that the savvy Al Qaida conspirators [yes, I know this is a contradiction in terms] of the next decade will hold their covert meetings in the nude, on Yahoo! video chat, while furiously masturbating.
Charlie Stross

This is truly atrocious. Given that “encrypting” the backup configuration files is done presumably to protect end users, expecting this to thwart any attacker and touting it as a product feature is unforgivable.

OK, I don’t really care that much. I’m just disappointed that it took longer to write this blog post than it did to break their “crypto”.

Craig of /dev/ttyS0 is saddened by Linksys router "encryption" (XOR with 0xFF)

The plan was confirmed by Keurig's CEO who stated on a recent earnings call that the new maker indeed won't work with "unlicensed" pods as part of an effort to deliver "game-changing performance." "Keurig 2.0" is expected to launch this fall. French Press and pour-over manufacturers like Chemex have plenty of time to get their thank you notes to Keurig in the mail ahead of time as users are hopefully nudged toward the realization they could be drinking much better coffee anyway.
Karl Bode of Techdirt comments on coffee maker DRM

If the NSA collects -- I'm using the everyday definition of the word here -- all of the contents of everyone's e-mail, it doesn't count it as being collected in NSA terms until someone reads it. And if it collects -- I'm sorry, but that's really the correct word -- everyone's phone records or location information and stores it in an enormous database, that doesn't count as being collected -- NSA definition -- until someone looks at it. If the agency uses computers to search those emails for keywords, or correlates that location information for relationships between people, it doesn't count as collection, either. Only when those computers spit out a particular person has the data -- in NSA terms -- actually been collected.
Bruce Schneier

Comments (none posted)

Critical crypto bug leaves Linux, hundreds of apps open to eavesdropping (ars technica)

According to this ars technica article, the GnuTLS library has a certificate validation vulnerability that looks awfully similar to the recently patched Apple hole. "This time, instead of a single misplaced 'goto fail' command, the mistakes involve errors with several 'goto cleanup' calls. The GnuTLS program, in turn, prematurely terminates code sections that are supposed to establish secure TLS connections only after the other side presents a valid X509 certificate signed by a trusted source. Attackers can exploit the error by presenting vulnerable systems with a fraudulent certificate that is never rejected, despite its failure to pass routine security checks."

Comments (94 posted)

New vulnerabilities

activemq: multiple vulnerabilities

Package(s):activemq CVE #(s):CVE-2013-2035 CVE-2013-4330 CVE-2014-0003
Created:March 4, 2014 Updated:November 21, 2014
Description: From the Red Hat advisory:

The HawtJNI Library class wrote native libraries to a predictable file name in /tmp/ when the native libraries were bundled in a JAR file, and no custom library path was specified. A local attacker could overwrite these native libraries with malicious versions during the window between when HawtJNI writes them and when they are executed. (CVE-2013-2035)

A flaw was found in Apache Camel's parsing of the FILE_NAME header. A remote attacker able to submit messages to a Camel route, which would write the provided message to a file, could provide expression language (EL) expressions in the FILE_NAME header, which would be evaluated on the server. This could lead to arbitrary remote code execution in the context of the Camel server process. (CVE-2013-4330)

It was found that the Apache Camel XSLT component allowed XSL stylesheets to call external Java methods. A remote attacker able to submit messages to a Camel route could use this flaw to perform arbitrary remote code execution in the context of the Camel server process. (CVE-2014-0003)

Mageia MGASA-2014-0461 hawtjni 2014-11-21
Red Hat RHSA-2014:0254-01 activemq 2014-03-05
Red Hat RHSA-2014:0245-01 activemq 2014-03-03

Comments (none posted)

chromium: multiple vulnerabilities

Package(s):chromium CVE #(s):CVE-2013-6652 CVE-2013-6663 CVE-2013-6664 CVE-2013-6665 CVE-2013-6666 CVE-2013-6667 CVE-2013-6668 CVE-2013-6802 CVE-2014-1681
Created:March 5, 2014 Updated:December 10, 2014
Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in Chromium and V8. A context-dependent attacker could entice a user to open a specially crafted web site or JavaScript program using Chromium or V8, possibly resulting in the execution of arbitrary code with the privileges of the process or a Denial of Service condition. Furthermore, a remote attacker may be able to bypass security restrictions or have other unspecified impact.

Mandriva MDVSA-2015:142 nodejs 2015-03-29
Red Hat RHSA-2014:1744-01 v8314-v8 2014-10-30
Fedora FEDORA-2014-10975 v8 2014-09-28
Fedora FEDORA-2014-11065 v8 2014-09-28
Fedora FEDORA-2014-10975 nodejs 2014-09-28
Fedora FEDORA-2014-11065 nodejs 2014-09-28
Debian DSA-2883-1 chromium-browser 2014-03-23
Mageia MGASA-2014-0121 chromium-browser-stable 2014-03-06
Gentoo 201403-01 chromium 2014-03-05

Comments (none posted)

chromium: multiple vulnerabilities

Package(s):chromium CVE #(s):CVE-2013-6653 CVE-2013-6654 CVE-2013-6655 CVE-2013-6656 CVE-2013-6657 CVE-2013-6658 CVE-2013-6659 CVE-2013-6660 CVE-2013-6661
Created:February 28, 2014 Updated:March 5, 2014

Description: From the Chromium blog:

CVE-2013-6653: Use-after-free related to web contents.

CVE-2013-6654: Bad cast in SVG.

CVE-2013-6655: Use-after-free in layout.

CVE-2013-6656: Information leak in XSS auditor.

CVE-2013-6657: Information leak in XSS auditor.

CVE-2013-6658: Use-after-free in layout.

CVE-2013-6659: Issue with certificates validation in TLS handshake.

CVE-2013-6660: Information leak in drag and drop.

CVE-2013-6661: Various fixes from internal audits, fuzzing and other initiatives. Of these, seven are fixes for issues that could have allowed for sandbox escapes from compromised renderers.

Debian DSA-2883-1 chromium-browser 2014-03-23
openSUSE openSUSE-SU-2014:0327-1 chromium 2014-03-05
Gentoo 201403-01 chromium 2014-03-05
Mageia MGASA-2014-0107 chromium-browser 2014-02-27

Comments (none posted)

drupal6-filefield: access bypass

Package(s):drupal6-filefield CVE #(s):
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Drupal advisory:

The FileField module allows users to upload files in conjunction with the Content Construction Kit (CCK) module in Drupal 6.

The module doesn't sufficiently check permissions on revisions when determining if a user should have access to a particular file attached to that revision. A user could gain access to private files attached to revisions when they don't have access to the corresponding revision.

This vulnerability is mitigated by the fact that an attacker must have access to upload files through FileField module while creating content, and the site must be using a non-core workflow module that allows users to create unpublished revisions of content.

Fedora FEDORA-2014-2615 drupal6-filefield 2014-03-01
Fedora FEDORA-2014-2648 drupal6-filefield 2014-03-01

Comments (none posted)

drupal6-image_resize_filter: denial of service

Package(s):drupal6-image_resize_filter CVE #(s):
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Drupal advisory:

This module enables you to resize images based on the HTML contents of a post. Images with specified height and width properties that differ from the original image result in a resized image being created.

The module doesn't limit the number of resized images per post or user, which could allow a user to post a large number of images that need to be resized within a single piece of content. This could cause the server to become overwhelmed by requests to resize images.

This vulnerability is mitigated by the fact that an attacker must have a role that allows them to post content that utilizes the image resize filter.

Fedora FEDORA-2014-2612 drupal6-image_resize_filter 2014-03-01
Fedora FEDORA-2014-2611 drupal6-image_resize_filter 2014-03-01

Comments (none posted)

drupal7-ctools: access bypass

Package(s):drupal7-ctools CVE #(s):
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Drupal advisory:

This module provides content editors with an autocomplete callback for entity titles, as well as an ability to embed content within the Chaos tool suite (ctools) framework.

Prior to this version, ctools did not sufficiently check access grants for various types of content other than nodes. It also didn't sufficiently check access before displaying content with the relationship plugin.

These vulnerabilities are mitigated by the fact that you must be using entities other than node or users for the autocomplete callback, or you must be using the relationship plugin and displaying the content (e.g. in panels).

Fedora FEDORA-2014-2578 drupal7-ctools 2014-03-01
Fedora FEDORA-2014-2562 drupal7-ctools 2014-03-01

Comments (none posted)

easy-rsa: weak keys

Package(s):easy-rsa CVE #(s):
Created:March 4, 2014 Updated:March 5, 2014
Description: From the Fedora advisory:

Update to 2.2.2, stronger defaults for key strength. Use SHA256 instead of SHA1.

Fedora FEDORA-2014-2869 easy-rsa 2014-03-04
Fedora FEDORA-2014-2804 easy-rsa 2014-03-04

Comments (none posted)

egroupware: remote code execution

Package(s):egroupware CVE #(s):CVE-2014-2027
Created:March 4, 2014 Updated:March 29, 2015
Description: From the Mageia advisory:

eGroupware prior to the fixed release is vulnerable to remote file deletion and possible remote code execution due to user input being passed to PHP's unserialize() method.

Mandriva MDVSA-2015:087 egroupware 2015-03-28
Mageia MGASA-2014-0116 egroupware 2014-03-03

Comments (none posted)

gnutls: certificate verification issue

Package(s):gnutls CVE #(s):CVE-2014-0092
Created:March 4, 2014 Updated:March 13, 2014
Description: The GnuTLS library has error-handling issues that can result in the false validation of fraudulent certificates; see this article for details.
Mandriva MDVSA-2015:072 gnutls 2015-03-27
Fedora FEDORA-2014-14760 gnutls 2014-11-13
Gentoo 201406-09 gnutls 2014-06-13
SUSE SUSE-SU-2014:0445-1 gnutls 2014-03-25
Red Hat RHSA-2014:0288-01 gnutls 2014-03-12
openSUSE openSUSE-SU-2014:0346-1 gnutls 2014-03-08
Mandriva MDVSA-2014:048 gnutls 2014-03-10
openSUSE openSUSE-SU-2014:0328-1 gnutls 2014-03-05
Fedora FEDORA-2014-3363 gnutls 2014-03-06
Fedora FEDORA-2014-3413 gnutls 2014-03-06
SUSE SUSE-SU-2014:0324-1 gnutls 2014-03-04
openSUSE openSUSE-SU-2014:0325-1 gnutls 2014-03-05
CentOS CESA-2014:0247 gnutls 2014-03-04
CentOS CESA-2014:0246 gnutls 2014-03-04
Ubuntu USN-2127-1 gnutls26 2014-03-04
SUSE SUSE-SU-2014:0323-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0322-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0321-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0320-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0319-1 gnutls 2014-03-04
Slackware SSA:2014-062-01 gnutls 2014-03-03
Scientific Linux SLSA-2014:0246-1 gnutls 2014-03-03
Scientific Linux SLSA-2014:0247-1 gnutls 2014-03-03
Oracle ELSA-2014-0247 gnutls 2014-03-03
Oracle ELSA-2014-0246 gnutls 2014-03-03
Mageia MGASA-2014-0117 gnutls 2014-03-03
Debian DSA-2869-1 gnutls26 2014-03-03
Red Hat RHSA-2014:0247-01 gnutls 2014-03-03
Red Hat RHSA-2014:0246-01 gnutls 2014-03-03

Comments (none posted)

gnutls: X.509 v1 certificate handling flaw

Package(s):gnutls CVE #(s):CVE-2009-5138
Created:March 4, 2014 Updated:March 5, 2014
Description: From the Red Hat advisory:

A flaw was found in the way GnuTLS handled version 1 X.509 certificates. An attacker able to obtain a version 1 certificate from a trusted certificate authority could use this flaw to issue certificates for other sites that would be accepted by GnuTLS as valid.

SUSE SUSE-SU-2014:0445-1 gnutls 2014-03-25
CentOS CESA-2014:0247 gnutls 2014-03-04
SUSE SUSE-SU-2014:0323-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0322-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0321-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0320-1 gnutls 2014-03-04
SUSE SUSE-SU-2014:0319-1 gnutls 2014-03-04
Scientific Linux SLSA-2014:0247-1 gnutls 2014-03-03
Oracle ELSA-2014-0247 gnutls 2014-03-03
Red Hat RHSA-2014:0247-01 gnutls 2014-03-03

Comments (none posted)

kernel: information leak

Package(s):kernel CVE #(s):CVE-2014-2038
Created:February 27, 2014 Updated:March 5, 2014
Description: From the Mageia advisory:

A Linux kernel built with the NFS filesystem (CONFIG_NFS_FS) along with support for the NFSv4 protocol (CONFIG_NFS_V4) is vulnerable to an information leak. It can occur while writing to a file for which the NFS server has offered a write delegation to the client; such a delegation allows the NFS client to perform the operation locally without immediate interaction with the server. A user or program could use this flaw to leak kernel memory. (CVE-2014-2038)

Ubuntu USN-2137-1 linux-lts-saucy 2014-03-07
Ubuntu USN-2140-1 kernel 2014-03-07
Mageia MGASA-2014-0103 kernel 2014-02-26

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2014-2039
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Red Hat bugzilla:

A Linux kernel built for the s390 architecture (CONFIG_S390) is vulnerable to a crash due to a low-address protection exception, which occurs when an application uses a linkage stack instruction.

An unprivileged user/application could use this flaw to crash the system resulting in DoS.

Scientific Linux SLSA-2014:0771-1 kernel 2014-06-19
Oracle ELSA-2014-0771 kernel 2014-06-19
CentOS CESA-2014:0771 kernel 2014-06-20
Red Hat RHSA-2014:0771-01 kernel 2014-06-19
openSUSE openSUSE-SU-2014:0766-1 Evergreen 2014-06-06
SUSE SUSE-SU-2014:0696-1 Linux kernel 2014-05-22
Debian DSA-2906-1 linux-2.6 2014-04-24
Mandriva MDVSA-2014:124 kernel 2014-06-13
Fedora FEDORA-2014-2887 kernel 2014-03-01
Fedora FEDORA-2014-3094 kernel 2014-02-28

Comments (none posted)

libvirt: unsafe usage of paths under /proc/$PID/root

Package(s):libvirt CVE #(s):CVE-2013-6456
Created:March 3, 2014 Updated:May 2, 2014
Description: From the Red Hat bugzilla:

Eric Blake from Red Hat notes:

The LXC driver will open paths under /proc/$PID/root for some operations it performs on running guests. For the virDomainShutdown and virDomainReboot APIs it will use this to access the /dev/initctl path in the container. For the virDomainDeviceAttach / virDomainDeviceDettach APIs it will use this to create device nodes in the container's /dev filesystem. If any of the path components under control of the container are symlinks the container can cause the libvirtd daemon to access the incorrect files.

A container can cause the administrator to shutdown or reboot the host OS if /dev/initctl in the container is made to be an absolute symlink back to itself or /run/initctl. A container can cause the host administrator to mknod in an arbitrary host directory when invoking the virDomainDeviceAttach API by replacing '/dev' with an absolute symlink. A container can cause the host administrator to delete host device when invoking the virDomainDeviceDettach API by replacing '/dev' with an absolute symlink.

Mandriva MDVSA-2015:115 libvirt 2015-03-29
Gentoo 201412-04 libvirt 2014-12-09
Mageia MGASA-2014-0243 libvirt 2014-05-29
Mandriva MDVSA-2014:097 libvirt 2014-05-16
Ubuntu USN-2209-1 libvirt 2014-05-07
openSUSE openSUSE-SU-2014:0593-1 libvirt 2014-05-02
Fedora FEDORA-2014-2864 libvirt 2014-02-28

Comments (none posted)

mariadb: multiple vulnerabilities

Package(s):mariadb CVE #(s):
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Mageia advisory:

MariaDB has been updated to the latest release in the 5.5 series, 5.5.36, which fixes several security vulnerabilities and other bugs. See the Release Notes for more details.

Mageia MGASA-2014-0108 mariadb 2014-02-28

Comments (none posted)

mediawiki: multiple vulnerabilities

Package(s):mediawiki CVE #(s):CVE-2013-6451 CVE-2013-6452 CVE-2013-6453 CVE-2013-6472
Created:March 3, 2014 Updated:March 5, 2014
Description: From the Mageia advisory:

MediaWiki user Michael M reported that the fix for CVE-2013-4568 allowed insertion of escaped CSS values which could pass the CSS validation checks, resulting in XSS (CVE-2013-6451).

Chris from RationalWiki reported that SVG files could be uploaded that include external stylesheets, which could lead to XSS when an XSL was used to include JavaScript (CVE-2013-6452).

During internal review, it was discovered that MediaWiki's SVG sanitization could be bypassed when the XML was considered invalid (CVE-2013-6453).

During internal review, it was discovered that MediaWiki displayed some information about deleted pages in the log API, enhanced RecentChanges, and user watchlists (CVE-2013-6472).

Gentoo 201502-04 mediawiki 2015-02-07
Debian DSA-2891-3 mediawiki 2014-04-04
Debian DSA-2891-2 mediawiki 2014-03-31
Debian DSA-2891-1 mediawiki 2014-03-30
Mandriva MDVSA-2014:057 mediawiki 2014-03-13
Mageia MGASA-2014-0113 mediawiki 2014-03-02

Comments (none posted)

openstack-glance: information leak

Package(s):openstack-glance CVE #(s):CVE-2014-1948
Created:March 5, 2014 Updated:May 13, 2014
Description: From the CVE entry:

OpenStack Image Registry and Delivery Service (Glance) 2013.2 through 2013.2.1 and Icehouse before icehouse-2 logs a URL containing the Swift store backend password when authentication fails and WARNING level logging is enabled, which allows local users to obtain sensitive information by reading the log.

Fedora FEDORA-2014-5198 openstack-glance 2014-05-13
Red Hat RHSA-2014:0229-01 openstack-glance 2014-03-04

Comments (none posted)

openstack-nova: denial of service

Package(s):openstack-nova CVE #(s):CVE-2013-6437
Created:March 5, 2014 Updated:March 5, 2014
Description: From the Red Hat advisory:

A flaw was found in the way the libvirt driver handled short-lived disk back-up files on Compute nodes. An authenticated attacker could use this flaw to create a large number of such files, exhausting all available space on Compute node disks, and potentially causing a denial of service. Note that only Compute setups using the libvirt driver were affected.

Red Hat RHSA-2014:0231-01 openstack-nova 2014-03-04

Comments (none posted)

openstack-packstack: insecure network connections

Package(s):openstack-packstack CVE #(s):CVE-2014-0071
Created:March 5, 2014 Updated:March 5, 2014
Description: From the Red Hat advisory:

It was found that PackStack did not correctly install the rules defined in the default security groups when deployed on OpenStack Networking (neutron), allowing network connections to be made to systems that should not have been accessible.

Red Hat RHSA-2014:0233-01 openstack-packstack 2014-03-04

Comments (none posted)

openstack-swift: timing side-channel attack

Package(s):openstack-swift CVE #(s):CVE-2014-0006
Created:March 5, 2014 Updated:May 7, 2014
Description: From the CVE entry:

The TempURL middleware in OpenStack Object Storage (Swift) 1.4.6 through 1.8.0, 1.9.0 through 1.10.0, and 1.11.0 allows remote attackers to obtain secret URLs by leveraging an object name and a timing side-channel attack.

Ubuntu USN-2207-1 swift 2014-05-06
Red Hat RHSA-2014:0367-01 openstack-swift 2014-04-03
Red Hat RHSA-2014:0232-01 openstack-swift 2014-03-04

Comments (none posted)

otrs: JavaScript code execution

Package(s):otrs CVE #(s):CVE-2014-1695
Created:March 3, 2014 Updated:March 13, 2014
Description: From the Mageia advisory:

An attacker could send a specially prepared HTML email to OTRS. If he can then trick an agent into following a special link to display this email, JavaScript code would be executed.

openSUSE openSUSE-SU-2014:0360-1 otrs 2014-03-13
Mandriva MDVSA-2014:054 otrs 2014-03-13
Mageia MGASA-2014-0114 otrs 2014-03-02

Comments (none posted)

php: multiple vulnerabilities

Package(s):php5 CVE #(s):CVE-2013-7327 CVE-2013-7328 CVE-2014-2020
Created:March 4, 2014 Updated:March 5, 2014
Description: From the CVE entries:

The gdImageCrop function in ext/gd/gd.c in PHP 5.5.x before 5.5.9 does not check return values, which allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via invalid imagecrop arguments that lead to use of a NULL pointer as a return value, a different vulnerability than CVE-2013-7226. (CVE-2013-7327)

Multiple integer signedness errors in the gdImageCrop function in ext/gd/gd.c in PHP 5.5.x before 5.5.9 allow remote attackers to cause a denial of service (application crash) or obtain sensitive information via an imagecrop function call with a negative value for the (1) x or (2) y dimension, a different vulnerability than CVE-2013-7226. (CVE-2013-7328)

ext/gd/gd.c in PHP 5.5.x before 5.5.9 does not check data types, which might allow remote attackers to obtain sensitive information by using a (1) string or (2) array data type in place of a numeric data type, as demonstrated by an imagecrop function call with a string for the x dimension value, a different vulnerability than CVE-2013-7226. (CVE-2014-2020)

Gentoo 201408-11 php 2014-08-29
Mandriva MDVSA-2014:059 php 2014-03-14
Ubuntu USN-2126-1 php5 2014-03-03

Comments (none posted)

python-logilab-common: multiple unspecified temporary file vulnerabilities

Package(s):python-logilab-common CVE #(s):CVE-2014-1838 CVE-2014-1839
Created:February 28, 2014 Updated:March 19, 2014

Description: From the openSUSE advisory:

The Python logilab-common module was updated to fix several temporary file problems, one in the PDF generator (CVE-2014-1838) and one in the shellutils helper (CVE-2014-1839).

Fedora FEDORA-2014-3300 python-logilab-common 2014-03-19
Fedora FEDORA-2014-3300 python-astroid 2014-03-19
Fedora FEDORA-2014-3300 pylint 2014-03-19
Mageia MGASA-2014-0118 python-logilab-common 2014-03-03
openSUSE openSUSE-SU-2014:0306-1 python-logilab-common 2014-02-28

Comments (none posted)

python-tahrir: insecure openid login

Package(s):python-tahrir CVE #(s):
Created:March 4, 2014 Updated:March 5, 2014
Description: From the Fedora advisory:

Fix openid login from untrusted provider.

Fedora FEDORA-2014-2239 python-tahrir 2014-03-04
Fedora FEDORA-2014-2264 python-tahrir 2014-03-04

Comments (none posted)

subversion: denial of service

Package(s):subversion CVE #(s):CVE-2014-0032
Created:February 28, 2014 Updated:August 15, 2014

Description: From the Mageia advisory:

The mod_dav_svn module in Apache Subversion before 1.8.8, when SVNListParentPath is enabled, allows remote attackers to cause a denial of service (crash) via an OPTIONS request.

Debian-LTS DLA-207-1 subversion 2015-04-24
Mandriva MDVSA-2015:085 subversion 2015-03-28
Ubuntu USN-2316-1 subversion 2014-08-14
Mandriva MDVSA-2014:049 subversion 2014-03-10
Scientific Linux SLSA-2014:0255-1 subversion 2014-03-05
Oracle ELSA-2014-0255 subversion 2014-03-05
Oracle ELSA-2014-0255 subversion 2014-03-05
openSUSE openSUSE-SU-2014:0334-1 subversion 2014-03-06
CentOS CESA-2014:0255 subversion 2014-03-06
CentOS CESA-2014:0255 subversion 2014-03-06
Red Hat RHSA-2014:0255-01 subversion 2014-03-05
Mageia MGASA-2014-0105 subversion 2014-02-27
Slackware SSA:2014-058-01 subversion 2014-02-27
openSUSE openSUSE-SU-2014:0307-1 subversion 2014-02-28
Mageia MGASA-2014-0104 subversion 2014-02-27
Gentoo 201610-05 subversion 2016-10-11

Comments (none posted)

xen: multiple vulnerabilities

Package(s):xen CVE #(s):CVE-2014-1950 CVE-2013-2212
Created:March 3, 2014 Updated:March 5, 2014
Description: From the CVE entries:

Use-after-free vulnerability in the xc_cpupool_getinfo function in Xen 4.1.x through 4.3.x, when using a multithreaded toolstack, does not properly handle a failure by the xc_cpumap_alloc function, which allows local users with access to management functions to cause a denial of service (heap corruption) and possibly gain privileges via unspecified vectors. (CVE-2014-1950)

The vmx_set_uc_mode function in Xen 3.3 through 4.3, when disabling caches, allows local HVM guests with access to memory mapped I/O regions to cause a denial of service (CPU consumption and possibly hypervisor or guest kernel panic) via a crafted GFN range. (CVE-2013-2212)

Gentoo 201504-04 xen 2015-04-11
Debian DSA-3006-1 xen 2014-08-18
openSUSE openSUSE-SU-2014:0483-1 xen 2014-04-04
openSUSE openSUSE-SU-2014:0482-1 xen 2014-04-04
SUSE SUSE-SU-2014:0446-1 Xen 2014-03-25
SUSE SUSE-SU-2014:0373-1 Xen 2014-03-14
SUSE SUSE-SU-2014:0372-1 Xen 2014-03-14
Fedora FEDORA-2014-2862 xen 2014-03-02
Fedora FEDORA-2014-2802 xen 2014-03-02

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.14-rc5, released on March 2. Linus says: "Not a lot. Which is just how I like it. Go verify that it all works for you."

Stable updates: no stable updates have been released in the last week. The 3.13.6 and 3.10.33 updates are in the review process as of this writing; they can be expected on or after March 6.

Comments (none posted)

Quotes of the week

I guess this tips the balance from "you must be crazy to show the source code for your GPU and risk getting sued" to "how do you expect to stay in business without a free driver".
Arnd Bergmann

Honestly since no one cares enough to maintain the kernel code properly I really think we should just rip audit out instead trying to present userspace with the delusion that the code works, and will continue to work properly.
Eric Biederman

Comments (none posted)

Red Hat's dynamic kernel patching project

It seems that Red Hat, too, has a project working on patching running kernels. "kpatch allows you to patch a Linux kernel without rebooting or restarting any processes. This enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, users to log off, or scheduled reboot windows. It gives more control over uptime without sacrificing security or stability." It looks closer to ksplice than to SUSE's kGraft in that it patches out entire functions at a time.

Comments (16 posted)

SUSE Labs Director Talks Live Kernel Patching with kGraft

Libby Clark talks with Vojtech Pavlik, Director of SUSE Labs, about kGraft. "In this Q&A, Pavlik goes into more detail on SUSE's live kernel patching project; how the kGraft patch integrates with the Linux kernel; how it compares with other live-patching solutions; how developers will be able to use the upcoming release; and the project's interaction with the kernel community for upstream acceptance."

Comments (2 posted)

Broadcom releases SoC graphics driver source

Broadcom has announced the release of the source and documentation for its VideoCore IV graphics subsystem. This subsystem is found in the Raspberry Pi processor, among others. "The trend over the last decade has leaned towards greater openness in desktop graphics, and the same is happening in the mobile space. Broadcom — a long-time leader in graphics processors — is a frontrunner in this movement and aims to contribute to its momentum."

Comments (28 posted)

Kernel development news

Finding the proper scope of a file collapse operation

By Jonathan Corbet
March 5, 2014
System call design is never easy; there are often surprising edge cases that developers fail to consider as they settle on an interface. System calls involving filesystems seem to be especially prone to this kind of problem, since the complexity and variety of filesystem implementations means that there may be any number of surprises waiting for a developer who wants to create a new file-oriented operation. Some of these surprises can be seen in the discussion of a proposed addition to the fallocate() system call.

fallocate() is concerned with the allocation of space within a file; its initial purpose was to allow an application to allocate blocks to a file prior to writing them. This type of preallocation ensures that the needed space is available before trying to write the data that goes there; it can also help filesystem implementations lay out the allocated space more efficiently on disk. Later on, the FALLOC_FL_PUNCH_HOLE operation was added to deallocate blocks within a file, leaving a hole in the file.

In February, Namjae Jeon proposed a new fallocate() operation called FALLOC_FL_COLLAPSE_RANGE; this proposal included implementations for the ext4 and xfs filesystems. Like the hole-punching operation, it removes data from a file, but there is a difference: rather than leaving a hole in the file, this operation moves all data beyond the affected range to the beginning of that range, shortening the file as a whole. The immediate user for this operation would appear to be video editing applications, which could use it to quickly and efficiently remove a segment of a video file. If the removed range is block-aligned (which would be a requirement, at least for some filesystems), the removal could be effected by changing the file's extent maps, with no actual copying of data required. Given that files containing video data can be large, it is not hard to understand why an efficient "cut" operation would be attractive.

So what kinds of questions arise with an operation like this? One could start with the interaction with the mmap() system call, which maps a file into a process's address space. The proposed implementation works by removing all pages from the affected range to the end of the file from the page cache; dirty pages are written back to disk first. That will prevent the immediate loss of data that may have been written via a mapping, and will get rid of any memory pages that will be after the end of the file once the operation is complete. But it could be a surprise for a process that does not expect the contents of a file to shift around underneath its mapping. That is not expected to be a huge problem; as Dave Chinner pointed out, the types of applications that would use the collapse operation do not generally access their files via mmap(). Beyond that, applications that are surprised by a collapsed file may well be unable to deal with other modifications even in the absence of a collapse operation.

But, as Hugh Dickins noted, there is a related problem: in the tmpfs filesystem, all files live in the page cache and look a lot like a memory mapping. Since the page cache is the backing store, removing file pages from the page cache is unlikely to lead to a happy ending. So, before tmpfs could support the collapse operation, a lot more effort would have to go into making things play well with the page cache. Hugh was not sure that there would ever be a need for this operation in tmpfs, but, he said, solving the page cache issues for tmpfs would likely lead to a more robust implementation for other filesystems as well.

Hugh also wondered whether the uni-directional collapse operation should, instead, be designed to work in both directions:

I'm a little sad at the name COLLAPSE, but probably seven months too late to object. It surprises me that you're doing all this work to deflate a part of the file, without the obvious complementary work to inflate it - presumably all those advertisers whose ads you're cutting out, will come back to us soon to ask for inflation, so that they have somewhere to reinsert them.

Andrew Morton went a little further, suggesting that a simple "move these blocks from here to there" system call might be the best idea. But Dave took a dim view of that suggestion, worrying that it would introduce a great deal of complexity and difficult corner cases:

IOWs, collapse range is a simple operation, "move arbitrary blocks from here to there" is a nightmare both from the specification and the implementation points of view.

Andrew disagreed, claiming that a more general interface was preferable and that the problems could be overcome, but nobody else supported him on this point. So, chances are, the operation will remain confined to collapsing chunks out of files; a separate "insert" operation may be added in the future, should an interesting use case for it be found.

Meanwhile, there is one other behavioral question to answer: what happens if the region to be removed from the file reaches the end of the file? The current patch set returns EINVAL in that situation, with the idea that a call to truncate() should be used instead. Ted Ts'o asked whether such operations should just be turned directly into truncate() calls, but Dave is set against that idea. A collapse operation that includes the end of the file, he said, is almost certainly buggy; it is better to return an error in that case.

There are also, evidently, some interesting security issues that could come up if a collapse operation were allowed to include the end of the file. Filesystems can allocate blocks beyond the end of the file; indeed, fallocate() can be used to explicitly request that behavior. Those blocks are typically not zeroed out by the filesystem; instead, they are kept inaccessible so that whatever stale data is contained there cannot be read. Without a great deal of care, a collapse implementation that allowed the range to go beyond the end of the file could end up exposing that data, especially if the operation were to be interrupted (by a system crash, perhaps) in the middle. Rather than set that sort of trap for filesystem developers, Dave would prefer to disallow the risky operations from the beginning, especially since there does not appear to be any real need to support them.

So the end result of all this discussion is that the FALLOC_FL_COLLAPSE_RANGE operation is likely to go into the kernel essentially unchanged. It will not have all the capabilities that some developers would have liked to see, but it will support one useful feature that should help to accelerate a useful class of applications. Whether this will be enough for the long term remains to be seen; system call API design is hard. But, should additional features be needed in the future, new FALLOC_FL commands can be created to make them available in a compatible way.

Comments (13 posted)

Tracing unsigned modules

By Jake Edge
March 5, 2014

The reuse of one of the "tainted kernel" flags by the signed-module loading code has led to a problem using tracepoints in unsigned kernel modules. The problem is fairly easily fixed, but there was opposition to doing so, at least until a "valid" use case could be found. Kernel hackers are not particularly interested in helping out-of-tree modules, and fixing the problem was seen that way—at first, anyway.

Loadable kernel modules have been part of the kernel landscape for nearly 20 years (kernel 1.2 in 1995), but have only recently gained the ability to be verified by a cryptographic signature, so that only "approved" modules can be loaded. Red Hat kernels have had the feature for some time, though it was implemented differently than what eventually ended up in the kernel. Basically, the kernel builder can specify a key to be used to sign modules; the private key gets stored (or discarded after signing), while the public key is built into the kernel.

There are several kernel configuration parameters that govern module signing: CONFIG_MODULE_SIG controls whether the code to do signature checking is enabled at all, while CONFIG_MODULE_SIG_FORCE determines whether all modules must be signed. If CONFIG_MODULE_SIG_FORCE is not turned on (and the corresponding kernel boot parameter module.sig_enforce is not present), then modules without signatures or those using keys not available to the kernel will still be loaded. In that case, though, the kernel will be marked as tainted.
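A kernel built the way most distributions do it — checking signatures but not requiring them — would carry a configuration fragment along these lines (a sketch; the exact option set varies by kernel version):

```
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
CONFIG_MODULE_SIG_SHA512=y
```

With CONFIG_MODULE_SIG_FORCE unset, an unsigned module still loads, but the kernel gets tainted as described below.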

The taint flag used, though, is the same as that used when the user does a modprobe --force to force the loading of a module built for a different kernel version: TAINT_FORCED_MODULE. Force-loading a module is fairly dangerous and can lead to kernel crashes due to incompatibilities between the module's view of memory layout and the kernel's. That can lead to crashes that the kernel developers are not interested in spending time on. So, force-loading a module taints the kernel to allow those bug reports to be quickly skipped over.

But loading an unsigned module is not likely to lead to a kernel crash (or at least, not because it is unsigned), so using the TAINT_FORCED_MODULE flag in that case is not particularly fair. The tracepoint code discriminates against force-loaded modules because enabling tracepoints in mismatched modules could easily lead to a system crash. The tracepoint code does allow TAINT_CRAP (modules built from the staging tree) and TAINT_OOT_MODULE (out-of-tree modules) specifically, but tracepoints in the modules get silently disabled if there is any other taint flag.
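The tracepoint code's check is effectively a bitmask test: tracing stays enabled only if every taint bit set by the module is one of the two explicitly allowed ones. A minimal Python model of that logic (the flag values here are illustrative; the kernel assigns its own bit positions):

```python
# Illustrative bit positions, not the kernel's actual values.
TAINT_FORCED_MODULE = 1 << 0
TAINT_CRAP          = 1 << 1  # staging-tree modules
TAINT_OOT_MODULE    = 1 << 2  # out-of-tree modules

ALLOWED = TAINT_CRAP | TAINT_OOT_MODULE

def tracepoints_enabled(module_taint):
    """Tracing survives only if no taint bit outside the allowed
    set is present on the module."""
    return (module_taint & ~ALLOWED) == 0

assert tracepoints_enabled(0)
assert tracepoints_enabled(TAINT_OOT_MODULE)
# An unsigned module reuses TAINT_FORCED_MODULE, so its tracepoints
# are silently disabled -- the problem described above.
assert not tracepoints_enabled(TAINT_FORCED_MODULE)
```

Adding a separate TAINT_UNSIGNED_MODULE bit to the allowed set, as Desnoyers's patch does, is all it takes to re-enable tracing for unsigned modules without also blessing force-loaded ones.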

Mathieu Desnoyers posted an RFC patch to change that situation. It added a new TAINT_UNSIGNED_MODULE flag that got set when those modules were loaded. It also changed the test in the tracepoint code to allow tracing for the new taint type. It drew an immediate NAK from Ingo Molnar, who did not find Desnoyers's use case at all compelling: "External modules should strive to get out of the 'crap' and 'felony law breaker' categories and we should not make it easier for them to linger in a broken state."

But the situation is not as simple as Molnar seems to think. There are distribution kernels that turn on signature checking, but allow users to decide whether to require signatures by using module.sig_enforce. Since it is the distribution's key that is stored in the kernel image, strict enforcement would mean that only modules built by the distribution could be loaded. That leaves out a wide variety of modules that a user might want to load: modules under development, back-ported modules from later kernels, existing modules being debugged, and so on.

Module maintainer Rusty Russell was fairly unimpressed with the arguments given in favor of the change, at least at first, noting that the kernel configuration made it clear that users needed to arrange to sign their own modules: "Then you didn't do that. You broke it, you get to keep both pieces." That's not exactly the case, though, since CONFIG_MODULE_SIG_FORCE was not set (nor was module.sig_enforce passed on the kernel command line), so users aren't actually required to arrange for signing. But Russell was looking for "an actual valid use case".

The problem essentially boils down to the fact that the kernel is lying when it uses TAINT_FORCED_MODULE for a module that, in fact, hasn't been forced. Steven Rostedt tried to make that clear: "Why the hell are we setting a FORCED_MODULE flag when no module was forced????". He also noted that he is often the one to get bug reports from folks whose tracepoints aren't showing up because they didn't sign their module. As he pointed out, it is a silent failure and the linkage between signed modules and tracepoints is not particularly obvious.

Johannes Berg was eventually able to supply the kind of use case Russell was looking for, though. In his message, he summarized the case for unsigned modules nicely:

The mere existence of a configuration to allow unsigned modules would indicate that there are valid use cases for that (and rebuilding a module or such for development would seem to be one of them), so why would tracing be impacted, particularly for development.

Berg also provided another reason for loading unsigned modules: backported kernel modules from the wiki to support hardware (presumably, in his case, wireless network hardware) features not present in the distribution-supplied drivers. He was quite unhappy to hear those kinds of drivers, which "typically only diverge from upstream by a few patches", characterized as crap or law-breaking.

Berg's use case was enough for Russell to agree to the change and to add it to his pending tree. We should see Desnoyers's final patch, which has some cosmetic changes from the RFC, in 3.15. At that point, the kernel will be able to distinguish between these two different kinds of taint and users will be able to trace modules they have loaded, signed or unsigned.

Comments (27 posted)

Optimizing VMA caching

By Jonathan Corbet
March 5, 2014
The kernel divides each process's address space into virtual memory areas (VMAs), each of which describes where the associated range of addresses has its backing store, its protections, and more. A mapping created by mmap(), for example, will be represented by a single VMA, while mapping an executable file into memory may require several VMAs; the list of VMAs for any process can be seen by looking at /proc/PID/maps. Finding the VMA associated with a specific virtual address is a common operation in the memory management subsystem; it must be done for every page fault, for example. It is thus not surprising that this mapping is highly optimized; what may be surprising is the fact that it can be optimized further.

The VMAs for each address space are stored in a red-black tree, which enables a specific VMA to be looked up in logarithmic time. These trees scale well, which is important; some processes can have hundreds of VMAs (or more) to sort through. But it still takes time to walk down to a leaf in a red-black tree; it would be nice to avoid that work at least occasionally if it were possible. Current kernels work toward that goal by caching the results of the last VMA lookup in each address space. For workloads with any sort of locality, this simple cache can be quite effective, with hit rates of 50% or more.

But Davidlohr Bueso thought it should be possible to do better. Last November, he posted a patch adding a second cache holding a pointer to the largest VMA in each address space. The logic was that the VMA with the most addresses would see the most lookups, and his results seemed to bear that out; with the largest-VMA cache in place, hit rates went to over 60% for some workloads. It was a good improvement, but the patch did not make it into the mainline. Looking at the discussion, one can quickly come up with a useful tip for aspiring kernel developers: if Linus responds by saying "This patch makes me angry," the chances of it being merged are relatively small.

Linus's complaint was that caching the largest VMA seemed "way too ad-hoc" and wouldn't be suitable for a lot of workloads. He suggested caching a small number of recently used VMAs instead. Additionally, he noted that maintaining a single cache per address space, as current kernels do, might not be a good idea. In situations where multiple threads are running in the same address space, it is likely that each thread will be working with a different set of VMAs. So making the cache per-thread, he said, might yield much better results.

A few iterations later, Davidlohr has posted a VMA-caching patch set that appears to be about ready to go upstream. Following Linus's suggestion, the single-VMA cache (mmap_cache in struct mm_struct) has been replaced by a small array called vmacache in struct task_struct, making it per-thread. On systems with a memory management unit (almost all systems), that array holds four entries. There are also new sequence numbers stored in both struct mm_struct (one per address space) and in struct task_struct (one per thread).

The purpose of the sequence numbers is to ensure that the cache does not return stale results. Any change to the address space (the addition or removal of a VMA, for example) causes the per-address-space sequence number to be incremented. Every attempt to look up an address in the per-thread cache first checks the sequence numbers; if they do not match, the cache is deemed to be invalid and will be reset. Address-space changes are relatively rare in most workloads, so the invalidation of the cache should not happen too often.

Every call to find_vma() (the function that locates the VMA for a virtual address) first does a linear search through the cache to see if the needed VMA is there. Should the VMA be found, the work is done; otherwise, a traversal of the red-black tree will be required. In this case, the result of the lookup will be stored back into the cache. That is done by overwriting the entry indexed by the lowest bits of the page-frame number associated with the original virtual address. It is, thus, a random replacement policy for all practical purposes. The caching mechanism is meant to be fast so there would probably be no benefit from trying to implement a more elaborate replacement policy.
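The scheme can be sketched in miniature. The model below is plain Python, not kernel code: a four-entry per-thread cache indexed by the low bits of the page-frame number, with a flat list standing in for the red-black tree and a sequence-number comparison standing in for the invalidation check:

```python
PAGE_SHIFT = 12  # 4KB pages

class AddressSpace:
    """A stand-in for struct mm_struct."""
    def __init__(self):
        self.seqnum = 0  # bumped on any VMA addition or removal
        self.vmas = []   # (start, end) tuples; stands in for the
                         # kernel's red-black tree

    def add_vma(self, start, end):
        self.vmas.append((start, end))
        self.seqnum += 1  # invalidates every thread's cache

class Thread:
    """A stand-in for the per-thread state in struct task_struct."""
    def __init__(self, mm):
        self.mm = mm
        self.seqnum = -1
        self.cache = [None] * 4  # the per-thread vmacache

    def find_vma(self, addr):
        if self.seqnum != self.mm.seqnum:  # stale: reset the cache
            self.cache = [None] * 4
            self.seqnum = self.mm.seqnum
        idx = (addr >> PAGE_SHIFT) & 3     # low bits of the PFN
        vma = self.cache[idx]
        if vma and vma[0] <= addr < vma[1]:
            return vma                     # cache hit
        for vma in self.mm.vmas:           # slow path: "tree" walk
            if vma[0] <= addr < vma[1]:
                self.cache[idx] = vma      # effectively random replacement
                return vma
        return None
```

A second lookup of the same address hits the cache; any add_vma() call bumps the sequence number and forces a reset on the next lookup, just as described above.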

How well does the new scheme work? It depends on the workload, of course. For system boot, where almost everything running is single-threaded, Davidlohr reports that the cache hit rate went from 51% to 73%. Kernel builds, unsurprisingly, already work quite well with the current scheme with a hit rate of 75%, but, even in this case, improvement is possible: that rate goes to 88% with Davidlohr's patch applied. The real benefit, though, can be seen with benchmarks like ebizzy, which is designed to simulate a multithreaded web server workload. Current kernels find a cached VMA in a mere 1% of lookup attempts; patched kernels, instead, show a 99.97% hit rate.

With numbers like that, it is hard to find arguments for keeping this patch out of the mainline. At this point, the stream of suggestions and comments has come to a halt. Barring surprises, a new VMA lookup caching mechanism seems likely to find its way into the 3.15 kernel.

Comments (14 posted)

Patches and updates

Kernel trees

Core kernel code

Development tools

Device drivers

Memory management
Page editor: Jonathan Corbet


Does Fedora need a system-wide crypto policy?

By Nathan Willis
March 5, 2014

The Fedora project is debating a number of proposals for changes to implement in Fedora 21. One of the proposals is to implement a "cryptographic policy" framework, so that a single setting could be used to configure the security options for a suite of related lower-level cryptographic libraries. But not everyone is sure that it is possible to craft a set of cross-library settings that are both meaningful and easy to understand. If such settings require digging into the details of every affected library anyway, they do not add much value, after all. Conversely, if there is never a good use case for settings other than "be as secure as possible," then a policy framework may be overkill.

Jaroslav Reznik sent the proposal to the fedora-devel list on February 27, although the "owner" of the proposal is Nikos Mavrogiannopoulos. The proposal is to offer a set of pre-defined "security levels," each of which encompasses settings for all of Fedora's major cryptographic libraries—initially GnuTLS, OpenSSL, and Network Security Services (NSS). The system administrator could then select a security level and expect a consistent level of protection for all of the applications using the libraries.

Each individual level would set a number of configuration options, including the ciphers and key exchange algorithms available for use, the preferred order of ciphers and algorithms, the allowable protocol versions, and other parameters or options enabled (such as TLS safe renegotiation). In the proposal, the example level names include some that relate to cipher strength (LEVEL-80, LEVEL-112, LEVEL-128, LEVEL-256, each corresponding to the bit-size of the ciphers used), and some that relate to government or international specifications (ENISA-LEGACY and ENISA-FUTURE for the European Network and Information Security Agency, plus SUITEB-128 and SUITEB-256 for the NSA's Suite B). The administrator would thus only need to select the proper level for the system, rather than setting every individual property and option. Nevertheless, the exact contents of these levels are not spelled out in the proposal.

The proposed implementation plan is to create a configuration option called SYSTEM for each application that relies on one of the libraries—for example, a dummy cipher named SYSTEM. That configuration option would then function as a reference to the system-wide settings stored in some well-known location like /etc/crypto-profiles/config. Although tuning the application with the SYSTEM setting will default to the specific security level configured by the system administrator, it would still be possible to specify override options for the individual application.

How the SYSTEM cipher setting is defined for each of the libraries will vary, of course. The proposal also notes that the security level approach is designed to ensure consistent behavior for applications that rely on automatic security settings or do not have user-configurable options.
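As a concrete illustration (entirely hypothetical, since the proposal does not fix a syntax for either file), the system-wide configuration might name a level, and an application's own configuration would defer to it through the dummy SYSTEM cipher; the CipherString keyword here is borrowed from OpenSSL's style purely for the example:

```
# /etc/crypto-profiles/config (hypothetical syntax)
LEVEL = LEVEL-128

# An application's TLS configuration: defer to the system policy...
CipherString = SYSTEM
# ...or, as the proposal allows, override it for this application:
# CipherString = SYSTEM:!3DES
```

The point of the indirection is that changing the one LEVEL line retunes every application that uses the SYSTEM setting.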

Nevertheless, there were still plenty of pointed questions asked in reply on the fedora-devel list. Bill Nottingham asked how the configuration options and choices could be made meaningful to administrators so that they could make a sufficiently informed decision:

For example although I 'know' what SUITEB might refer to, it still amounts to 'a set of algorithms the NSA deems sufficient'; it does not give me any meaningful knowledge to compare it to other settings. And for all I know I'm above the curve on understanding what some of these are; your typical administrator is likely to know even less. If they're merely described in terms of what they represent - is it going to make the choice clearer, or not?

Along related lines, Richard Jones asked why there would need to be multiple options at all, rather than just configuring a secure default setting.

Why wouldn't you always want to choose the most secure one?

I believe the proposal is trying to answer this question here:

'It may be that setting a high security level could prevent applications that connected to servers below that level to connect'

but it's rather unclear, and could do with at least more explanation and ideally some examples of things that wouldn't work.

In any case, I can't imagine I'd ever want a Fedora machine that wasn't 'most secure' (discounting external connections).

To those criticisms, Mavrogiannopoulos replied that there was a need for administrators to make a resource trade-off decision. For most servers today, he said, security settings on par with 64- or 80-bit key sizes are the norm, but a particular system might want a different setting depending on its uses.

Andrew Lutomirski raised a more practical implementation question, asking what would break "if the administrator does something silly like setting LEVEL-256." After all, he said, AES-256 is cryptographically weak (at least according to Bruce Schneier), but the 256-bit-key alternatives (Salsa20 and ChaCha20) are not widely supported. Thus, setting a high system-level default would break a lot of applications by leaving them with no usable ciphers enabled. Perhaps, he continued, it would be more meaningful to provide security levels that correspond to different risks of attack, such as "probably already broken by people with deep pockets," "probably safe against classical computers for a long time," and "probably safe against quantum computers for a long time."

Mavrogiannopoulos responded that, yes, a lot would break if the administrator set the security level too high, "but I don't think we protect from someone doing rm -fr / either :)." He also argued that a lot of the theoretical and academic attacks against ciphers are not practical concerns, referencing the same Schneier blog entry, where Schneier comments that scenarios like related-key attacks and attacks that break reduced-round variants are not real-world causes for panic.

Eventually, Lutomirski argued that only full control over the ciphers, options, modes, and hashes would provide real security—to which Mavrogiannopoulos replied that the whole point of the proposal was to spare the administrator from having to specify all of the settings, by creating a set of well-defined presets.

A few other questions came up in the debate. Omair Majid noted that there are crypto-using applications that do not rely on GnuTLS, OpenSSL, or NSS (mainly Java). Miloslav Trmač commented that the project would need to decide whether or not the meaning of the predefined levels would be fixed permanently or could be updated: "Will we remove a weak cipher from an existing level (ever / during a single Fedora release)? Will we add a cipher to a level (ever / during a single Fedora release)?" He also asked whether the proposal would mean patching applications that currently do not specify any preferences about cryptography, "i.e., packages that probably don't care too much about the specifics."

The amount of patching that it would require to implement a system-wide cryptographic policy was raised in a number of other comments; Mavrogiannopoulos agreed that it would entail a considerable amount of work to implement. At this stage, an implementation plan that assesses the amount of work required seems to be the major missing piece, and it is one that could conceivably throw a wrench into the short-term plan. Most people (aside from Lutomirski) agreed that the ability to define a system-wide "minimum cryptographic level," so to speak, would be a valuable addition to Fedora.

How much control should be allowed, and precisely what each policy level means, are the sorts of details that can turn into bikeshedding arguments. But even if there is agreement on those points, actually updating every application that uses a standard crypto library is a major undertaking, one that will require brute force of the non-cryptographic variety.

Comments (2 posted)

Brief items

Distribution quotes of the week

The question that was before us, and which is now likely to be before the project as a GR, is not whether we approve of standard interfaces and multiple implementations. Everyone, from the GNOME upstream to the GNOME package maintainers through the systemd maintainers, has indicated support for allowing multiple implementations of standard interfaces. Rather, the question that was before us was about the error case. When circumstances arise where those multiple implementations do not exist, who bears the burden of creating them?
-- Russ Allbery

However, please no file system smörgåsbord in the guided option. The ice cream truck offers a fantastic vanilla ice cream waffle cone. Please go inside for counter service if you want it in a cup, different flavors, sprinkles, flambé, or with sparklers attached.
-- Chris Murphy (Thanks to Matthew Miller)

Man I feel for you. I have the same problem with technological progress. I mean, I still remember having to transition to touch tone phones from rotary phones. Took me months of retraining to finally feel comfortable. I even hired a PT specialist to help with that.

And whoa don't even get me started with the transition from punchcards to teletype. Even today, when I'm sitting down to write any code...I still want to punch holes in things...over and over and over and over again. I bet a lot of us still do... the need to stab holes into things when writing code is so instinctive, so natural.

And truth be known, I still find myself wondering how I'm going to get the 8-track cassette into the cd-player slot in my car sometimes. I don't even know why I have those cassettes in the car still, they aren't even mine, they were hand me downs (like a lot of the craptastic initscripts at work I deal with actually)... nostalgia I guess. The feel of those cassettes, the weight of them, just so comforting.. ya' know. So yeah, I totally get where you are coming from with your fondness for your 30 year old initscripts.

-- Jef Spaleta

Comments (15 posted)

Introducing: Debian for OpenRISC

Christian Svensson has announced a version of Debian for the OpenRISC open-source processor. "Some people know that I've been working on porting Glibc and doing some toolchain work. My evil master plan was to make a Debian port, and today I'm a happy hacker indeed! Below is a link to a screencast of me installing Debian for OpenRISC, installing python2.7 via apt-get (which you shouldn't do in or1ksim, it takes ages! (but it works!)) and running a small Python script." (Thanks to Paul Wise.)

Comments (2 posted)

The first Ubuntu 14.04 'Trusty Tahr' beta

The first beta release for the upcoming Ubuntu 14.04 long-term support release is available for testing in a number of flavors: "This beta features images for Edubuntu, Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu Studio, Xubuntu and the Ubuntu Cloud images."

Full Story (comments: 21)

Whonix 8 Released

Whonix Anonymous Operating System 8 has been released. "Whonix is an operating system focused on anonymity, privacy and security. It's based on the Tor anonymity network, Debian GNU/Linux and security by isolation. DNS leaks are impossible, and not even malware with root privileges can find out the user's real IP." Scroll down to find the changelog in the announcement.

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Second Debian init system vote concludes

The second vote by the Debian technical committee addressed init system coupling. Bdale Garbee has announced the results of that vote. "With all 8 votes cast, this CFV on the init system coupling issue has ended in a tie between options "L" and "N". Given my vote on this issue, it should be no surprise that I use my casting vote to declare option "N" is the winner." (Thanks to Josh Triplett)

Option N: "The TC chooses to not pass a resolution at the current time about whether software may require specific init systems."

Comments (142 posted)

A general resolution proposed for Debian init system coupling

For some time, the proposal of a general resolution on the Debian init system question has seemed to be nearly inevitable. On February 28, Matthew Vernon duly proposed a GR to override the technical committee on the "coupling" issue. "This GR seeks to preserve the freedom of our users now to select an init system of their choice, and the project's freedom to select a different init system in the future. It will avoid Debian becoming accidentally locked in to a particular init system (for example, because so much unrelated software has ended up depending on a particular init system that the burden of effort required to change init system becomes too great)."

Interestingly, this GR may never come to a vote. A general resolution must be sponsored by at least five other Debian developers. As of this writing, only one developer (Ian Jackson) has sponsored this proposal. It seems that, perhaps, the Debian community has finally tired of this discussion and is ready to move on.

Full Story (comments: none)

bits from the DPL -- (end of January) + February 2014

Lucas Nussbaum has a few bits about his Project Leader activities, covering late January and February 2014. Topics include policy editors delegation, Secretary appointment, copyright assignment for Debian contributors, evaluation criteria for Trusted Organizations, how-can-i-help updates, and more.

Full Story (comments: none)

Debian Project Leader Elections 2014: Call for nominations

Debian Project Secretary Kurt Roeckx kicks off the Debian Project Leader election with the call for nominations. Nominations close March 9, and are followed by campaigning (March 10-30) and voting (March 31-April 13).

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Josefsson: Replicant 4.2 on Samsung S3

On his blog, Simon Josefsson describes the process of updating Replicant, the free-software-only Android-based mobile phone firmware project, from version 4.0 to 4.2. "I spent some time researching how to get the various non-free components running. This is of course sub-optimal, and the Replicant project does not endorse non-free software. Alas there aren’t any devices out there that meets my requirements and use only free software. Personally, I feel using a free core OS like Replicant and then adding some non-free components back is a better approach than using CyanogenMod directly, or (horror) the stock ROM. Even better is of course to not add these components back, but you have to decide for yourselves which trade-offs you want to make."

Comments (none posted)

Page editor: Rebecca Sobol


A JIT for grepping: jrep and rejit

March 5, 2014

This article was contributed by Alexandre Rames

Jrep is a grep-like program powered by the rejit library. So it is "just another" grepping program, except that it is quite fast. Opportunities to improve regular expression matching performance are interesting for the speed gain itself, but also for the technical aspects behind the scenes. This article introduces jrep and rejit by showing the time taken by jrep and GNU grep to recursively grep through the Linux kernel source for different regular expressions.

Note that jrep is a sample program implemented to showcase the rejit library. The rejit library is actually a Just-In-Time (JIT) compiler: at runtime it generates tailored machine code to match each regular expression. Both jrep and rejit are still in development, so jrep is far from being as fully featured as GNU grep (one of my favorite programs), and undoubtedly not as stable or as tested. Now that this is clear, let's look at what jrep and rejit can do.

Benchmark results

Benchmarks were run on an Ubuntu 13.10 machine, with 32GB of RAM, and a 12-core Intel i7-4930K CPU @ 3.40GHz (which supports SSE4.2). The engines are timed to grep through the Linux kernel 3.13.5 source, using a script (available in the rejit repository) generating commands like:

    $ time -p engine --recursive --with-filename --line-number regexp linux-3.13.5/ > /dev/null

Results show the best out of five runs for each engine and regular expression. Running multiple times ensures that the filesystem caches the files we are processing. The engine versions used are:
    $ cd <rejit_repo> && git rev-parse HEAD
    $ grep --version | head -n1
    grep (GNU grep) 2.17

[Regular expression graph]

The graphs presented show the time required by grep and jrep to search through the Linux kernel source for different regular expressions (regexps), indicated just above their associated graph bars in the extended regular expression syntax. The total time required by each engine is split between time spent in user and time spent in sys.

The user time is spent in user space, in our case mostly processing the files. The sys time is spent in the kernel. Jrep spends 75% of that time (computed for the regexp foobar) walking the file tree (using ftw) and mapping files to memory. Not considering kernel or ftw improvements, this 75% of sys can be seen as incompressible time required to get the files ready for processing. The other 25% are used elsewhere in the "matching" section of the program. Grep operates under the same constraint (using fts and manual buffer reading), but consistently shows less time spent in sys, so there are likely things to improve on this side for jrep.

When looking for unsigned, jrep slows down significantly due to the high number of matches. Some profiling is needed to figure out where the time is lost in this situation. For other strings it behaves well. The second regexp, éphémère, shows matching of non-ASCII characters. Rejit currently only partially supports UTF-8: non-ASCII characters are not yet supported in character classes (e.g. [çé0-9]).

Simple alternations of words show a similar trend:


The third alternation allows for common substring extraction: the three alternated words contain the substring def. That makes it possible to search for the substring and, when it is found, to complete the match from there. Both grep and jrep do so (via different mechanisms, out of scope for this article). Many engines do not perform such extractions, or only perform prefix and/or suffix extraction.

Both jrep and grep handle more complex regular expressions quite well:

[Complex regular expression graph]

Many engines would try matching the regexps from the beginning, which can be inefficient. Similarly to the substring extraction above, both grep and jrep manage to look for the easier part of the regexps first and complete the match from there. For [a-z]{3,10}.*gnu.*[a-z]{3,10}, jrep generates code that first looks for gnu, and after it is found, goes backward and then forward to match the whole expression.

More complicated alternations show an area where rejit could improve:

[Slow alternations graph]

The handling of more complex alternations is a known (relatively) weak point of jrep (more precisely of rejit) in need of improvement. Grep uses a smart Boyer-Moore algorithm. To look for aaa|bbb|ccc at position p, it looks up the character at p + 2, and if it is not a, b, or c, knows it can jump three characters ahead to p + 3 (and then look at the character at p + 5).
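That skip heuristic is easy to render in a few lines. The sketch below is an illustration of the idea, not grep's implementation: since every pattern consists only of the characters a, b, and c, a text character outside that set cannot be part of any match, so every candidate position overlapping it can be ruled out at once:

```python
def find_alternation(text):
    """Search for aaa|bbb|ccc using the skip described above: if the
    character at p + 2 appears in none of the patterns, no match can
    overlap that position, so p, p + 1, and p + 2 are all ruled out
    and the search jumps ahead three characters."""
    patterns = ("aaa", "bbb", "ccc")
    p = 0
    while p + 3 <= len(text):
        if text[p + 2] not in "abc":
            p += 3          # skip three positions in one step
            continue
        if text[p:p + 3] in patterns:
            return p        # match found at position p
        p += 1              # otherwise advance one position
    return -1

assert find_alternation("xyzxybbbzz") == 5
assert find_alternation("hello world") == -1
```

On text where matches are rare, the loop advances three characters per probe most of the time, which is where the speedup comes from.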

On the other hand, like for single strings, rejit handles alternations simply: it applies brute force. But it does so relatively efficiently, so the performance is still good. To search for aaa|bbb|ccc at some position p in the text, rejit performs operations like:

    loop:
      find 'aaa' at position p
      if found goto match
      find 'bbb' at position p
      if found goto match
      find 'ccc' at position p
      if found goto match
      increment position and goto loop

The complexity is proportional to the number of alternations. Worse, when the number of alternated expressions exceeds a threshold (i.e. when the compiler cannot allocate a register per alternated expression), rejit falls back to some slow default code. This is what happens for the two regexps with eight or more alternated strings. The code generation should be fixed to allow an arbitrary number of alternated strings.

In this situation, the initial multi-threading support in jrep can help. Multi-threading can be enabled with the -j (an alias for --jobs) option. Passing -jN will make jrep run with one thread listing the files to process in a queue, and N threads popping filenames from this list and processing them separately. Access to stdout is protected by a lock to ensure the results are printed "by file" and not interleaved.
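That producer/consumer arrangement can be sketched in Python (jrep itself is not written in Python, and the names here are illustrative). One lister thread feeds a queue of filenames; N workers pop names, scan the content, and record their results under a lock, standing in for the stdout lock that keeps jrep's output grouped by file:

```python
import queue
import threading

def search_files(files, pattern, num_workers=4):
    """files: a dict mapping filename -> content, standing in for
    the filesystem walk. Returns {filename: [matching line numbers]}."""
    work = queue.Queue()
    out_lock = threading.Lock()
    results = {}

    def lister():
        for name in files:              # jrep walks the tree instead
            work.put(name)
        for _ in range(num_workers):
            work.put(None)              # one end-of-work sentinel each

    def worker():
        while True:
            name = work.get()
            if name is None:
                break
            hits = [i + 1 for i, line in enumerate(files[name].splitlines())
                    if pattern in line]
            if hits:
                with out_lock:          # results emitted "by file"
                    results[name] = hits

    threads = [threading.Thread(target=lister)]
    threads += [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

As the benchmark results suggest, this only pays off when the per-file matching work is large enough to amortize the queue and lock traffic.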

As the results show, enabling multi-threading must be done carefully. Regular expressions that spend a lot of time matching in user space see improved performance, but regular expressions that spend little time matching may not benefit from the extra threads, or may even see decreased performance. When multi-threading is used, only the real time (the time observed by the user) is reported, since the sys and user times are summed over all cores and would exceed the observed running time. None of the regexps in the previous graphs is conducive to multi-threading, except for the longer-running "unsigned" case. The multi-threading support is not mature yet, and, again, profiling is needed to see what is happening. Ideally, useless additional threads should stay idle and not hurt performance.

GNU grep does not provide a multi-threading option.

About rejit and jrep

Not counting the underlying library, jrep is only about 500 lines of code. After jrep parses the command-line options, the file tree is traversed; each file is mapped to a memory buffer and passed to rejit, which finds all the matches at once. If any matches are found, jrep prints the results and carries on to the next file.

Note again that rejit is a prototype library. Many things are still unsupported. Notably, it only supports the x86_64 architecture. That being said, it has a few interesting features.

Like grep, it is non-backtracking: it uses finite automata to represent regular expressions, a technique introduced into programming by Ken Thompson. For details, see this excellent article by Russ Cox; in short, it does not use an algorithm with exponential worst-case complexity. So, on the machine I am writing this article on, to check whether the regexp (a?){25}a{25} matches the string aaaaaaaaaaaaaaaaaaaaaaaaa ("a" 25 times), rejit takes less than 0.001 second, while Google's V8 JavaScript engine (whose backtracking regexp engine is used in Chrome and Android) takes about three seconds. With 26 "a"s instead, V8 takes twice as long, twice as long again for 27 "a"s, and so on.
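The pathological pattern is easy to reproduce. The sketch below builds (a?){n}a{n} and matches it with std::regex, whose common implementations are backtracking; n is kept deliberately small here, because with a backtracking engine the match time roughly doubles for each extra "a", while a non-backtracking engine such as rejit stays fast:

```cpp
#include <cassert>
#include <regex>
#include <string>

// Build the pattern (a?){n}a{n} and match it against a string of n 'a's.
// The match always succeeds; what varies between engine designs is how
// long it takes as n grows (exponential for backtracking engines).
bool pathological_match(int n) {
    std::string pat = "(a?){" + std::to_string(n) + "}a{"
                    + std::to_string(n) + "}";
    return std::regex_match(std::string(n, 'a'), std::regex(pat));
}
```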

The specifications of the machine do not matter here. The point is that some regular expressions cannot be handled well by backtracking engines. In practice, the kind of regular expression that triggers this slow behaviour may not be very useful, but it is a real risk whenever the program does not control which regular expressions are run (e.g. a user-facing search feature that accepts regexps).

The common back-reference feature ((foo|bar)\1 matches foofoo and barbar) cannot be implemented without backtracking; matching with back-references is NP-complete. However, as Cox notes in more detail, that does not justify choosing backtracking as the general algorithm: backtracking can be used locally to handle back-references.
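The back-reference example from the text can be checked directly with std::regex (whose ECMAScript grammar supports \1); the function name repeats is mine:

```cpp
#include <cassert>
#include <regex>
#include <string>

// (foo|bar)\1 matches a captured word immediately repeated: the engine
// must remember what the group matched, which is what forces backtracking.
bool repeats(const std::string& s) {
    static const std::regex re("(foo|bar)\\1");
    return std::regex_match(s, re);
}
```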

As seen in the benchmark results above, rejit is also fast (or at least can be). Generating code on the fly makes it possible to produce code tailored to every regular expression processed. In particular, the JIT can detect at run time which CPU instructions are available and generate efficient code for the performance-critical sections. On this machine, rejit uses the Intel SSE4.2 extension. Outside of those critical sections, the generated code is still rather poor and could benefit from a lot of optimization — but that is work for later.

The project was started as a JIT because I knew V8's regexp engine was a backtracking JIT, and I wanted to see if I could implement a non-backtracking one. Since I was unable to support all architectures at once, I first had to choose one architecture to support; I had quite a bit of experience with ARM, but not much with Intel, so x86_64 it was.

With hindsight, I believe the best solution would be to implement an architecture-independent engine (in C/C++) and then extend it locally with JIT code. That would produce an engine working on all platforms, while most of the current performance could be achieved by implementing just a few pieces of assembly; more assembly stubs could then be added to speed up normally slow-to-match regular expressions.

If the project finds the resources to do so, that plan would be ideal. Otherwise I will likely focus half of my time on fixing bugs, and half on implementing features that I find interesting. The next area will probably be sub-matches and back-references. Some missing features and ideas are on the project TODO list. The library and sample programs (including jrep) are available under the GPLv3 license, and help is welcome.

The home page contains more information, some benchmarks showing other situations where rejit performs well (e.g. DNA matching), and some documentation. There is also an article that introduces the mechanisms used in rejit. I would be glad to discuss more about the project on the mailing list or by email (see the website).

Comments (21 posted)

Brief items

Quotes of the week

In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software.

We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.

Eric S. Raymond (hat tip to Paul Wise)

If we had a blue box with a madman inside, who we could persuade to bring our browsing history back from the future, implementing Bélády's algorithm might be possible. Failing a perfect solution we are reduced to making a judgement based on the information available to us.
Vincent Sanders

Comments (none posted)

Bash-4.3 available

Bash version 4.3 has been released, incorporating a number of important bugfixes and new features. Among the bugs fixed, the most important "is the reworking of signal handling to avoid running signal and trap handlers in a signal handler context. This led to issues with glibc, which uses internal locks extensively and handles longjmps from user code very poorly." Among the most noteworthy new features are the "globasciiranges" option, "which forces the pattern matching code to treat [a-z] as if in the C locale," improvements to the `direxpand' option introduced in Bash 4.2, and support for negative subscripts when assigning and referencing indexed array elements.

Full Story (comments: none)

Buildroot 2014.02 released

Release 2014.02 of the Buildroot cross-compilation tool is available. This release cleans up a number of environment variable names, adds support for external packages, and adds the support infrastructure necessary for Python and Luarocks packages.

Full Story (comments: none)

GNU autoconf archive 2014.02.28 available

Version 2014.02.28 of the GNU autoconf archive has been released. Many new macros have been added, in addition to new options for existing macros such as AX_PERL_EXT, AX_EXT, and AX_LUA.

Full Story (comments: none)

PulseAudio 5.0 available

Version 5.0 of the PulseAudio sound server has been released. The release notes highlight a number of new features, including a new implementation of tunnel modules and support for BlueZ 5.0's Advanced Audio Distribution Profile (A2DP). Notably, though, the A2DP support comes at a price: "The BlueZ project also decided to drop support for the HSP and HFP profiles, which were the profiles responsible for handling telephony audio. If you have a headset, its microphone won't work with BlueZ 5, because the microphone is only supported by the HSP and HFP profiles."

Full Story (comments: none)

Krita 2.8.0 released

Version 2.8.0 of the Krita painting application is out. New features include improved tablet support, high-quality scaling, integration with the "Gemini" sketch application, a new wrap-around mode, and much more.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

What is good video editing software on Linux? (Xmodulo)

Xmodulo presents a brief overview of ten video editing applications available for Linux. "I will not cover subjective merits such as usability or interface design, but instead highlight notable features of each video editor."

Comments (18 posted)

Page editor: Nathan Willis


Brief items

FSF, SFLC and OSI to fight software patents in U.S. Supreme Court

The Free Software Foundation has joined forces with the Software Freedom Law Center and the Open Source Initiative in filing an amicus brief in the software patent case Alice Corp. v. CLS Bank before the United States Supreme Court. "The jointly filed brief argues that the "machine or transformation" inquiry employed by the Court in Bilski v. Kappos is the correct, and exclusive, bright line test for patent eligibility of computer-implemented inventions. It says that not only do software idea patents fail established tests for patentability; they also violate the First Amendment."

Full Story (comments: 15)

Articles of interest

Free Software Supporter - Issue 71

The February 2014 issue of Free Software Supporter is out, with news from the Free Software Foundation. Topics include: FSF seeks Web Developer, FSF's next seminar on GPL Enforcement and Legal Ethics, FSF joins forces with SFLC and OSI to fight software patents, LibrePlanet, GNU MediaGoblin campaign, LulzBot TAZ 3 3D printer FSF-certified, and much more.

Full Story (comments: none)

Calls for Presentations

PyCon Sweden 2014

PyCon Sweden will be held May 20-21 in Stockholm, Sweden. The call for proposals deadline is March 16.

Full Story (comments: none)

oSC14 Keynote Confirmed

The schedule for the openSUSE Conference is coming together. The conference will be held April 25-28 in Dubrovnik, Croatia. The openSUSE board will open the conference on Friday and Michael Meeks will be the opening keynote speaker on Saturday. The call for papers deadline has been extended until March 31.

Comments (none posted)

Ohio LinuxFest 2014 Call for Presentations

Ohio LinuxFest will take place October 24-26 in Columbus, Ohio. They are looking for presentations on October 24-25. The CfP deadline is July 24, but the sooner the better.

Full Story (comments: none)

CFP Deadlines: March 6, 2014 to May 5, 2014

The following listing of CFP deadlines is taken from the CFP Calendar.

Deadline    Event dates        Event and location
March 10    June 9-10          Erlang User Conference 2014, Stockholm, Sweden
March 14    May 20-22          LinuxCon Japan, Tokyo, Japan
March 14    July 1-2           Automotive Linux Summit, Tokyo, Japan
March 14    May 23-25          FUDCon APAC 2014, Beijing, China
March 16    May 20-21          PyCon Sweden, Stockholm, Sweden
March 17    June 13-15         State of the Map EU 2014, Karlsruhe, Germany
March 21    April 26-27        LinuxFest Northwest 2014, Bellingham, WA, USA
March 31    July 18-20         GNU Tools Cauldron 2014, Cambridge, England, UK
March 31    September 15-19    GNU Radio Conference, Washington, DC, USA
March 31    June 2-4           Tizen Developer Conference 2014, San Francisco, CA, USA
March 31    April 25-28        openSUSE Conference 2014, Dubrovnik, Croatia
April 3     August 6-9         Flock, Prague, Czech Republic
April 4     June 24-27         Open Source Bridge, Portland, OR, USA
April 5     June 13-14         Texas Linux Fest 2014, Austin, TX, USA
April 7     June 9-10          DockerCon, San Francisco, CA, USA
April 14    May 24             MojoConf 2014, Oslo, Norway
April 17    July 9             PGDay UK, near Milton Keynes, UK
April 17    July 8             CHAR(14), near Milton Keynes, UK
April 18    November 9-14      Large Installation System Administration, Seattle, WA, USA
April 18    June 23-24         LF Enterprise End User Summit, New York, NY, USA
April 24    October 6-8        Operating Systems Design and Implementation, Broomfield, CO, USA
April 25    August 1-3         PyCon Australia, Brisbane, Australia
April 25    August 18          7th Workshop on Cyber Security Experimentation and Test, San Diego, CA, USA
May 1       July 14-16         2014 Ottawa Linux Symposium, Ottawa, Canada
May 1       May 12-16          Wireless Battle Mesh v7, Leipzig, Germany
May 2       August 20-22       LinuxCon North America, Chicago, IL, USA
May 2       August 20-22       CloudOpen North America, Chicago, IL, USA
May 3       May 17             Debian/Ubuntu Community Conference - Italia, Cesena, Italy
May 4       July 26-August 1   Gnome Users and Developers Annual Conference, Strasbourg, France

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

FSFE: Registration opens for Document Freedom Day 2014

The Free Software Foundation Europe has announced open registration for Document Freedom Day. "This year the campaign day is March 26th, when people who believe in fair access to communications technology and Open Standards will again present, perform, and demonstrate."

Full Story (comments: none)

Events: March 6, 2014 to May 5, 2014

The following event listing is taken from the Calendar.

Dates             Event and location
March 3-7         Linaro Connect Asia, Macao, China
March 6-7         Erlang SF Factory Bay Area 2014, San Francisco, CA, USA
March 15-16       Chemnitz Linux Days 2014, Chemnitz, Germany
March 15-16       Women MiniDebConf Barcelona 2014, Barcelona, Spain
March 18-20       FLOSS UK 'DEVOPS', Brighton, England, UK
March 20          Nordic PostgreSQL Day 2014, Stockholm, Sweden
March 21          Bacula Users & Partners Conference, Berlin, Germany
March 22          Linux Info Tag, Augsburg, Germany
March 22-23       LibrePlanet 2014, Cambridge, MA, USA
March 24          Free Software Foundation's seminar on GPL Enforcement and Legal Ethics, Boston, MA, USA
March 24-25       Linux Storage Filesystem & MM Summit, Napa Valley, CA, USA
March 26-28       Collaboration Summit, Napa Valley, CA, USA
March 26-28       16. Deutscher Perl-Workshop 2014, Hannover, Germany
March 29          Hong Kong Open Source Conference 2014, Hong Kong, Hong Kong
March 31-April 4  FreeDesktop Summit, Nuremberg, Germany
April 2-4         Networked Systems Design and Implementation, Seattle, WA, USA
April 2-5         Libre Graphics Meeting 2014, Leipzig, Germany
April 3           Open Source, Open Standards, London, UK
April 7-9         ApacheCon 2014, Denver, CO, USA
April 7-8         4th European LLVM Conference 2014, Edinburgh, Scotland, UK
April 8-10        Open Source Data Center Conference, Berlin, Germany
April 8-10        Lustre User Group Conference, Miami, FL, USA
April 11          Puppet Camp Berlin, Berlin, Germany
April 11-13       PyCon 2014, Montreal, Canada
April 12-13       State of the Map US 2014, Washington, DC, USA
April 14-17       Red Hat Summit, San Francisco, CA, USA
April 25-28       openSUSE Conference 2014, Dubrovnik, Croatia
April 26-27       LinuxFest Northwest 2014, Bellingham, WA, USA
April 29-May 1    Embedded Linux Conference, San Jose, CA, USA
April 29-May 1    Android Builders Summit, San Jose, CA, USA
May 1-4           Linux Audio Conference 2014, Karlsruhe, Germany
May 2-3           LOPSA-EAST 2014, New Brunswick, NJ, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds