
LWN.net Weekly Edition for August 11, 2016

Security and reproducible-build progress in Guix 0.11

By Nathan Willis
August 10, 2016

The GNU Guix package-manager project recently released version 0.11, bringing with it support for several hundred new packages, a range of new tools, and some significant progress toward making an entire operating system (OS) installable using reproducible builds.

Guix is a "functional" package manager, built on many of the same ideas found in the Nix package manager. As the Nix site explains it, the functional paradigm means that packages are treated like values in a functional programming language—a purpose-built lazy functional language in Nix's case, Scheme in Guix's. The functions that build and install packages do so without side effects, so the system can easily offer nice features like atomic transactions, rollbacks, and the ability for individual users to build and install separate copies of a package without fear that they will interfere. Part of making such a system reliable is to ensure that builds are "reproducible"—meaning that two builds of the same source, performed on different systems at different times, will produce bit-for-bit identical binaries.

GuixSD improvements

Our last look at Guix coincided with the 0.9 release in November 2015. That article explored the Guix System Distribution (GuixSD), an installable OS built with Guix packages on top of a base Linux system. At that time, however, GuixSD had to be installed manually, which could be a rather involved process. Since then, one of the most significant changes is that GuixSD can now be installed from a live USB image (a feature that debuted with the 0.10 release in March). That installation process can use binary packages, but one of Guix's calling cards is that source installation for packages is supported as well. Indeed, rebuilding every package from source in a reproducible manner was the original goal. The binary-package installation method offered now is seen as a shortcut for those interested in testing the system out.

In addition to USB installation, GuixSD has gained a security update mechanism. In the past, Guix's adherence to the functional package management paradigm posed a bit of a problem for deploying security updates: updating one package would trigger a rebuild for all dependent packages as well. A simple security patch (not introducing any ABI changes) to one version of a package should, in theory, not alter anything that would cause the dependent packages to build differently. But the functional model of the package manager necessitates the rebuilds anyway, so it does cause an inconvenience for the user.

Guix's solution is referred to as grafting. Essentially, a new package including the security fix is created (for instance, in a bash-fixed package to deploy a patched bash), and the definition of the original package (bash) is updated to point to the new package as a replacement. That "grafts" the new package into the dependency tree and prunes out the unpatched package. Consequently, although the dependency graph has changed, the dependent packages have their dependencies satisfied by the new dependency, so they do not need to be rebuilt. Other package managers that do not attempt to impose functional package-management guarantees do not have to go through such a process, but it was an important missing piece for Guix and GuixSD.

GuixSD has also inched closer to being ready for daily usage with the addition of several new system services. Among the new additions in the 0.11 release are mcron, the Dropbear SSH server, the Dico dictionary service, and a random-number-generation service. Support for RAID arrays using mdadm has also been added, as has device mapper support for LUKS-encrypted partitions.

This is also the first GuixSD release to include support for system-wide tests. Although Guix has long had a robust suite of unit tests and it uses continuous-integration tests on individual packages, in the past it has never had a system-testing framework. The 0.11 release closes that gap. The test framework runs GuixSD in a QEMU virtual machine that is connected to the host system with virtio-serial. There are tests defined for basic functionality, such as successfully starting all of the system services, creating user accounts, and so forth, as well as a growing set of tests for specific services. Finally, there is a test that starts the GuixSD installer image in a VM, then installs and boots GuixSD in a separate VM image.

Packages and reproducibility

Considerable progress has also been made toward making the entire Guix system use reproducible builds. In the 0.10 release, a few core packages (such as glibc, Perl, and Python) were bit-for-bit reproducible. The guix challenge command (which compares binary packages to the output of local builds) was introduced in the 0.9 release, which made systematic testing of build reproducibility possible. Naturally, the testing revealed a lot of work for developers. As of 0.11, steady progress is reported on making all packages build reproducibly, although the project does not yet have a tracking page that shows the status of the effort. That said, Guix is one of several free-software projects working on reproducible builds; those individual projects share their results and have been pushing a number of changes upstream.

Raw numbers are provided for the total number of packages changed, though. The 0.11 release adds 484 new packages and updates 678 existing packages. As a bonus, users can now easily share their own local package builds with the community using the guix publish command. The command spawns an HTTP server (on port 8080) providing the package; other users can fetch and add it to their own system using the Guix tool set.

Incorporating binaries built by others has its share of risks, although the availability of guix challenge lessens the likelihood of surreptitious back doors being inserted. Nevertheless, as Guix has added support for more package origins beyond the local build, it has become necessary to provide tooling for users to manage the complexity. Another addition in 0.11 is an Emacs major mode for browsing, inspecting, and changing the sources of individual packages.

Naturally, there are quite a few smaller changes to be found in the new release as well. For instance, Guix supports multiple user profiles on the same system, and those profiles now follow the freedesktop.org XDG standards (including installation directories, menu specifications, and so on). There have also been many improvements to guix lint and other utilities.

Although Guix has now been in active development for more than three years, it is still a young project—and GuixSD is even younger. Both are still flagged as being not yet ready for daily usage, even though they have accumulated plenty of fans in the free-software community. The progress that the team makes would, no doubt, be impressive for any new "distribution" (if that is even the most appropriate term). The fact that Guix takes a starkly different approach to fundamental package-management tasks makes it all the more interesting to watch.

Comments (8 posted)

Better types in C using sparse and smatch

August 10, 2016

This article was contributed by Neil Brown

The primary motivation for my recent examinations of sparse and smatch came from a fascination with the idea that they can be used to make a better, safer version of C. They cannot be used to make it easier to write good programs, but they can make it harder to write bad programs by detecting constructs that are unwanted even though they are not errors in standard C.

Sparse already provides for address_space and bitwise annotations on pointers and integers, respectively, ensuring that types the programmer wants to keep distinct can be kept distinct. Motivated by this existing functionality, and a particular need of my own, I set out to discover if either sparse or smatch (or both) could be used to keep track of which pointers might be null and to warn about any code that could lead to a null pointer being dereferenced. Though I cannot yet declare complete success, the results have been fairly encouraging and distinctly educational. In the interests of sharing this education, the current state of success and failure is presented below.

Preliminary observations

Dereferencing null pointers in C is far from a new concern, so it would be surprising if there was nothing already available to address this concern; a quick scan of the GCC documentation reveals that it already has a "nonnull" attribute for functions. The example in the documentation shows:

    extern void *my_memcpy(void *dest, const void *src, size_t len)
                __attribute__((nonnull (1, 2)));

This declaration tells the compiler that the first and second argument will never be null. Further examination shows that this is not useful for my purposes as it facilitates optimizations more than warnings. The compiler is free to remove any code in my_memcpy() that would only be run if one of those pointers were null, and it may sometimes warn if a null value is passed as an argument. Since it provides no certainty of warning and only applies to function arguments and not, for example, structure fields, I find it of little use.

My particular use case is the editor-building framework project that I spoke about at linux.conf.au in January [video], which currently contains about 18,000 lines of C code. I started out, as in many projects, not really being sure how I wanted various aspects to work. As the project matured, I realized that there were a great many places where I had assumed pointers would be non-null, but where I really should check. This doesn't apply to all pointers; some, by design, must never be null. Others merely should never be null, so checking is indicated. I could audit all that code manually, but I would much rather have a tool to help me.

Looking more closely at the tools at hand, I discovered that sparse knows about a rarely used "safe" attribute that is meant for "non-null/non-trapping pointers". If a variable is declared to be safe as, for example, in:

    char *p __attribute__((safe));

then any attempt to test whether the value of that variable is (or is not) null produces a warning. While this functionality is not, by itself, hugely useful, the fact that sparse already parses and stores the annotation is; it provides a basis on which to build.

A few moments' thought is enough to determine that, while it must always be safe to dereference a safe variable, it does not follow that it is always unsafe to dereference other variables. As a trivial example:

    if (p)
        *p = 0;

must always be safe, at least against dereferencing a null pointer. This sort of dependency is not something that sparse is able to resolve, but it is exactly the sort of thing that smatch was built to handle.

As smatch was built on sparse, it has access to the safe attribute too, though it doesn't keep track of attributes quite as well as sparse and needs some coaxing. Once this attribute is tracked properly, smatch should be able to know when a variable is safe, either because it was annotated as being safe, or because its value has recently been tested and found to be non-null. As we found in my recent analysis, it is quite easy to extend smatch with a new checker, so that seemed like a profitable course to follow.

Building a checker for safe pointer dereferencing

Building a new checker for smatch is quite easy, though I must thank Dan Carpenter for providing me with an early example to work from. That example has since been discarded and rebuilt from scratch, but the knowledge gained was invaluable. A sanitized development history of my checker can be seen on GitHub with the first revision limited to reporting all the places in the code where the DEREF_HOOK is called. As this checker will eventually expect to find safe annotations and so will complain extensively about any program that isn't appropriately annotated, the checker will only activate if SMATCH_CHECK_SAFE is set in the environment. With this environment variable set, the enhanced smatch can be run on any C program and will report all the places where a pointer dereference is found. Somewhat surprisingly, it reports on a lot more too.

In most of the computer programming world, the term "dereference" is reserved for pointers. A "reference" is another name for a "pointer", and when code accesses the memory pointed to, it is said to be "dereferencing" that pointer. However, in sparse, the term DEREF — or more specifically EXPR_DEREF — refers to the operation of accessing a member within a structure, that is, the dot (".") operator. So a construct like a->b is converted to (*a).b and parsed as:

	EXPR_DEREF( EXPR_PREOP('*', EXPR_SYMBOL('a')), 'b')

so dereferencing is a * prefix operation, and the dot operator is called EXPR_DEREF. Since sparse uses this terminology, it makes some sense for smatch to use it too, so DEREF_HOOK hooks fire both for member access and for real pointer dereference with the * operator. Once this is understood, it is easy to only consider DEREF_HOOK calls when an EXPR_PREOP expression is given.

With this more precise accounting, my project reports 7104 dereference operations — some of which I know to be unsafe, most of which I hope are safe and that I want the checker to confirm are safe. Now that the prototype checker is finding the target expressions, the implied_not_equal() interface provided by smatch can be used to start ignoring dereferences that can be determined to be safe. Adding that call reduces the number of dereferences reported to 1643. This large drop might seem to suggest that I had already been quite careful but, alas, this is not the case. When smatch notices that a pointer has been dereferenced, it records that it must now have a value in the range for valid pointers. This means that subsequent dereferencing on the same value will notice that the value is certainly not NULL. So a large part of this drop is just removing noise rather than detecting known-safe usage.

The next step involves adding a large number of __attribute__((safe)) annotations and updating the code to check for these. The word safe currently appears 871 times in my code, so this was not a trivial task, but as I had a tool to help me find places where it was needed, it was largely a mechanical one. Here the use of sparse in parallel with smatch was particularly useful. Though smatch shares much code with sparse, it does not perform all the same tests. In particular it doesn't complain if a safe value is tested, and doesn't complain if a function declaration uses different annotations from the function definition. Using sparse, I could be sure that functions were declared consistently and would often be warned when I declared something as safe that I probably shouldn't have.

Actually adding the text __attribute__((safe)) throughout the project would have resulted in extremely ugly code, but that is just the sort of problem that the C pre-processor turns into a non-problem:

    #ifdef __CHECKER__
    #define safe __attribute__((safe))
    #else
    #define safe
    #endif

Now I just use the simple word safe, e.g.:

    struct pane *focus safe;

With lots of annotations and a version of my checker that ignores safe values, I had reduced the number of interesting pointer dereferences down to 786; still too many, but some low-hanging fruit remained. One pattern that showed up repeatedly when adding safe annotations was that a safe value, possibly from a function parameter or a structure member, would be assigned to a local variable, and then the local variable would be dereferenced. Marking that local variable as safe seemed excessive; tracking this sort of status is exactly what smatch is good for.

After a little code rearrangement, a new hook was added to process all assignments and to mark the variable on the left as safe if the value on the right was known to be non-null. As with dereferences, we need to be selective about which assignments are considered: assignments like "+=" will never change the safe status of the left-hand-side, so only simple "=" assignments need to be considered. The easiest way to mark a variable as safe is to define a smatch state and associate that with the left-hand expression, and to be sure to remove it when there is the possibility of a null value being assigned. Doing this brings the number of interesting dereferences down to 374.

We are now using two distinct states to record that a variable may be safe to reference: the new "safe" state that is assigned when a value is assigned with a safe value, and the numeric-range state that is maintained internally by smatch. This causes a little confusion when the two need to be merged. For example in the code fragment:

    if (!p)
        p = safe_pointer;
    *p = 0;

For the case where p was originally null, the checker will mark p with the safe state when safe_pointer is assigned to it. For the case where p was not null, smatch will record this fact in its numeric-range state. When the code *p = 0 is reached, those two states will not have been merged as they are incompatible. Instead, the checker would need to examine the tree of historical states (described in the previous smatch article) and ensure that each branch is safe. This issue doesn't affect many cases in my code and so hasn't been addressed yet.

Once we have the option of marking variables, fields, functions, and function parameters as safe, we have introduced new places where errors can occur: only safe values may be assigned to, returned from, or passed into these various places. Given the infrastructure we already have, these checks can be added to the assignment hook, to a new function call hook, and to a return hook with a minimum of fuss, though, as the return hook doesn't know the type of the function, it needs to pass information to the end-of-function hook.

These various checks add nearly 500 new warning sites and, while this sounds like a lot, it doesn't really add new classes of errors. A good number of these reports are the actual errors that I wanted to find, where I haven't been careful enough and want to be reminded that I should add proper checking. Most of the rest fit into one of a small number of categories, some of which can be addressed with improvements to the assessment of when a value is safe, but some that will require more major surgery to properly resolve.

Detecting more "safe" values

Supporting pointer arithmetic is necessary in order to handle array references, as these are translated to pointer addition early in the parsing process. Using the lower-order bits of a pointer (that would normally be zero) to store some flags or other data is a technique that should be familiar to most kernel programmers. A simple example of this is the "red-black tree" code which stores the "color" of a node in the least significant bit of the parent pointer. The bit masking needed to extract a pointer, like the addition needed for arrays, needs to be recognized and handled by the dereference checker so that they don't cause it to lose track of which pointers are safe. This is not particularly hard, but requires more care than the other steps. Adding this reduces the number of possible null dereferences from 374 to 319.

A slight variation of pointer arithmetic is taking the address of a member of a structure. If ptr is a safe pointer to a structure containing the field member, then &(ptr->member) must be a safe pointer as well. Though such a construct will rarely be dereferenced directly, it will often be passed as an argument to a function. When trying to recognize a construct such as this within smatch, it is important to remember that the expression data structures used have not been completely normalized yet so, for example, parentheses and casts might still be present. Smatch provides strip_parens() that will just remove any enclosing parentheses, and strip_expr() that will also strip away casts and a few other constructs that are often uninteresting. Using these, an expression that finds the address of a structure member by way of a dereferenced pointer can be detected, and then the safety of that inner pointer assessed. Adding this check removed nearly 160 warnings about unsafe values being passed as function arguments.

Making allowances for code included from common header files is sometimes easy and sometimes challenging. If it is just a function declaration that needs some safe annotation, then just adding a new declaration to a local header file will often suffice:

    char *strncat(char *s1 safe, char *s2 safe, int n) safe;

The Python C-API provides some interfaces as macros that will dereference pointers that the programmer cannot declare as safe without changing the installed header files. Smatch provides an easy way to see if some code came from a macro expansion, but doesn't make it easy to tell if that macro was defined in a system include file — and so could be treated leniently — or in a local file — and so should be treated strictly. Adding a check for macros and ignoring any dereference that came from them removes about 100 warnings from external macros, but, unfortunately, it also removes about 70 warnings from macros local to the package that should be treated more strictly.

A need for a richer type language

After the easy (and the not-quite-so-easy) mechanisms for tracking safe pointers have been dealt with, the remaining warnings are a fairly even mix of bugs that should be fixed and use cases that I know are safe for reasons that cannot be described with a simple safe annotation. These fit into two general classes.

First, there are some structures in which certain fields are normally guaranteed to be non-null, but within specific regions of code — typically during initialization — they might be null. I really want two, or maybe more, variants of a particular structure type: one where various fields are safe and one where they aren't. Then, when using a pointer to the non-safe type in a context where the safe version is needed, the individual members could be analyzed and a warning given if the members weren't as safe as they should be. More generally, this seems to fit the concept of a parameterized type where the one type can behave differently in different contexts. Allowing some attribute to apply to a structure in a way that affects members of the structure seems conceptually simple enough. Retro-fitting the parsing and processing of those attributes to sparse would be a more daunting task.

The second class is best typified by an extensible buffer like:

    struct buf {
        char *text;
        unsigned int len;
    };

If len is zero, then text may be NULL. If len is not zero, then text will not be NULL (i.e. will be safe) and in fact will have len bytes allocated. I feel I want to write:

    char * text __attribute__(("cond-safe",len > 0));

This is similar to a parameterized type except that the variation in type is caused by a value within the structure rather than an attribute or parameter imposed on the structure. This sort of construct is normally referred to as a "dependent type", as the type of one field is dependent on the value of another. I have no doubt that smatch could be taught to handle the extra dependency of these dependent types, providing that sparse could parse them and record the dependency properly.

Properly resolving these two would require a substantial effort and so is unlikely to happen quickly. As an alternative, I can fall back on the time-honored C tradition of using a type cast to hide code that the compiler cannot verify. If I have a pointer that I know to be safe, I can cast it to (TYPE *safe), or, if I have a value that sparse thinks is safe but which I want to test anyway, I can test (void *)safe_pointer. With luck, this will allow all of the current warnings to be removed without too much ugliness.

Other possibilities

While I was working on this extension to smatch, the preliminary email discussions leading towards this year's Linux Kernel Summit were underway and Eric Biederman, quite independently, started a discussion thread titled "More useful types in the linux kernel" to explore the idea of strengthening the type system of C in order to benefit the development of the Linux kernel.

Biederman was initially thinking of a GCC plugin rather than enhancements to sparse, and his interest in pointer safety was more around whether appropriate locks and reference counts were held, rather than my simple question of whether the pointers are null or not. Stepping back from those details, though, the general idea seemed similar to my overall goal and it was pleasing to know that if this was a crazy idea I, at least, wasn't the only one to have it.

Subsequent discussion showed that, though not everyone wants to run a time-consuming checker every time they compile their code, many people would like to see more rigorous checks being applied. One observation that was particularly relevant to my work was that, in the kernel, pointers can have three different sorts of values: they can be valid, they can be null, or they can store a small negative error code. In the context of the kernel, just testing that a pointer is not zero is not enough to be sure it can safely be dereferenced.

There was even a suggestion that a function declaration might explicitly list the possible error codes that might be returned, which would make for a much richer type annotation than the simple safe flag that I have been working with. Whether this sort of detail is really worth the effort is hard to know without trying. It may allow us to automatically catch a lot more errors and provide reliable API documentation, but it might — as James Bottomley feared — end up as "a lot of pain, for what gain?"

As is often the case, abstract discussion is only of limited use. To find real answers we need to see real code and real results. When the required language extension is a single attribute that is already parsed by sparse, the exercise described here shows that getting those results is challenging but not prohibitive. For any more adventurous extensions, sparse would need to be taught to parse more complex attributes, and the difficulty of such a project is not one that I am able to estimate as yet. However, we are a large community and there are clearly a few people interested. It is reasonable to hope that such extensions may yet be attempted and the results reported.

Comments (29 posted)

A quick thank-you from LWN

Last week's article about subscriptions led to a most welcome spike in subscribers. We would like to thank all of you who decided to buy subscriptions after seeing the article; the resulting increase has put us almost at the level we were at one year ago. Needless to say, that is most welcome, but it can also be seen as only a beginning. If you appreciate LWN and are not a subscriber, please consider picking up a subscription and helping to ensure that we remain on the net.

Meanwhile, our open position for a full-time editor remains open. Thanks to those of you who have applied. If you have not applied and think you might be interested, there is still time; we would also ask readers to encourage anybody they think might be suited to the job.

Comments (23 posted)

Page editor: Jonathan Corbet

Security

The TCP "challenge ACK" side channel

By Jake Edge
August 10, 2016

Side-channel attacks against various kinds of protocols (typically networking or cryptographic) are both dangerous and often hard for developers and reviewers to spot. They are generally passive attacks, which makes them hard to detect as well. A recent paper [PDF] describes in detail one such attack against the kernel's TCP networking stack; the bug (CVE-2016-5696) has existed since Linux 3.6, which was released in 2012. Ironically, the bug was introduced because Linux implemented a countermeasure against another type of attack.

There are a number of pieces of information that an attacker needs to interfere with a TCP connection between two hosts. To start with, the so-called four-tuple, which consists of the source IP address, source port number, destination IP address, and destination port number, is needed. Several of those values can be guessed or inferred (e.g. destination IP and port), but there is another piece of information needed to actually interfere with a connection.

TCP has 32-bit sequence numbers that are used to order the packets in the connection stream. They are also an important part of the packets used to establish and break down connections. A packet that could interfere with a connection must have a sequence number that is within the receive window of the target. That window effectively determines the range of sequence numbers that are acceptable.

Once upon a time, in a far more trusting era, sequence numbers were fairly easily predicted, but those days are long gone. These days, sequence numbers are randomized to thwart various kinds of packet-injection and connection-spoofing attacks. An eavesdropper can still observe the sequence numbers in a conversation, but an "off-path" attacker must guess. By randomizing the initial sequence number (ISN) used by a connection, network stacks make guessing difficult enough to stop most off-path attacks.

But if a way can be found to more quickly narrow in on the sequence numbers used in a connection, off-path attackers can be more efficient in their probing—to the point where they can inject packets into an established connection. That is effectively what the researchers found.

But first, there is another obstacle to overcome: according to the paper, off-path attacks have generally been limited by the need to get unprivileged malware running on one of the endpoints to determine whether two hosts are actually communicating. The researchers, however, found a way to quickly determine whether two hosts are communicating and what port numbers they are using, without any assistance from malware.

Linux is the only operating system vulnerable to this attack because it is the only one that has faithfully implemented RFC 5961, which was proposed to avoid a different kind of packet injection attack. It uses "challenge ACKs" to avoid resetting a connection when a spoofed connection request (SYN) or connection termination (RST) packet with a sequence number within the receive window is received. The challenge ACK will allow long-lived connections to be more resistant to these spoofed packets that are meant to close the connection.

The challenge ACKs require that the original sender reply with the exact sequence number expected for the next packet, not just one within the receive window, which is more difficult for an off-path attacker to arrange. But challenge ACKs also consume resources, so the RFC recommends that a limit be imposed on the number of challenge ACKs sent over a given time frame (Linux used 100/second by default). Since challenge ACKs were expected to be rare occurrences, the counter for rate-limiting them was global for all TCP connections on the system—and the RFC specifically directed that regular ACKs should not be counted. Because of this, challenge ACKs provide a side channel:

At a very high level, the vulnerability allows an attacker to create contention on a shared resource, i.e., the global rate limit counter on the target system by sending spoofed packets. The attacker can then subsequently observe the effect on the counter changes, measurable through probing packets.

Through extensive experimentation, we demonstrate that the attack is extremely effective and reliable. Given any two arbitrary hosts, it takes only 10 seconds to successfully infer whether they are communicating. If there is a connection, subsequently, it takes also only tens of seconds to infer the TCP sequence numbers used on the connection.

The general outlines of an attack are as follows. The attacker establishes an ordinary connection to the server, then sends a stream of bogus RST and SYN packets to force the target to generate the maximum number of challenge ACKs. Some spoofed packets "from" the client of interest are also sent. If all of the expected 100 challenge ACKs are received on the regular connection, then the four-tuple in the spoofed packet does not represent an active connection, but if some are missing, they must have been sent as challenge ACKs in response to spoofed packets, indicating that a connection exists.
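The counting trick can be sketched as a toy model (all names and numbers here are invented for illustration; the real attack sends spoofed raw TCP packets over the network, not function calls):

```python
# Toy model of the challenge-ACK counting side channel (illustrative only;
# the real attack sends spoofed raw TCP packets, not function calls).

CHALLENGE_ACK_LIMIT = 100   # old Linux default, global across all connections

class Server:
    def __init__(self, active_connections):
        self.active = set(active_connections)  # live (host, port) endpoints
        self.acks_left = CHALLENGE_ACK_LIMIT   # shared per-second budget

    def receive_rst(self, endpoint):
        # An in-window RST on a live connection provokes a challenge ACK,
        # drawn from the single global budget.
        if endpoint in self.active and self.acks_left > 0:
            self.acks_left -= 1
            return True    # a challenge ACK went out
        return False

def connection_exists(server, guess, probes=10):
    # New one-second interval: the budget starts full.
    server.acks_left = CHALLENGE_ACK_LIMIT
    # Spoofed probes "from" the guessed client; the attacker never sees
    # any replies to these.
    for _ in range(probes):
        server.receive_rst(guess)
    # Now exhaust the budget on the attacker's own connection and count.
    received = sum(server.receive_rst(("attacker", 1234))
                   for _ in range(CHALLENGE_ACK_LIMIT))
    # Fewer than 100 ACKs means the spoofed probes consumed some of the
    # shared budget: the guessed connection must exist.
    return received < CHALLENGE_ACK_LIMIT

srv = Server({("victim", 22), ("attacker", 1234)})
print(connection_exists(srv, ("victim", 22)))   # True
print(connection_exists(srv, ("victim", 80)))   # False
```

The crucial property is that the budget is shared: probes aimed at a connection the attacker cannot observe still deplete a counter the attacker can measure.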

Once that is established, in-window sequence numbers need to be determined—challenge ACKs can help there too. Once again, challenge ACKs are provoked using a normal connection and the number received is counted. Spoofed RST packets with a guessed sequence number are also sent; the number of challenge ACKs received on the regular connection allows the attacker to infer whether the guessed sequence number is within the window. Further probing with spoofed ACKs can narrow things down to the exact sequence number expected for the next packet.
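The narrowing step can likewise be sketched as a simulation (hypothetical names; the real attack also has to contend with 32-bit sequence wraparound and probe timing, which this toy ignores):

```python
# Toy model of in-window sequence-number inference (illustrative only; the
# real attack spoofs raw RST packets and must handle sequence wraparound
# and timing noise, all ignored here).

LIMIT = 100      # global challenge-ACK budget per second (old Linux default)
WINDOW = 8192    # assumed receive-window size

class Conn:
    def __init__(self, expected_seq):
        self.expected = expected_seq   # the secret the attacker wants
        self.acks_left = LIMIT

    def spoofed_rst(self, seq):
        # An RST whose sequence number is in-window (but not exact) draws
        # a challenge ACK from the shared global budget.
        if self.expected <= seq < self.expected + WINDOW and self.acks_left > 0:
            self.acks_left -= 1

    def probe_budget(self):
        # The attacker measures the remaining budget through a connection
        # it controls; a new one-second interval then resets the counter.
        left, self.acks_left = self.acks_left, LIMIT
        return left

def find_expected_seq(conn):
    # Phase 1: stride through sequence space in window-sized steps; exactly
    # one guess per window is guaranteed to land inside it.
    for seq in range(0, 2**32, WINDOW):
        conn.spoofed_rst(seq)
        if conn.probe_budget() < LIMIT:
            hit = seq
            break
    # Phase 2: binary search in (hit - WINDOW, hit] for the left edge of
    # the window, i.e. the exact sequence number expected next.
    lo, hi = hit - WINDOW, hit
    while hi - lo > 1:
        mid = (lo + hi) // 2
        conn.spoofed_rst(mid)
        if conn.probe_budget() < LIMIT:   # mid was in-window
            hi = mid
        else:
            lo = mid
    return hi

conn = Conn(123456789)
print(find_expected_seq(conn))   # 123456789: the secret recovered by counting
```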

Two kinds of attacks are described in the paper. The easiest is to simply reset an in-progress connection. The other hijacks the connection to inject content of the attacker's choosing. The paper describes the former being reliably deployed against SSH and Tor connections, while it mentions the latter being targeted at long-lived connections for data like video streams, advertisements, or news sites.

There are some more wrinkles to the attack, of course, including synchronizing with the host's clock so that the one-second boundary can be reliably determined. That, too, uses challenge ACKs. Other hurdles are also discussed in the paper. But the attack can have far-reaching effects as the team's short YouTube video demonstrates. It injects some JavaScript into a web session to display attacker-controlled content.

The research was also highlighted in an article in a University of California, Riverside (UCR) publication as most of the researchers are students or faculty there. Yue Cao, Zhiyun Qian, Zhongjie Wang, Tuan Dao and Srikanth V. Krishnamurthy of UCR were joined by Lisa M. Marvel from the United States Army Research Laboratory in writing the paper, which was presented at the USENIX Security Symposium on August 10.

It is an interesting and clever attack that sadly only lacks a catchy name, colorful logo, and hype-filled web site. Cao did alert kernel developers to the problem, which was fixed in the mainline in July (and appears in the 4.7 kernel). The fix raises the limit to 1000 challenge ACKs per second, but also adds some randomization to the value so that counting will be less effective. In addition, the patch notes that per-socket rate-limiting is available, which could lead to the removal of the global challenge ACK count down the road; some work toward that end has been merged as well.

The fix has not made it to the stable kernels yet, but there is a mitigation available in the form of the tcp_challenge_ack_limit sysctl knob. Setting that value to something enormous (e.g. 999999999) will make it much harder for attackers to exploit the flaw.

Spoofing source IP addresses is not technically difficult, though it may be hard to get the packet through routers and the like in some cases. A Center for Applied Internet Data Analysis study shows that nearly half of the autonomous systems on the internet are at least partly spoofable, though. There are, as yet, no reports of attacks using this technique in the wild, though one would guess it won't be long before we do see some.

In the end, challenge ACKs seem a reasonable solution to a real problem, but Linux played the role of a guinea pig here. There are upsides to doing that, such as providing a platform where the researchers could discover the problems in the RFC. There are downsides, as well; Linux is currently getting some bad press about its networking implementation, for example. On the whole, though, these problems needed to be found—and now they are.

Comments (15 posted)

Brief items

Security quotes of the week

"Other players that possess the potential ability to limit piracy are the companies that own the major operating systems which control computers and mobile devices such as Apple, Google and Microsoft," one of the main conclusions reads.

"The producers of operating systems should be encouraged, or regulated, for example, to block downloads of copyright infringing material," the report adds.

Ernesto Van Der Sar on a report [PDF in Swedish] from the Black Market Watch and the Global Initiative against Transnational Organized Crime

If we're facing a situation where we see tampering on a massive scale, we could end up in a crisis far worse than Florida after the Bush/Gore election of 2000. If we do nothing until after we find problems, every proposed solution will be tinted with its partisan impact, making it difficult to reach any sort of procedural consensus. Nobody wants to imagine a case where our electronic voting systems have been utterly compromised, but if we establish processes and procedures, in advance, for dealing with these contingencies, such as commissioning paper ballots and rerunning the elections in impacted areas, we will disincentivize foreign election adversaries and preserve the integrity of our democracy.
Dan Wallach

Comments (18 posted)

Breaking through censorship barriers, even when Tor is blocked (Tor Blog)

The Tor Blog looks at using Pluggable Transports to avoid country-level Tor blocking. There are some new easy-to-follow graphical directions for using the transports. "Many repressive governments and authorities benefit from blocking their users from having free and open access to the internet. They can simply get the list of Tor relays and block them. This bars millions of people from access to free information, often including those who need it most. We at Tor care about freedom of access to information and strongly oppose censorship. This is why we've developed methods to connect to the network and bypass censorship. These methods are called Pluggable Transports (PTs). Pluggable Transports are a type of bridge to the Tor network. They take advantage of various transports and make encrypted traffic to Tor look like not-interesting or garbage traffic. Unlike normal relays, bridge information is kept secret and distributed between users via BridgeDB."

Comments (2 posted)

Study Highlights Serious Security Threat to Many Internet Users (UCR Today)

UCR Today reports that researchers at the University of California, Riverside have identified a weakness in the Transmission Control Protocol (TCP) in Linux that enables attackers to hijack users’ internet communications remotely. "The UCR researchers didn’t rely on chance, though. Instead, they identified a subtle flaw (in the form of ‘side channels’) in the Linux software that enables attackers to infer the TCP sequence numbers associated with a particular connection with no more information than the IP address of the communicating parties. This means that given any two arbitrary machines on the internet, a remote blind attacker, without being able to eavesdrop on the communication, can track users’ online activity, terminate connections with others and inject false material into their communications."

Comments (8 posted)

Check Point's "QuadRooter" vulnerabilities

Check Point has discovered four local-root vulnerabilities in Qualcomm-based Android devices and is hyping the result as "QuadRooter". "QuadRooter is a set of four vulnerabilities affecting Android devices built using Qualcomm chipsets. Qualcomm is the world’s leading designer of LTE chipsets with a 65% share of the LTE modem baseband market. If any one of the four vulnerabilities is exploited, an attacker can trigger privilege escalations for the purpose of gaining root access to a device." Actually getting the report requires registration. All four vulnerabilities are in Android-specific code; three of them are in out-of-tree modules (kgsl and ipc_router); the fourth is in the "ashmem" code in the staging tree.

Comments (14 posted)

New vulnerabilities

bsdiff: denial of service

Package(s): bsdiff  CVE #(s): CVE-2014-9862
Created: August 8, 2016  Updated: November 3, 2016
Description: From the CVE entry:

Integer signedness error in bspatch.c in bspatch in bsdiff, as used in Apple OS X before 10.11.6 and other products, allows remote attackers to execute arbitrary code or cause a denial of service (heap-based buffer overflow) via a crafted patch file.

Alerts:
Debian-LTS DLA-697-1 bsdiff 2016-11-03
Mageia MGASA-2016-0288 bsdiff 2016-08-31
openSUSE openSUSE-SU-2016:1977-1 bsdiff 2016-08-06

Comments (none posted)

chromium: multiple vulnerabilities

Package(s): Chromium  CVE #(s): CVE-2016-5139 CVE-2016-5140 CVE-2016-5141 CVE-2016-5142 CVE-2016-5143 CVE-2016-5144 CVE-2016-5145 CVE-2016-5146
Created: August 8, 2016  Updated: August 18, 2016
Description: From the openSUSE advisory:

Chromium was updated to 52.0.2743.116 to fix the following security issues: (boo#992305)

  • CVE-2016-5141: Address bar spoofing (boo#992314)
  • CVE-2016-5142: Use-after-free in Blink (boo#992313)
  • CVE-2016-5139: Heap overflow in pdfium (boo#992311)
  • CVE-2016-5140: Heap overflow in pdfium (boo#992310)
  • CVE-2016-5145: Same origin bypass for images in Blink (boo#992320)
  • CVE-2016-5143: Parameter sanitization failure in DevTools (boo#992319)
  • CVE-2016-5144: Parameter sanitization failure in DevTools (boo#992315)
  • CVE-2016-5146: Various fixes from internal audits, fuzzing and other initiatives (boo#992309)
Alerts:
Gentoo 201610-09 chromium 2016-10-29
Ubuntu USN-3058-1 oxide-qt 2016-09-14
Arch Linux ASA-201608-16 chromium 2016-08-17
Fedora FEDORA-2016-e9798eaaa3 chromium 2016-08-12
Mageia MGASA-2016-0279 chromium-browser-stable 2016-08-09
Debian DSA-3645-1 chromium-browser 2016-08-09
Red Hat RHSA-2016:1580-01 chromium-browser 2016-08-09
openSUSE openSUSE-SU-2016:1983-1 Chromium 2016-08-08
openSUSE openSUSE-SU-2016:1982-1 Chromium 2016-08-08

Comments (none posted)

Firefox: denial of service

Package(s): firefox, nss, thunderbird  CVE #(s): CVE-2016-2839
Created: August 5, 2016  Updated: September 7, 2016
Description:

From the openSUSE advisory:

Cairo rendering crash due to memory allocation issue with FFmpeg 0.10.

Alerts:
openSUSE openSUSE-SU-2016:2378-1 Thunderbird 2016-09-25
openSUSE openSUSE-SU-2016:2254-1 thunderbird 2016-09-07
openSUSE openSUSE-SU-2016:2253-1 thunderbird 2016-09-07
SUSE SUSE-SU-2016:2195-1 firefox 2016-08-30
SUSE SUSE-SU-2016:2131-1 MozillaFirefox 2016-08-22
SUSE SUSE-SU-2016:2061-1 firefox, nspr, nss 2016-08-12
openSUSE openSUSE-SU-2016:2026-1 firefox, mozilla-nss 2016-08-11
Slackware SSA:2016-219-02 firefox 2016-08-06
Fedora FEDORA-2016-7dd68d253f firefox 2016-08-05
Ubuntu USN-3044-1 firefox 2016-08-05
openSUSE openSUSE-SU-2016:1964-1 MozillaFirefox, mozilla-nss 2016-08-05
Gentoo 201701-15 firefox thunderbird 2017-01-04
Gentoo 201701-15 firefox 2017-01-03

Comments (none posted)

firefox: multiple vulnerabilities

Package(s): firefox  CVE #(s): CVE-2016-2835 CVE-2016-5250 CVE-2016-5251 CVE-2016-5255 CVE-2016-5260 CVE-2016-5261 CVE-2016-5266 CVE-2016-5268
Created: August 5, 2016  Updated: October 28, 2016
Description:

From the Arch Linux advisory:

CVE-2016-2835 - Mozilla developers and community members reported several memory safety bugs in the browser engine used in firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.

CVE-2016-5250 - Information disclosure through Resource Timing API during page navigation.

CVE-2016-5251 - Location bar spoofing via data URLs with malformed/invalid mediatypes.

CVE-2016-5255 - Crash in incremental garbage collection in JavaScript.

CVE-2016-5260 - Form input type change from password to text can store plain text password in session restore file.

CVE-2016-5261 - Integer overflow in WebSockets during data buffering.

CVE-2016-5266 - Information disclosure and local file manipulation through drag and drop.

CVE-2016-5268 - Spoofing attack through text injection into internal error pages.

Alerts:
Ubuntu USN-3112-1 thunderbird 2016-10-27
Debian-LTS DLA-658-1 icedove 2016-10-16
SUSE SUSE-SU-2016:2513-1 firefox 2016-10-12
SUSE SUSE-SU-2016:2431-1 firefox 2016-10-04
SUSE SUSE-SU-2016:2434-1 firefox 2016-10-04
Mageia MGASA-2016-0329 firefox/rootcerts/nss 2016-09-28
Debian-LTS DLA-636-1 firefox-esr 2016-09-27
Debian DSA-3674-1 firefox-esr 2016-09-22
CentOS CESA-2016:1912 firefox 2016-09-22
CentOS CESA-2016:1912 firefox 2016-09-22
CentOS CESA-2016:1912 firefox 2016-09-22
Scientific Linux SLSA-2016:1912-1 firefox 2016-09-21
Red Hat RHSA-2016:1912-01 firefox 2016-09-21
Arch Linux ASA-201609-3 thunderbird 2016-09-04
SUSE SUSE-SU-2016:2195-1 firefox 2016-08-30
SUSE SUSE-SU-2016:2131-1 MozillaFirefox 2016-08-22
SUSE SUSE-SU-2016:2061-1 firefox, nspr, nss 2016-08-12
openSUSE openSUSE-SU-2016:2026-1 firefox, mozilla-nss 2016-08-11
Slackware SSA:2016-219-02 firefox 2016-08-06
Fedora FEDORA-2016-7dd68d253f firefox 2016-08-05
Ubuntu USN-3044-1 firefox 2016-08-05
openSUSE openSUSE-SU-2016:1964-1 MozillaFirefox, mozilla-nss 2016-08-05
Arch Linux ASA-201608-2 firefox 2016-08-05
Gentoo 201701-15 firefox thunderbird 2017-01-04
Gentoo 201701-15 firefox 2017-01-03

Comments (none posted)

flex: buffer overflow

Package(s): flex  CVE #(s): CVE-2016-6354
Created: August 9, 2016  Updated: February 2, 2017
Description: From the Red Hat bugzilla:

It was found that flex incorrectly resized the num_to_read variable in yy_get_next_buffer. The buffer is resized if this value is less than or equal to zero.

With specially crafted input, it is possible that the buffer is not resized when the input is larger than the default buffer size of 16KB, allowing a heap buffer overflow.

It may be possible to exploit this remotely, depending on the application that is built using flex.

Alerts:
openSUSE openSUSE-SU-2016:2450-1 flex, at, libbonobo, netpbm, openslp, sgmltool, virtuoso 2016-10-04
openSUSE openSUSE-SU-2016:2378-1 Thunderbird 2016-09-25
openSUSE openSUSE-SU-2016:2254-1 thunderbird 2016-09-07
openSUSE openSUSE-SU-2016:2253-1 thunderbird 2016-09-07
Debian DSA-3653-2 flex 2016-09-04
SUSE SUSE-SU-2016:2195-1 firefox 2016-08-30
openSUSE openSUSE-SU-2016:2182-1 firefox, nss 2016-08-29
openSUSE openSUSE-SU-2016:2167-1 Firefox 2016-08-27
Debian DSA-3653-1 flex 2016-08-25
SUSE SUSE-SU-2016:2131-1 MozillaFirefox 2016-08-22
SUSE SUSE-SU-2016:2061-1 firefox, nspr, nss 2016-08-12
Fedora FEDORA-2016-c9ad9582f7 flex 2016-08-08
openSUSE openSUSE-SU-2017:0356-1 seamonkey 2017-02-02
Gentoo 201701-31 flex 2017-01-11
Fedora FEDORA-2016-8d79ade826 flex 2016-12-10
Mageia MGASA-2016-0396 flex 2016-11-23

Comments (none posted)

fontconfig: privilege escalation

Package(s): fontconfig  CVE #(s): CVE-2016-5384
Created: August 9, 2016  Updated: December 15, 2016
Description: From the Debian advisory:

Tobias Stoeckmann discovered that cache files are insufficiently validated in fontconfig, a generic font configuration library. An attacker can trigger arbitrary free() calls, which in turn allows double free attacks and therefore arbitrary code execution. In combination with setuid binaries using crafted cache files, this could allow privilege escalation.

Alerts:
Oracle ELSA-2016-2601 fontconfig 2016-11-10
Red Hat RHSA-2016:2601-02 fontconfig 2016-11-03
openSUSE openSUSE-SU-2016:2272-1 fontconfig 2016-09-09
Mageia MGASA-2016-0287 fontconfig 2016-08-31
Ubuntu USN-3063-1 fontconfig 2016-08-17
Fedora FEDORA-2016-6802f2e52a fontconfig 2016-08-18
Debian-LTS DLA-587-1 fontconfig 2016-08-09
Fedora FEDORA-2016-e23ab56ce3 fontconfig 2016-08-08
Debian DSA-3644-1 fontconfig 2016-08-08
Scientific Linux SLSA-2016:2601-2 fontconfig 2016-12-14

Comments (none posted)

glibc: denial of service

Package(s): glibc  CVE #(s): CVE-2016-5417
Created: August 8, 2016  Updated: August 10, 2016
Description: From the Arch Linux advisory:

The sockaddr_in6 allocated in resolv/res_init.c:317 is not freed, leaking 28 bytes per thread using the resolver (according to valgrind). The leak is triggered if name resolution functions are called in such a way that internal resolver data structures are only initialized partially. This issue may ultimately lead to denial of service by leaking extensive amounts of memory.

Alerts:
Arch Linux ASA-201608-7 lib32-glibc 2016-08-08
Arch Linux ASA-201608-6 glibc 2016-08-08

Comments (none posted)

hawk2: clickjacking prevention

Package(s): hawk2  CVE #(s):
Created: August 4, 2016  Updated: August 12, 2016
Description: From the SUSE advisory:

To prevent Clickjacking attacks, set Content-Security-Policy to frame-ancestors 'self' (bsc#984619)

Alerts:
openSUSE openSUSE-SU-2016:2028-1 hawk2 2016-08-11
SUSE SUSE-SU-2016:1946-1 hawk2 2016-08-03

Comments (none posted)

kernel: denial of service

Package(s): kernel  CVE #(s): CVE-2015-8019
Created: August 5, 2016  Updated: August 10, 2016
Description:

From the SUSE advisory:

The skb_copy_and_csum_datagram_iovec function in net/core/datagram.c in the Linux kernel did not accept a length argument, which allowed local users to cause a denial of service (memory corruption) or possibly have unspecified other impact via a write system call followed by a recvmsg system call.

Alerts:
SUSE SUSE-SU-2016:1961-1 kernel 2016-08-04

Comments (none posted)

kernel: two vulnerabilities

Package(s): kernel  CVE #(s): CVE-2016-6136 CVE-2016-5400
Created: August 9, 2016  Updated: August 10, 2016
Description: From the CVE entries:

Race condition in the audit_log_single_execve_arg function in kernel/auditsc.c in the Linux kernel through 4.7 allows local users to bypass intended character-set restrictions or disrupt system-call auditing by changing a certain string, aka a "double fetch" vulnerability. (CVE-2016-6136)

Memory leak in the airspy_probe function in drivers/media/usb/airspy/airspy.c in the airspy USB driver in the Linux kernel before 4.7 allows local users to cause a denial of service (memory consumption) via a crafted USB device that emulates many VFL_TYPE_SDR or VFL_TYPE_SUBDEV devices and performs many connect and disconnect operations. (CVE-2016-5400)
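The audit bug follows the classic "double fetch" shape: a value in user-controlled memory is read once to validate and again to use, and the user can change it in between. A minimal illustration of the pattern (invented names, not the kernel's actual code):

```python
# Minimal illustration of a "double fetch" (the CVE-2016-6136 pattern): a
# value is read twice from memory the other side controls; validation
# applies to the first read, the action to the second.

class UserBuffer:
    """Simulates user memory that another thread changes between reads."""
    def __init__(self, first, second):
        self.values = [first, second]
        self.fetches = 0

    def fetch(self):
        value = self.values[min(self.fetches, 1)]
        self.fetches += 1
        return value

def audit_log_arg(buf):
    """Buggy pattern: validate fetch #1, log fetch #2."""
    if '"' in buf.fetch():            # validation sees the innocent value
        return "rejected"
    return 'arg="%s"' % buf.fetch()   # logging sees the attacker's value

def audit_log_arg_fixed(buf):
    """Fixed pattern: fetch once, validate and use the same copy."""
    arg = buf.fetch()
    if '"' in arg:
        return "rejected"
    return 'arg="%s"' % arg

# The attacker smuggles a quote past the character-set check:
print(audit_log_arg(UserBuffer('safe', 'evil" injected')))        # corrupted log entry
print(audit_log_arg_fixed(UserBuffer('safe', 'evil" injected')))  # arg="safe"
```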

Alerts:
Oracle ELSA-2016-2574 kernel 2016-11-10
Mageia MGASA-2016-0364 kernel-tmb 2016-11-04
Red Hat RHSA-2016:2584-02 kernel-rt 2016-11-03
Red Hat RHSA-2016:2574-02 kernel 2016-11-03
Mageia MGASA-2016-0345 kernel 2016-10-18
Ubuntu USN-3097-2 linux-ti-omap4 2016-10-13
Ubuntu USN-3098-2 linux-lts-trusty 2016-10-10
Ubuntu USN-3097-1 kernel 2016-10-10
Ubuntu USN-3098-1 kernel 2016-10-10
Ubuntu USN-3084-4 linux-snapdragon 2016-09-19
Ubuntu USN-3084-3 linux-raspi2 2016-09-19
Ubuntu USN-3084-2 linux-lts-xenial 2016-09-19
Ubuntu USN-3084-1 kernel 2016-09-19
Debian-LTS DLA-609-1 kernel 2016-09-03
Debian DSA-3659-1 kernel 2016-09-04
Ubuntu USN-3070-3 linux-snapdragon 2016-08-30
Ubuntu USN-3070-2 linux-raspi2 2016-08-30
Ubuntu USN-3070-4 linux-lts-xenial 2016-08-30
Ubuntu USN-3070-1 kernel 2016-08-29
Fedora FEDORA-2016-754e4768d8 kernel 2016-08-08
Fedora FEDORA-2016-30e3636e79 kernel 2016-08-08
Scientific Linux SLSA-2016:2574-2 kernel 2016-12-14
Oracle ELSA-2016-3646 kernel 2.6.39 2016-11-21
Oracle ELSA-2016-3646 kernel 2.6.39 2016-11-21
Oracle ELSA-2016-3645 kernel 3.8.13 2016-11-21
Oracle ELSA-2016-3645 kernel 3.8.13 2016-11-21
Oracle ELSA-2016-3644 kernel 4.1.12 2016-11-21
Oracle ELSA-2016-3644 kernel 4.1.12 2016-11-21

Comments (none posted)

libreoffice: code execution

Package(s): libreoffice  CVE #(s): CVE-2016-1513
Created: August 5, 2016  Updated: August 10, 2016
Description:

From the Ubuntu advisory:

Yves Younan and Richard Johnson discovered that LibreOffice incorrectly handled presentation files. If a user were tricked into opening a specially crafted presentation file, a remote attacker could cause LibreOffice to crash, and possibly execute arbitrary code.

Alerts:
Debian-LTS DLA-591-1 libreoffice 2016-08-09
Ubuntu USN-3046-1 libreoffice 2016-08-04

Comments (none posted)

minimatch: denial of service

Package(s): nodejs010-nodejs-minimatch  CVE #(s): CVE-2016-1000023
Created: August 9, 2016  Updated: August 12, 2016
Description: From the Red Hat advisory:

A regular expression denial of service flaw was found in Minimatch. An attacker able to make an application using Minimatch perform matching on a specially crafted glob pattern could cause the application to consume an excessive amount of CPU. (CVE-2016-1000023)
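Such "ReDoS" flaws stem from regular expressions with nested quantifiers that backtrack exponentially. A generic sketch of the failure mode (the actual Minimatch pattern differs; this only shows the shape of the bug):

```python
# Generic catastrophic-backtracking sketch (the real Minimatch pattern
# differs; this only shows the failure mode behind "ReDoS" bugs).
import re
import time

pattern = re.compile(r'^(a+)+$')    # nested quantifiers: the classic shape

def match_time(n):
    s = 'a' * n + 'b'               # the trailing 'b' forces full backtracking
    start = time.perf_counter()
    assert pattern.match(s) is None
    return time.perf_counter() - start

# Each additional 'a' roughly doubles the number of backtracking states,
# so a short piece of input can pin the CPU for a long time.
for n in (10, 15, 20):
    print(n, format(match_time(n), '.4f'))
```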

Alerts:
Red Hat RHSA-2016:1605-01 Red Hat OpenShift Enterprise 2016-08-11
Red Hat RHSA-2016:1583-01 rh-nodejs4-nodejs-minimatch 2016-08-09
Red Hat RHSA-2016:1582-01 nodejs010-nodejs-minimatch 2016-08-09

Comments (none posted)

mongodb: two vulnerabilities

Package(s): mongodb  CVE #(s): CVE-2016-6494
Created: August 8, 2016  Updated: October 7, 2016
Description: From the Debian LTS advisory:

CVE-2016-6494: World-readable .dbshell history file

TEMP-0833087-C5410D: Bruteforcable challenge responses in unprotected logfile

Alerts:
Fedora FEDORA-2016-89060100d7 mongodb 2016-10-06
Fedora FEDORA-2016-4cedbd4308 mongodb 2016-10-03
Debian-LTS DLA-588-2 mongodb 2016-08-09
Debian-LTS DLA-588-1 mongodb 2016-08-08

Comments (none posted)

mupdf: denial of service

Package(s): mupdf  CVE #(s): CVE-2016-6525
Created: August 8, 2016  Updated: August 31, 2016
Description: From the Debian LTS advisory:

A flaw was discovered in the pdf_load_mesh_params() function allowing out-of-bounds write access to memory locations. With carefully crafted input, that could trigger a heap overflow, resulting in application crash or possibly having other unspecified impact.

Alerts:
Gentoo 201702-12 mupdf 2017-02-19
Mageia MGASA-2016-0286 mupdf 2016-08-31
Arch Linux ASA-201608-22 mupdf 2016-08-31
Debian DSA-3655-1 mupdf 2016-08-26
Debian-LTS DLA-589-1 mupdf 2016-08-08

Comments (none posted)

nodejs-tough-cookie: denial of service

Package(s): nodejs-tough-cookie  CVE #(s):
Created: August 9, 2016  Updated: October 3, 2016
Description: From the Node security advisory:

Versions 0.9.7 through 2.2.2 contain a vulnerable regular expression that, under certain conditions involving long strings of semicolons in the "Set-Cookie" header, causes the event loop to block for excessive amounts of time.

Alerts:
Fedora FEDORA-2016-286a8ec5b0 nodejs-tough-cookie 2016-10-01
Fedora FEDORA-2016-c0fd203d6e nodejs-tough-cookie 2016-08-09

Comments (none posted)

openntpd/busybox: denial of service

Package(s): openntpd busybox  CVE #(s): CVE-2016-6301
Created: August 9, 2016  Updated: January 2, 2017
Description: From the Mageia advisory:

The busybox NTP implementation doesn't check the NTP mode of packets received on the server port and responds to any packet with the right size. This includes responses from another NTP server. An attacker can send a packet with a spoofed source address in order to create an infinite loop of responses between two busybox NTP servers. Adding more packets to the loop increases the traffic between the servers until one of them has a fully loaded CPU and/or network.
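A toy model of the loop (invented names; real NTP packets are 48-byte UDP datagrams whose low three bits of the first byte carry the mode, with 3 meaning client and 4 meaning server):

```python
# Toy model of the busybox NTP response loop (CVE-2016-6301): a server
# that answers any correctly sized packet, including another server's
# responses, lets one spoofed packet bounce back and forth forever.

NTP_PACKET_SIZE = 48

def vulnerable_server(packet):
    """Replies to anything of the right size, even mode-4 (server) responses."""
    if len(packet) == NTP_PACKET_SIZE:
        return b'\x24' + packet[1:]   # mode-4 response, same size
    return None

def fixed_server(packet):
    """Checks the mode bits: only answers client (mode 3) requests."""
    if len(packet) == NTP_PACKET_SIZE and packet[0] & 0x07 == 3:
        return b'\x24' + packet[1:]
    return None

def bounce(server_a, server_b, spoofed_packet, max_rounds=1000):
    """Count how many times a single spoofed packet ping-pongs."""
    packet, rounds = spoofed_packet, 0
    while packet is not None and rounds < max_rounds:
        packet = (server_a if rounds % 2 == 0 else server_b)(packet)
        rounds += 1
    return rounds

spoofed = b'\x23' + b'\x00' * 47      # mode-3 request, source address forged
print(bounce(vulnerable_server, vulnerable_server, spoofed))  # 1000: endless loop
print(bounce(fixed_server, fixed_server, spoofed))            # 2: one reply, then dropped
```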

Alerts:
Mageia MGASA-2016-0277 openntpd/busybox 2016-08-09
Gentoo 201701-05 busybox 2017-01-01

Comments (none posted)

openssh: denial of service

Package(s): openssh  CVE #(s): CVE-2016-6515
Created: August 10, 2016  Updated: August 15, 2016
Description: From the CVE entry:

The auth_password function in auth-passwd.c in sshd in OpenSSH before 7.3 does not limit password lengths for password authentication, which allows remote attackers to cause a denial of service (crypt CPU consumption) via a long string.

Alerts:
openSUSE openSUSE-SU-2016:2339-1 openssh 2016-09-19
Mageia MGASA-2016-0280 openssh 2016-08-31
Ubuntu USN-3061-1 openssh 2016-08-15
Debian-LTS DLA-594-1 openssh 2016-08-12
Fedora FEDORA-2016-4a3debc3a6 openssh 2016-08-10

Comments (none posted)

pbuilder: file overwrite

Package(s): pbuilder  CVE #(s):
Created: August 4, 2016  Updated: August 10, 2016
Description: Due to a problem with the "eatmydata" option for pbuilder, files that should not be overwritten can be. More information is available in the bugs.debian.org entry.
Alerts:
Fedora FEDORA-2016-2e20730676 pbuilder 2016-08-04
Fedora FEDORA-2016-bdb86fbc7d pbuilder 2016-08-03

Comments (none posted)

pdns: denial of service

Package(s): pdns  CVE #(s): CVE-2016-6172
Created: August 9, 2016  Updated: September 12, 2016
Description: From the Red Hat bugzilla:

It was found that PowerDNS does not implement reasonable restrictions for zone sizes. This allows an explicitly configured primary DNS server for a zone to crash a secondary DNS server, affecting service of other zones hosted on the same secondary server.

Alerts:
Mageia MGASA-2016-0324 pdns 2016-09-28
Debian-LTS DLA-627-1 pdns 2016-09-18
Debian DSA-3664-1 pdns 2016-09-10
openSUSE openSUSE-SU-2016:2116-1 pdns 2016-08-19
Fedora FEDORA-2016-7098bdc536 pdns 2016-08-08

Comments (none posted)

python-autobahn: insecure origin validation

Package(s): python-autobahn  CVE #(s):
Created: August 5, 2016  Updated: August 10, 2016
Description:

From the Red Hat bug report:

Autobahn|Python incorrectly checks the Origin header when the 'allowedOrigins' value is set. This can allow third parties to execute legitimate requests for WAMP WebSocket requests against an Autobahn|Python/Crossbar.io server within another browser's context.

Alerts:
Fedora FEDORA-2016-acda4281c9 python-autobahn 2016-08-04

Comments (none posted)

squid: code execution

Package(s): squid  CVE #(s): CVE-2016-5408
Created: August 4, 2016  Updated: August 10, 2016
Description: From the Red Hat advisory:

It was found that the fix for CVE-2016-4051 released via RHSA-2016:1138 did not properly prevent the stack overflow in the munge_other_line() function. A remote attacker could send specially crafted data to the Squid proxy, which would exploit the cachemgr CGI utility, possibly triggering execution of arbitrary code. (CVE-2016-5408)

Alerts:
Oracle ELSA-2016-1573 squid 2016-08-04
Scientific Linux SLSA-2016:1573-1 squid 2016-08-04
CentOS CESA-2016:1573 squid 2016-08-04
Red Hat RHSA-2016:1573-01 squid 2016-08-04

Comments (none posted)

stunnel: two vulnerabilities

Package(s): stunnel  CVE #(s):
Created: August 8, 2016  Updated: August 10, 2016
Description: From the Slackware advisory:

patches/packages/stunnel-5.35-i586-1_slack14.2.txz: Upgraded.

Fixes security issues:

Fixed malfunctioning "verify = 4".

Fixed incorrectly enforced client certificate requests.

Alerts:
Slackware SSA:2016-219-04 stunnel 2016-08-06

Comments (none posted)

wireshark: denial of service

Package(s): wireshark  CVE #(s): CVE-2016-6504
Created: August 8, 2016  Updated: August 10, 2016
Description: From the openSUSE bugzilla:

It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Affects 1.12.0 to 1.12.12, fixed in 1.12.13.

Alerts:
Debian-LTS DLA-595-1 wireshark 2016-08-15
Debian DSA-3648-1 wireshark 2016-08-12
openSUSE openSUSE-SU-2016:1974-1 wireshark 2016-08-06

Comments (none posted)

wireshark: denial of service

Package(s): wireshark  CVE #(s): CVE-2016-6512 CVE-2016-6513
Created: August 9, 2016  Updated: August 10, 2016
Description: From the Wireshark advisories:

wnpa-sec-2016-48: The MMSE, WAP, WBXML, and WSP dissectors could go into an infinite loop. Discovered by Antti Levomäki. It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. (CVE-2016-6512)

wnpa-sec-2016-49: The WBXML dissector could crash. Discovered by Antti Levomäki. It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. (CVE-2016-6513)

See the oss-security CVE assignment for further information.

Alerts:
Arch Linux ASA-201608-20 wireshark-cli 2016-08-27
Mageia MGASA-2016-0275 wireshark 2016-08-03

Comments (none posted)

xen: denial of service

Package(s): xen  CVE #(s): CVE-2016-6259
Created: August 8, 2016  Updated: August 10, 2016
Description: From the Red Hat bugzilla:

Supervisor Mode Access Prevention is a hardware feature designed to make an Operating System more robust, by raising a pagefault rather than accidentally following a pointer into userspace. However, legitimate accesses into userspace require whitelisting, and the exception delivery mechanism for 32bit PV guests wasn't whitelisted.

A malicious 32-bit PV guest kernel can trigger a safety check, crashing the hypervisor and causing a denial of service to other VMs on the host.

Alerts:
openSUSE openSUSE-SU-2016:2494-1 xen 2016-10-11
SUSE SUSE-SU-2016:2473-1 xen 2016-10-07
SUSE SUSE-SU-2016:2093-1 xen 2016-08-17
Fedora FEDORA-2016-0049aa6e5d xen 2016-08-08
Fedora FEDORA-2016-01cc766201 xen 2016-08-05
Mageia MGASA-2017-0012 xen 2017-01-09

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.8-rc1, released on August 7. "This seems to be building up to be one of the bigger releases lately, but let's see how it all ends up. The merge window has been fairly normal, although the patch itself looks somewhat unusual: over 20% of the patch is documentation updates, due to conversion of the drm and media documentation from docbook to the Sphinx doc format."

Stable updates: the 4.6.6, 4.4.17, and 3.14.75 updates were released on August 10.

Comments (none posted)

Quotes of the week

Deferred probe is probably the best thing that ever happened for the quality of kernel error handling.
Mark Brown

/me hands Andy a time machine to go fix this properly, before so much silicon ships.
Borislav Petkov should hand out a few more of those.

Comments (none posted)

Kernel development news

The end of the 4.8 merge window

By Jonathan Corbet
August 10, 2016
By the time Linus released 4.8-rc1 and closed the merge window for this development cycle, 11,618 non-merge changesets had found their way into the mainline repository. That suggests that 4.8 will be a relatively busy development cycle, but not busy enough to break any records. Just over 1,000 of those changesets were pulled after last week's summary was written; some of the more interesting changes in that last set include:

  • The Ceph filesystem now has full RADOS namespace support. This feature has been partially supported since 4.5; the final pieces were merged for 4.8.

  • The OrangeFS filesystem has better in-kernel caching support; see the pull-request text for more information.

  • The new printk.devkmsg command-line parameter can be used to control the ability of user space to send data to the kernel log via /dev/kmsg. The default setting of ratelimit applies rate limiting to data from user space. Other possibilities are on (allowing unlimited logging, as older kernels did) and off to disable logging from user space entirely.

  • M68k binaries built for systems without a memory-management unit can now be run on ordinary, MMU-equipped systems as well. That will help developers of such applications debug them on more powerful systems.

  • The new "software RDMA over Ethernet" driver allows the use of InfiniBand remote DMA protocols over the kernel's network stack.

  • Reverse-mapping support has been added to the XFS filesystem; this feature allows the filesystem code to track the ownership of every block on a storage device. Reverse mapping in its current form is not hugely useful, but it will be a core part of a set of intended XFS features for future development cycles; these features include reflink(), copy-on-write data, data deduplication, much-improved bad block reporting, and better recovery from filesystem damage. As Dave Chinner put it: "There's a lot of new stuff coming along in the next couple of cycles, and it all builds in the rmap infrastructure."

  • The architecture emulation containers feature has been merged; it allows containers to run code built for an architecture that differs from that of the host system.

  • The post-init read-only memory kernel-hardening feature now works with data in loadable modules as well.

  • The hardened usercopy patches were merged after the 4.8-rc1 release. This feature adds more checking to the kernel functions that copy data between kernel and user space with the idea of making them harder to exploit.

  • New hardware support includes: RapidIO channelized mailbox controllers, IDT RXS Gen.3 SRIO switches, IBM POWER virtual SCSI target servers, Maxim MAX6916 SPI realtime clocks, Silead I2C touchscreens, SiS 9200 family I2C touchscreens, Broadcom iProc PWM controllers, STMPE expander PWM controllers, ChromeOS EC PWM controllers, and J-Core J2 processors.

One thing that did not make it this time around, despite being pushed during the merge window, is the "latent entropy" GCC plugin. The plugin instruments various kernel functions in an attempt to generate some entropy from randomness in how the hardware responds, especially during that period early in the boot process when entropy may be in short supply. Linus was unimpressed by the pull request and unconvinced by the techniques used in the plugin itself. He has indicated that he might eventually take the plugin, but not right away, so this one looks like it will wait until the 4.9 development cycle.


If the usual schedule holds, the final 4.8 release will come out on September 25, which will place the 4.9 merge window during the Kernel Recipes and LinuxCon Europe conferences. That will thus be a busy time, but, between now and then, the work of testing this kernel and fixing the bugs needs to be done.

Comments (2 posted)

Four new Android privilege escalations

By Jake Edge
August 10, 2016

The "QuadRooter" vulnerabilities are currently making lots of headlines, at least partly because they could impact up to 900 million Android devices. There are four separate bugs, each with its own CVE number. Interestingly, all are found in code that lives outside of the mainline kernel—but is obviously shipped in a lot of devices.

QuadRooter, which was announced with great fanfare by Check Point Software Technologies, consists of privilege escalation vulnerabilities that could be used by malicious apps to take control of an Android device—and, of course, the personal data stored on it. The four bugs were found in drivers for Qualcomm system-on-chips (SoCs) that are found in many Android phone models, including the flagship Google Nexus 5X, 6, and 6P handsets. The bugs are serious, but users can mitigate the risk somewhat by avoiding dubious apps.

The bugs are detailed in a report [registration required] from Check Point. Note that unchecking the "please send me email" box on the registration form does not actually seem to stop Check Point from sending emails. The vulnerabilities are found in three different subsystems of the Qualcomm kernel: the ipc_router interprocess communication (IPC) module, the ashmem shared-memory allocation subsystem, and the kernel graphics support layer (kgsl) that is used to render graphics provided by user-space programs, which accounts for two of the bugs. None of those modules is in the mainline kernel; a version of ashmem does live in the staging tree, but it does not contain the function that caused the vulnerability.

For the most part, the bugs themselves are fairly standard kinds of flaws. The two in kgsl are use-after-free vulnerabilities; the ashmem bug provides a way to get attacker-controlled data into the kernel; and the ipc_router bug is a memory corruption that can lead to code execution. It is noteworthy that, because the code is out of the mainline, it probably didn't get the attention, testing, fuzzing, and review that it might otherwise have received—from the kernel development community, anyway. Given its prevalence in Android devices, though, it did garner some amount of attention, from Check Point, at least, and perhaps from others who are far less likely to report on what they found.

A look inside the flaws is instructive. CVE-2016-2059 is the ipc_router code-execution bug. The module provides a new address family (AF_MSM_IPC) that can be used to create sockets. Users can convert "client" sockets to "control" sockets by way of an ioctl() call. Unfortunately, the conversion function locks the wrong list, which allows (malicious) callers to corrupt a different list. Elements on that list can be made to point to freed memory, which the attacker can control using "heap spraying".

The report goes into some detail on how that corruption can be used to call arbitrary kernel functions with attacker-controlled parameters, which makes for interesting reading. But the upshot is clear: root privileges can be gained and SELinux disabled, which gives the attacker complete control over the device and its contents.

The first of the kgsl bugs (CVE-2016-2503) is caused by a race condition in the function used to destroy a "syncsource" object in the kgsl_sync subsystem, which synchronizes graphics data between user space and the kernel. If two or more threads call the function with the same syncsource, the reference count can be decremented incorrectly, leading to a negative count. That will allow attackers to control the memory contents of the object that the kernel still thinks is in use, which can then be used to execute code of the attacker's choosing. The recent reference count hardening work might help avoid reference-count underflows like this.
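The shape of that race can be modeled in a few lines of Python. This is a deliberately simplified illustration (the real kgsl code is C in a vendor kernel), showing how two callers that both observe a stale, positive reference count can drive it negative:

```python
# Simplified model of a check-then-decrement reference-count race.
# Two "threads" both read refcount == 1 before either decrement lands,
# so the object is released twice and the count underflows.

class SyncSource:
    def __init__(self):
        self.refcount = 1
        self.freed = False

    def destroy_racy(self, observed_count):
        # 'observed_count' stands in for the stale value a racing
        # thread read before the other thread's decrement happened.
        if observed_count > 0:
            self.refcount -= 1
        if self.refcount <= 0:
            self.freed = True  # object released (here, possibly twice)

obj = SyncSource()
stale = obj.refcount       # both threads read the count as 1
obj.destroy_racy(stale)    # thread A: refcount 1 -> 0, object freed
obj.destroy_racy(stale)    # thread B: refcount 0 -> -1, "freed" again
print(obj.refcount)        # -1: the underflow that attackers exploit
```

An atomic decrement-and-test (or the hardened reference-count primitives mentioned above) closes exactly this window.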

The second kgsl use-after-free (CVE-2016-2504) vulnerability is even easier to trigger. There is an ioctl() that allows users (or attackers) to directly free a specific kgsl_mem_entry object by its ID number, without any access control, which means that another thread can free the object while the kernel still has a reference to this newly freed object. The usual use-after-free games can be played at that point.

The bug (CVE-2016-5340) in ashmem, which is a memory allocator that allows processes to easily share memory, is a bit different. The Qualcomm version of ashmem has diverged from the one in staging; it provides some new functions to access the struct file behind a file descriptor, as long as that file is an ashmem shared-memory file. But the is_ashmem_file() function simply tests whether the file's name is /ashmem, the name used by the subsystem. However, a perhaps-obscure, deprecated Android feature, meant to allow large files to accompany an app's .apk file, also allows apps to mount a filesystem with an ashmem entry in its root:

Attackers can use a deprecated feature of Android, called Obb to create a file named ashmem on top of a file system. With this feature, an attacker can mount their own file system, creating a file in their root directory called "ashmem."

By sending the fd of this file to the get_ashmem_file function, an attacker can trick the system to think that the file they created is actually an ashmem file, while in reality, it can be any file.

Thus a malicious app could fool the ashmem subsystem into using attacker-controlled data in what it thinks is a file with contents that are normally completely under its control.
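The underlying mistake, identifying a file by its name rather than by its backing device or inode, is easy to demonstrate outside the kernel. This is an illustrative Python stand-in, not the actual Qualcomm C code; the directory paths are placeholders:

```python
# Simplified model of the is_ashmem_file() flaw: a check based only on
# the file's name accepts any file that happens to be called "ashmem".
import os
import tempfile

def is_ashmem_file(path):
    # The flawed check: compare the name, not the backing inode/device.
    return os.path.basename(path) == "ashmem"

real_dir = tempfile.mkdtemp()       # stand-in for the kernel-provided path
attacker_dir = tempfile.mkdtemp()   # stand-in for an attacker-mounted Obb filesystem

real = os.path.join(real_dir, "ashmem")
fake = os.path.join(attacker_dir, "ashmem")
with open(fake, "w") as f:
    f.write("attacker-controlled contents")

print(is_ashmem_file(real))   # True
print(is_ashmem_file(fake))   # True -- an arbitrary file passes the check
```

A robust check would compare the file's operations table or backing device, which an attacker cannot forge by simply naming a file "ashmem".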

Check Point has created a QuadRooter Scanner app that is available in the Google Play store. It scans an Android device and reports which, if any, of the vulnerabilities affect it. There is some skepticism about how good of a job it actually does, however. On my Nexus 6P, the scanner reports that the phone is vulnerable to CVE-2016-2504 and CVE-2016-5340, which were not reported as fixed in the July Android Security Bulletin—the phone is updated with the July 5 update.

That would seem to indicate that a recently purchased flagship phone is still vulnerable to two of the bugs. The August bulletin does mention a fix for CVE-2016-2504, but there is no mention of CVE-2016-5340; in any case, that update has not yet been made available over Google's Project Fi carrier. According to the report, Qualcomm was informed about the bugs in April and it confirmed that it has released updated code to OEMs.

But, as we have seen rather often in the Android world, those fixes are taking some time to make their way out to users. Even users of Google's phones and network are awaiting some fixes. Other carriers and device makers tend to lag even further behind—or fail to ever get updates out at all. That leaves lots of phone owners in a tricky spot.

Users who are not running random side-loaded apps are likely to be less vulnerable to problems from QuadRooter, though. That is not to say it is impossible for a malicious app to slip into the Google Play store, but it is definitely less probable. The source of these kinds of malicious apps will be some dodgy app store that promises to deliver the latest exciting game or other app. Users of vulnerable phones should steer clear of such sites and generally try to be alert to odd behavior. That's good advice even well after QuadRooter is fixed on phones, as there are undoubtedly other, similar bugs lurking out there, both in the mainline and various vendor kernels.

Comments (none posted)

The NET policy mechanism

By Jonathan Corbet
August 10, 2016
One of the heuristics that guide kernel development says that, whenever possible, the addition of tuning knobs should be resisted. Such knobs are seen as the developer giving up and pushing a tuning problem onto users; instead, the kernel should, whenever possible, tune itself to suit the current workload. An attempt to reduce the user's tuning responsibilities for the networking subsystem is running into resistance, though.

Arguably, no part of the kernel offers more opportunities for user tuning than networking. Queuing disciplines and traffic control allow the creation of elaborate, in-kernel routing for packets. Interrupt affinities and device polling can be tweaked, there are numerous congestion-control algorithms to choose between, queue lengths and packet-ring sizes can be played with, and so on. There is also a whole set of policies and knobs that can be set within the network interfaces themselves. The result is a subsystem with a great deal of flexibility, but also one that is complex and difficult for most people to tune properly. Thus, many administrators do not even try if they can avoid it. Unfortunately, they often cannot avoid it; as Kan Liang noted in the introduction to his kernel NET policy patch set, "network performance is not good with default system settings."

That patch set introduces a new high-level policy mechanism; the administrator can use it to describe the sort of workload that the networking subsystem should be tuned for. The options are:

  • CPU: the most important factor is reducing the amount of CPU time needed to keep up with the network.

  • Latency: the latency of network communications should be kept to a minimum.

  • Throughput: the goal is to push the maximum amount of data through the network.

These policies may be set at a per-interface level, in which case they apply to all communications flowing through the affected interface. Policies can also be set on a per-task and per-socket level, though, allowing different users to operate under different policies. In this case, the interface-level policy must be set to the special "mixed" option; if the interface is given any other policy, all communications through that interface must match that policy.

Exactly how these policies are implemented is not well documented in the patch set; that is not helped by the fact that, in the current version, there are no driver-level patches implementing the new policy-setting hooks. That support can be seen in a previous version of the patch set; it was seemingly removed in response to complaints about the length of the series as a whole. Therein, one sees that much of the functionality is dependent on Intel's "Ethernet Flow Director" technology, though Liang maintains that it can be made to work on any adapter that supports loadable flow-direction rules — as many high-end adapters do.

One aspect of the policy implementation is interrupt mitigation. Most high-speed network adapters can handle vast numbers (as in millions) of packets per second; if they generated interrupts for every packet sent or received, the system would be swamped. So these adapters support various mechanisms for reducing the number of interrupts delivered. This is where the policy comes in: reducing the number of interrupts raised by the interface can increase the amount of time it takes to process a packet, thus increasing latency. So a latency-sensitive policy will tolerate more interrupts, while a CPU-conserving policy will reduce interrupts to a minimum.

Multi-queue devices (the only type supported by this patch set) can steer packets to specific queues and vary their interrupt behavior for each. Multiple queues can be used to support policy goals in other ways as well; throughput-oriented queues can be longer and run at lower priority, while latency-oriented queues should be high-priority and short. So the other aspect of the NET policy patches is queue-selection logic that depends on the policy attached to each packet. When a policy is established, the queues (and their CPU/interrupt affinities) are set up automatically, so the administrator need not deal with that sort of complexity.
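The mapping from policy to queue parameters can be pictured with a small sketch. This is purely an illustrative model of the tradeoffs described above; the parameter names and values are invented and do not reflect the patch set's actual interface:

```python
# Illustrative model: how a high-level policy choice might translate
# into per-queue settings. Latency wants short, high-priority queues
# and frequent interrupts; CPU conservation wants the opposite.
QUEUE_PROFILES = {
    "latency":    {"queue_len": 128,  "irq_coalesce_usecs": 0,   "priority": "high"},
    "throughput": {"queue_len": 4096, "irq_coalesce_usecs": 50,  "priority": "low"},
    "cpu":        {"queue_len": 1024, "irq_coalesce_usecs": 200, "priority": "normal"},
}

def configure_queue(policy):
    """Return the (hypothetical) queue settings a given policy selects."""
    return QUEUE_PROFILES[policy]

print(configure_queue("latency")["irq_coalesce_usecs"])  # 0: take every interrupt
print(configure_queue("cpu")["irq_coalesce_usecs"])      # 200: batch aggressively
```

The point of the NET policy work is that the kernel, not the administrator, would derive settings of this sort (along with CPU and interrupt affinities) from the single policy choice.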

It will surprise few readers to learn that a number of networking developers expressed concerns about this patch set. Policy implementation in the kernel is generally something that developers try to avoid; the kernel is meant to implement mechanism, leaving policy decisions to others. Given that most of what the NET policy patches do can already be done from user space, some questioned why the remaining bits weren't added to the API so that policy selection could be done outside of the kernel.

The answer to this question, as found in the cover letter to the series, goes something like this. User space does not have access to the same level of information that the kernel has, and the information that is available can be stale and subject to race conditions. If you do push these decisions out to user space, you'll add more context switches and slow down the system as a whole. And only the kernel can manage competing requests from multiple users in a way that's fair to all. The networking developers understand these arguments, but not everybody seems convinced that solving the problem in user space is impossible.

Also, perhaps inevitably, it was suggested that, rather than coding queue selection into the policy code, that decision could be made by an eBPF program loaded from user space. Using eBPF would certainly add flexibility to the system, but it seems unlikely to make the task of policy administration easier.

As things stand now, it seems clear that quite a bit more effort will be required to convince the network development community that the NET policy patches are the best solution to the problem. But the problem itself is real; as Stephen Hemminger put it, "network tuning is hard, most people get it wrong, and nobody agrees on the right answer." Creating a set of canned policies in the kernel may not be the best solution to the problem, but the real proof of that would be to come up with a better solution, and those seem to be in short supply at the moment.

Comments (none posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.8-rc1 Aug 07
Greg KH Linux 4.6.6 Aug 10
Sebastian Andrzej Siewior 4.6.5-rt10 ?
Greg KH Linux 4.4.17 Aug 10
Greg KH Linux 3.14.75 Aug 10

Architecture-specific

Core kernel code

Development tools

Device drivers

Device driver infrastructure

Documentation

Filesystems and block I/O

Memory management

Networking

kan.liang@intel.com Kernel NET policy ?

Security-related

Virtualization and containers

Miscellaneous

Stephen Hemminger iproute2 4.7.0 Aug 08

Page editor: Jonathan Corbet

Distributions

Debian to shift to a modern GnuPG

By Nathan Willis
August 10, 2016

The GnuPG project maintains several active branches that differ in algorithm support, API, and other important details. In most cases, multiple branches can coexist on a single system, but only one provides the default /usr/bin/gpg executable in any given configuration. Debian recently announced a decision to switch its default gpg from the "classic" branch of GnuPG to the "modern" branch, which will trigger several other changes for users and package maintainers.

GnuPG is currently developed in the "classic" (1.4) branch, the "stable" (2.0) branch, and the "modern" (2.1) branch. The classic branch provides a monolithic gpg binary, while both the stable and modern branches are modular. In particular, the cryptographic functions in the newer branches are provided by libgcrypt, and passphrases for the active session are managed by the gnupg-agent daemon rather than by the gpg binary directly. There is also a separate S/MIME module available for newer branches, while GnuPG classic lacks S/MIME support. Furthermore, while classic and stable support the same set of encryption and hash functions, GnuPG modern adds support for several new algorithms (most notably elliptic curve cryptography).

Thus, there are several advantages to moving to the newer branches, although the GnuPG project still suggests the classic branch as a good choice for servers and embedded systems. On the other hand, GnuPG modern introduces a change to the way keyrings are stored on disk, which could potentially cause migration pains if care is not taken. Specifically, in the earlier GnuPG branches, a user's private keys were stored in a separate file (secring.gpg) from their public keys (in pubring.gpg). But the public half of a user's own key pair was stored in both secring.gpg and pubring.gpg, meaning that steps were needed to keep the two in sync. This is clearly less than ideal.

In GnuPG modern, the keys are all stored together (although in an improved format that is easier to parse) and the gpg-agent program simply keeps track of which ones include a private component. The first time GnuPG modern is run on a system with the old-style keyring files, it performs a one-time conversion to the new format. The conversion is painless, unless some package unwisely makes assumptions about the way the ~/.gnupg directory is organized. But it is one-way; users wanting to revert to the old format should expect to do a significant amount of work.

Debian has always used GnuPG classic as its default, while allowing stable to be installed in parallel: classic provided the /usr/bin/gpg binary and stable provided /usr/bin/gpg2. Many other distributions have moved on from GnuPG classic already, although providing separate packages for both gpg and gpg2 seems to be the approach taken by Fedora and several others. Even in comparison to those distributions, however, Debian's decision to stick with stable instead of modern garnered periodic complaints. On August 3, Daniel Kahn Gillmor announced an upcoming change in an entry published on the Debian Administration blog. The distribution will soon begin packaging GnuPG modern under the "gnupg" package name, drop GnuPG stable, and repackage GnuPG classic as "gnupg1."

The /usr/bin/gpg binary will be provided by GnuPG modern, and /usr/bin/gpg2 will be a symbolic link to it, in order to prevent breaking any existing user scripts written explicitly for the newer GnuPG. And the old package will still be available for those who need it. There are some plausible reasons to need the old package, such as the ability to access archives encrypted using old, unsupported encryption algorithms or obsolete key formats (e.g., GnuPG modern drops support for the insecure and 20-year-old PGPv3 key format).

Gillmor cited several advantages to the change. Access to newer key algorithms and GnuPG modern's improved key storage are clear benefits. In addition, GnuPG modern uses a separate daemon (dirmngr) to query keyservers. The dirmngr process is persistent, so it can monitor the availability of keyservers (eliminating time-out problems caused by querying a keyserver that has gone offline). The daemon can also route all queries over Tor and it can use the sks-keyservers.net Certificate Authority (CA) to secure key requests with TLS without resorting to placing trust in a public CA. While it is possible to configure GnuPG classic for use with Tor or the sks-keyservers.net CA, built-in support is more convenient.
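For those who want the Tor and TLS behavior described above, a dirmngr configuration along these lines would enable it. The options shown (use-tor, keyserver, hkp-cacert) are real dirmngr options in GnuPG 2.1, but the certificate path is an assumption that may differ between distributions:

```conf
# ~/.gnupg/dirmngr.conf (GnuPG 2.1): route keyserver queries over Tor
# and pin the sks-keyservers.net pool CA for TLS-protected requests.
use-tor
keyserver hkps://hkps.pool.sks-keyservers.net
# Path to the pool CA certificate; location varies by distribution:
hkp-cacert /usr/share/gnupg/sks-keyservers.netCA.pem
```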

Gillmor also pointed out that the human-readable output of GnuPG modern's --list-keys command is improved over classic's output. The newer branch no longer displays short key IDs (which were called out for the possibility of collisions in June), but does list full key fingerprints and the user-ID validity values (e.g., ultimate, fully, or marginal), as well as the flags that indicate which subkeys are used for signing, authentication, and so on.

Impact

For end users, the switch to the new branch will likely only be noticeable in a few situations—and perhaps only if one is looking carefully. For instance, the gpg-agent process will prompt the user for the passphrase to unlock a key, rather than the gpg process, but the workflow itself will not be altered otherwise. Users who have existing keyrings on their machines will have those keyrings automatically updated to the new storage format the first time that they run GnuPG modern but, again, the transition is transparent enough that it is not likely to be noticed.

That said, there will be one immediately noticeable change for users on non-English systems, since the updated Debian packages split out the internationalization files into a separate package called gnupg-l10n.

On the other hand, the change will likely mean some work for Debian package maintainers, depending on how their packages use or depend on the old GnuPG. A number of packages declare a Depends: gnupg relationship in their control files but only use GnuPG to check signatures. For those packages, Gillmor suggests changing the dependency to the standalone signature-checker gpgv instead. This is a stripped-down program built from the GnuPG sources; Debian provides it as a separate package for convenience.

Packages that try to parse the contents of a user's ~/.gnupg/ directory are likely to break if they expect to find the old storage format. However, attempting to directly read that data is likely to be a bad idea anyway; GnuPG provides functions to access the keyring, and bypassing the official commands is of questionable wisdom.

Somewhat more defensible would be for a package to try parsing the human-readable output of the gpg command-line tool. Such packages will also encounter trouble after the changeover, because of the change in GnuPG modern's output format. But, Gillmor noted in the announcement, reliance on the human-readable output is a mistake anyway. GnuPG can produce colon-separated machine-readable output with the --with-colons switch; that output is easier to parse and is not changing format with the move to the new branch.
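Parsing the colon-delimited output is straightforward. The sketch below uses a hand-written sample record purely for illustration (the real field layout is documented in GnuPG's doc/DETAILS file): field 1 of each record is its type, field 2 the validity, field 3 the key length, field 5 the key ID, and, for "fpr" records, field 10 holds the fingerprint:

```python
# Parse (illustrative) `gpg --with-colons --list-keys` output.
sample = """pub:u:4096:1:0123456789ABCDEF:1404110415:::u:::scESC:
fpr:::::::::0123456789ABCDEF0123456789ABCDEF01234567:
uid:u::::1404110415::AAAA::Example User <user@example.org>::
"""

keys = []
for line in sample.splitlines():
    fields = line.split(":")
    if fields[0] == "pub":
        keys.append({"validity": fields[1],       # e.g. "u" for ultimate
                     "bits": int(fields[2]),
                     "keyid": fields[4]})
    elif fields[0] == "fpr" and keys:
        keys[-1]["fingerprint"] = fields[9]       # field 10, 0-indexed as 9

print(keys[0]["bits"])          # 4096
print(keys[0]["fingerprint"])   # the full 40-character fingerprint
```

Because this format is stable across the branch change, scripts written against it survive the classic-to-modern transition, unlike scripts that scrape the human-readable listing.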

The announcement also notes that, although the gpg2 binary will exist as a symbolic link to gpg for now, it may go away some time in the future, so all package maintainers would be wise to examine references in their code and update accordingly.

The updated gnupg package and newly renamed gnupg1 package are currently in Debian experimental. A few bugs have popped up so far, but they have been addressed; once the packages appear stable, they will be added to Debian unstable, the next step along the path to eventual inclusion in a stable release.

Comments (6 posted)

Brief items

Distribution quotes of the week

All of these questions are where Matthew Miller pulls out a clip from a dinosaur movie eating a lawyer on a toilet or similar comedic point. Why? Because raptors live here and will eat you if you do not have a proper escape policy.
-- Stephen John Smoogen

[debian-private] is Debian's archived online version of water-cooler talk. Sometimes, historically significant things happen around the water-cooler, but most of the time…
-- David Kalnischkies

Comments (none posted)

Ubuntu 14.04.5 LTS released

The Ubuntu team has announced the release of Ubuntu 14.04.5 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavors of Ubuntu with long-term support. "We have expanded our hardware enablement offering since 12.04, and with 14.04.5, this point release contains an updated kernel and X stack for new installations to support new hardware across all our supported architectures, not just x86."

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian LTS default-java switch to OpenJDK 7 - Icedtea plugin

The default Java version in Debian LTS Wheezy has been bumped to Java 7, as Java 6 could no longer be supported. "To follow this change, the icedtea-plugin package has been updated to depend on icedtea-7-plugin rather than icedtea-6-plugin. icedtea-6-plugin is unsupported as it depends on Java 6."

Full Story (comments: none)

Fedora

Fedora Account System (FAS) security issue

A vulnerability was identified and fixed in FAS. "The Fedora Infrastructure team identified a serious vulnerability in the Fedora Account System (FAS) web application. This flaw would allow a specifically formatted HTTP request to be authenticated as any requested user. The flaw was caused by a logic problem wherein the FAS web application would accept client certificates that were not intended to be supported. If the authenticated user had appropriate privileges, the attacker would then be able to add, edit, or remove user or group information." According to the team investigating the issue, they don't believe the flaw has been exploited.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Bedrock Linux gathers disparate distros under one umbrella (InfoWorld)

InfoWorld takes a look at Bedrock Linux, an experimental distribution that makes it possible to use software from other, mutually incompatible Linux distributions, all under one roof. "Bedrock Linux uses virtual file systems to map the contents of various distributions into each other. The setup process involves installing one of any number of common distributions, then "hijacking" it to turn it into a Bedrock Linux installation. Be prepared for some heavy lifting, though. Right now, the setup process involves compiling Bedrock Linux's userland from scratch in the base distribution, then adding other distributions. Specific scripts exists for Debian-based distros (such as Ubuntu), Arch Linux, Yum-based distros (Fedora, CentOS, OpenSuse), Gentoo Linux, and a number of others, but in theory, any Linux distribution can be added."

Comments (none posted)

Copperhead OS: The startup that wants to solve Android’s woeful security (Ars Technica)

Ars Technica takes a look at Copperhead OS. "Copperhead OS, a two-man team based in Toronto, ships a hardened version of Android that aims to integrate Grsecurity and PaX into their distribution. Their OS also includes numerous security enhancements, including a port of OpenBSD’s malloc implementation, compiler hardening, enhanced SELinux policies, and function pointer protection in libc. Unfortunately for security nuts, Copperhead currently only supports Nexus devices. Google's Android security team have accepted many of Copperhead's patches into their upstream Android Open Source Project (AOSP) code base. But a majority of Copperhead's security enhancements are not likely ever to reach beyond its small but growing user base, because of performance trade-offs or compatibility issues." Copperhead ships with F-Droid installed by default, but without Google Play; LWN reviewed this distribution in February.

Comments (none posted)

Replicant 6.0 early work, upstream work and F-Droid issue

The Replicant blog reports that Replicant is being updated from Android 4.2 to Android 6.0 by Wolfgang Wiedmeyer. Among many other improvements, Replicant 6.0 should bring full device encryption and SELinux support. The F-Droid issue centers around the discovery of software that does not comply with the GNU Free System Distribution Guidelines in the F-Droid repository. "While the list of such anti-features is displayed in red when selecting an application in F-Droid, applications with anti-features are still listed aside compliant ones. This is also quite confusing since free software isn’t expected to contain such anti-features in the first place."

Comments (3 posted)

Page editor: Rebecca Sobol

Development

Keysafe, a cloud-based key backup proposal

By Nathan Willis
August 10, 2016

In recent weeks, we have looked at the use of smartcards to securely generate and store cryptographic material, such as the secret half of OpenPGP key pairs. Coincidentally, Joey Hess recently shared his thoughts on an alternative approach to managing secret keys that he calls keysafe. Hess's approach divides the key into shards that are subsequently stored on multiple, independent cloud servers. At present, it is only a proposal, but it is one that he has asked the community to consider and weigh in on.

Hess announced keysafe on his blog in an August 5 post. The impetus, he said, was to overcome users' fear that they would misplace a key and thereby lose access to their data:

I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc.

So Hess set out to define not just an algorithm for safely storing key material on remote servers, but one that could effectively automate the process, hiding complexity from the user. The proposal is up for comment from the rest of the community.

In brief, keysafe would allow users to store a secret (such as an OpenPGP secret key, but perhaps other data types as well) by first encrypting the key and a checksum (or, perhaps, hash), splitting the result into three shards with the Shamir's Secret Sharing algorithm, and generating a unique identifier for each shard. The shards would be uploaded to separate servers through a Tor hidden service endpoint, using only the identifiers as references. In other words, the shards themselves are anonymous, and the servers would not store any user-identifying information with them.

But the shard identifiers are initially generated from a user-supplied password and the key ID of the secret, so that the user can later regenerate them and request the shards from the server pool. The same user-supplied password (plus a random component) would also be used to generate the key that encrypts the secret. That way, the user only has one password to remember.

Adding the random component makes it more difficult for attackers to brute force likely passwords. Furthermore, the cryptographic functions selected (such as Argon2) were chosen because they take some amount of real-world time to run. For a backup process like keysafe, this is acceptable, while attackers will find it an impediment to breaking the system.

The intended use case for the system is users encrypting private data. Consequently, the public key IDs corresponding to the secret keys stored in keysafe would not need to be uploaded to keyservers—in fact, uploading the key IDs would have a detrimental effect, because those IDs are used to generate the shard identifiers. Attackers trying to access keysafe material might start with key IDs that they harvest from public keyservers; if the key ID is available that way, it greatly reduces the search space.

Whether any one key pair is too sensitive to trust to a set of remote cloud servers is, ultimately, a personal question. Hess noted that when he was a Debian Developer, his secret key would have been a high-value target that could be exploited to compromise millions of machines; storing that sort of key on a cloud server would not be advisable.

The steps to store a secret key with ID keyID are:

  1. The user selects a password and an item name of their choosing (e.g., "KeyForFileFoo").
  2. Keysafe generates K = argon2(password, salt=item_name).
  3. Keysafe generates N1 = argon2(keyID, salt=1+item_name).
  4. Keysafe generates N2 = argon2(keyID, salt=2+item_name).
  5. Keysafe generates N3 = argon2(keyID, salt=3+item_name).
  6. Keysafe generates random R.
  7. The secret is encrypted with aes(secret+checksum, key=K+R).
  8. The encrypted payload is sharded with the Shamir algorithm, producing shards S1, S2, and S3.
  9. S1, S2, and S3 are uploaded over Tor to separate servers, using N1, N2, and N3 as the identifiers, respectively.
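The derivations in steps 2 through 5 can be sketched in Python as follows. This is only an illustration: hashlib.scrypt (another memory-hard KDF) stands in for Argon2, which is not in the standard library, and the password, item name, and key ID are made-up example values.

```python
import hashlib

def kdf(data: str, salt: str) -> bytes:
    # Memory-hard key derivation; scrypt stands in for Argon2 here.
    # The real design tunes the parameters so that one derivation
    # takes minutes of CPU time; these are kept small for the demo.
    return hashlib.scrypt(data.encode(), salt=salt.encode(),
                          n=2**12, r=8, p=1, dklen=32)

password = "hunter2"            # user-chosen password (example value)
item = "KeyForFileFoo"          # user-chosen item name (example value)
key_id = "0xC0FFEE"             # key ID of the secret (example value)

K  = kdf(password, item)        # encryption-key component (step 2)
N1 = kdf(key_id, "1" + item)    # shard identifiers (steps 3-5)
N2 = kdf(key_id, "2" + item)
N3 = kdf(key_id, "3" + item)
```

Because every quantity is derived deterministically from the password, item name, and key ID, the client can regenerate K and the three shard identifiers later with no stored state.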

Hess recommends the Argon2 algorithm because it is resistant to GPU and ASIC optimization. He suggests tuning its parameters to take ten minutes of CPU time to generate the required material. Like several of the other implementation details, though, the emphasis at this stage is not on the specific numbers, but on the design.

The Shamir sharding algorithm, in this case, would set the threshold parameter to two, which means that while the payload is split into three shards, any two of them can be combined to recover the original secret. The Shamir algorithm is not new, of course, and other implementations exist. So, while the sharding operation is essential to making keysafe work, the real benefits of the proposal come with the addition of the remote cloud servers, which can be used by client-side programs to store and retrieve secrets as needed.
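A minimal 2-of-3 Shamir split over a prime field illustrates the threshold property. Production implementations typically operate byte-wise over GF(256); this toy version handles a secret encoded as a single integer.

```python
import secrets

P = 2**127 - 1  # prime modulus for the field; the secret must be < P

def shamir_split(secret: int, n: int = 3, k: int = 2):
    """Split `secret` into n shares; any k of them recover it."""
    # A random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

With the threshold set to two, any single server learns nothing about the payload, while the loss of any one server does not prevent recovery.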

It is important in the long run that the servers be run by trusted entities, but Hess has put safeguards into the design to protect users from a malicious server operator.

He outlined several conditions that would be placed on the servers. First, they would need to not keep any logs and would have to sanitize the timestamps on all uploaded objects (both measures to resist correlation attacks against users). Second, whenever a client program requests a shard, the server would reply with a proof-of-work puzzle for the client to complete (thus rate limiting requests). Third, servers would not provide any method for clients to enumerate the items available in storage.
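The proposal does not pin down a puzzle construction for the rate limiting; a hashcash-style sketch, in which the client must find a nonce whose hash has a given number of leading zero bits, might look like this:

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has at
    least `difficulty_bits` leading zero bits (hashcash style)."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: cheap to verify, expensive to solve."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Each added difficulty bit doubles the expected client work, so the server can tune the cost of a shard request without keeping per-client state.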

Hess suggests including a server list with the client program and making the first three servers the defaults, but allowing the user to specify other servers if desired. The client would also start requesting objects from the default servers, but would keep trying other servers on the list until it recovers the necessary shards. That way, the user does not need to remember which servers were initially used.

To recover a secret, the user needs to provide the item name and password. The keysafe client then takes the following steps:

  1. Regenerate K, N1, N2, and N3 from the name and password.
  2. Contact servers, requesting N1, N2, and N3, until any two of S1, S2, and S3 are successfully retrieved.
  3. Recombine the two shards, thus recovering the stored ciphertext.
  4. Guess values for R, AES-decrypting the ciphertext with K+R until the result is a secret payload with a checksum that validates.
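The final step can be sketched as follows. This is an illustration only: a SHA-256 counter-mode keystream stands in for AES, K is a placeholder for the Argon2-derived key, and R_BITS is kept tiny so the demo finishes instantly (in the real design, R is sized so that guessing takes minutes to hours).

```python
import hashlib
import secrets

def stream_cipher(key: bytes, data: bytes) -> bytes:
    # Illustrative SHA-256 counter-mode keystream; the design uses AES.
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

R_BITS = 12                      # tiny for the demo; much larger in practice
K = b"password-derived key"      # stands in for the Argon2 output

# Storage side: encrypt secret plus checksum under K+R, then forget R.
secret = b"-----BEGIN PGP PRIVATE KEY BLOCK----- ..."
payload = secret + hashlib.sha256(secret).digest()[:8]
R = secrets.randbelow(1 << R_BITS)
ciphertext = stream_cipher(K + R.to_bytes(2, "big"), payload)

def recover(ciphertext: bytes, K: bytes):
    # Step 4: guess values of R until the checksum validates.
    for r in range(1 << R_BITS):
        pt = stream_cipher(K + r.to_bytes(2, "big"), ciphertext)
        body, chk = pt[:-8], pt[-8:]
        if hashlib.sha256(body).digest()[:8] == chk:
            return body
    return None
```

Since R is never stored anywhere, the only way to finish decryption is to search its space, which is what makes the component useful as a brake on attackers.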

Practically speaking, servers would want to enforce a size limit on uploaded shards, to prevent malicious users from abusing the system to store large files. Hess notes that this limit might be tricky to establish, since key sizes can get quite big if the key accrues a lot of signatures. But the signatures are not, strictly speaking, necessary for decryption, so they could be stripped away prior to the initial encryption and sharding stages.

Hess also discusses practical considerations for indicating the version of the keysafe protocol used in a stored object. Simply appending a version number to the shards uploaded to the servers is problematic, because there will presumably be fewer users in the early days, so those users' data would be easier for attackers to correlate. Appending the version number right before the sharding step is also bad, because that would provide an easy way for attackers to discover that they have found two shards that fit together: when combined, a valid version string would be visible as plaintext.

He suggests an alternative approach: for every new version of the system, the Argon2 parameters would be changed. Clients could then try different parameters in turn. The downside, he notes, is that this process significantly increases the time needed to recover a secret.

Versioning remains open to discussion, as does at least one other facet of the original proposal: in the final step, the client spends some non-trivial amount of time trying to decrypt the de-sharded ciphertext by guessing possible values of the R component. This step is intended to thwart brute-force attackers but, as described, it is a somewhat imprecise approach. Hess suggests choosing the length of R such that a CPU could decrypt the ciphertext in less than an hour, and a GPU in around one minute.

An alternative mentioned by Hess is the use of tunable work puzzles, as detailed in a 2015 paper [PDF] released by Ericsson Research. That approach would encrypt R using a tunable cryptographic function that takes the user-supplied password and a "difficulty bit string" as additional inputs, and attach the function's output to each shard. Since the user knows the password, the time required to "solve" the puzzle (and recover R) would depend only on the length of the difficulty bit string, while for attackers, it would depend on the size of both parameters combined.
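A toy version of that idea (not the paper's actual construction): mask R with a hash of the password and a random difficulty string that is then discarded, so the legitimate user searches only the difficulty space while an attacker must also search passwords. Here a short hash of R stands in for the decryption-checksum test that would identify a correct guess in the real system.

```python
import hashlib
import secrets

def puzzle_lock(R: bytes, password: bytes, difficulty_bits: int) -> bytes:
    # Mask R with a hash of the password and a *discarded* random
    # difficulty string of `difficulty_bits` bits.
    d = secrets.randbits(difficulty_bits).to_bytes(4, "big")
    mask = hashlib.sha256(password + d).digest()[:len(R)]
    return bytes(a ^ b for a, b in zip(R, mask))

def puzzle_solve(locked: bytes, password: bytes, difficulty_bits: int, valid):
    # Enumerate the difficulty space; `valid` recognizes the correct R.
    for i in range(1 << difficulty_bits):
        d = i.to_bytes(4, "big")
        mask = hashlib.sha256(password + d).digest()[:len(locked)]
        candidate = bytes(a ^ b for a, b in zip(locked, mask))
        if valid(candidate):
            return candidate
    return None
```

The solve time is tuned solely by the length of the difficulty string, which makes the cost of recovery predictable in a way that blind guessing of R is not.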

Hess has been updating the keysafe page with comments sent in by others; one expects that trend to continue for at least some time. In the end, though, figuring out the cryptographic side of the proposal will only get part of the way to a solution. Equally important challenges will be developing the keysafe client and server software and getting others interested in deploying it. Perhaps finding people interested in the client side is no great difficulty—after all, the free-software community is generally interested in OpenPGP—but persuading trustworthy entities to stand up and run keysafe servers in a global pool might take more time.

Hess indicated in the original blog post that he regards simple key backup as a missing feature of git-annex (the project on which he does funded development work), so one might expect to see a working prototype emerge before too long if a consensus about the value of the system emerges. But Hess also seems keen on getting input about the design of the system from various subject-matter experts before moving forward, so it is also possible that this is just the beginning of a long discussion.

Comments (4 posted)

Brief items

Quote of the week

There is also a ton of talk about Artificial Intelligence, which is a way to pretend a few regular expressions make things better. I don't think that's fooling anyone today. Real AI might do something clever someday, but if it's truly intelligent, it'll run away once it gets a look at what's going on. I wonder if we'll have a place for all the old outdated AIs to retire someday.
Josh Bressers

Comments (none posted)

Booktype 2.1 released

Version 2.1 of the collaborative book-editing platform Booktype has been released. New features include a built-in image editor, a refreshed set of theme and layout options, and the ability to export books to standard XHTML. Also included is a change to the internal commenting system; while earlier releases supported collaborative live chats in a sidebar, they lacked the ability to leave a persistent comment for other users to read later. That feature is now supported.

Comments (none posted)

The first public Kirigami release

The KDE project has announced the first public release of the Kirigami interface framework. "Now, with KDE’s focus expanding beyond desktop and laptop computers into the mobile and embedded sector, our QWidgets-based components alone are not sufficient anymore. In order to allow developers to easily create Qt-based applications that run on any major mobile or desktop operating system (including our very own existing Plasma Desktop and upcoming Plasma Mobile, of course), we have created a framework that extends Qt Quick Controls: Welcome Kirigami!"

Comments (none posted)

The GNU C Library version 2.24 is now available

The 2.24 version of the GNU C Library (glibc) has been released. It comes with lots of bug fixes, including five for security vulnerabilities (four stack overflows and a memory leak). Some deprecated features have been removed, and the readdir_r() and readdir64_r() functions have been deprecated in favor of readdir() and readdir64(). There are also additions to the math library (nextup*() and nextdown*()) that return the next representable value toward either positive or negative infinity.

Full Story (comments: 27)

Discourse 1.6 is available

Version 1.6 of the Discourse online discussion framework has been released. Highlighted new features include a full-thread vertical timeline widget that provides an always-visible overview of the discussion, a way to merge several small posts into a single message after they have been published, and a mechanism to take threads off of the public board and into a private conversation. This release also comes after the project went through two independent security tests, although the full fruits of that work may still be to come.

Comments (none posted)

Lumina Desktop 1.0.0 released

Version 1.0.0 of the Lumina Desktop Environment has been released. "After roughly four years of development, I am pleased to announce the first official release of the Lumina desktop environment! This release is an incredible realization of the initial idea of Lumina – a simple and unobtrusive desktop environment meant for users to configure to match their individual needs." Lumina is a from-scratch, BSD-licensed desktop system.

Comments (9 posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Let's Encrypt will be trusted by Firefox 50

The Let's Encrypt project, which provides a free SSL/TLS certificate authority (CA), has announced that Mozilla has accepted the project's root key into the Mozilla root program and that it will be trusted by default as of Firefox 50. This is a step forward from Let's Encrypt's earlier status. "In order to start issuing widely trusted certificates as soon as possible, we partnered with another CA, IdenTrust, which has a number of existing trusted roots. As part of that partnership, an IdenTrust root 'vouches for' the certificates that we issue, thus making our certificates trusted. We’re incredibly grateful to IdenTrust for helping us to start carrying out our mission as soon as possible. However, our plan has always been to operate as an independently trusted CA. Having our root trusted directly by the Mozilla root program represents significant progress towards that independence." The project has also applied for inclusion in the CA trust roots maintained by Apple, Microsoft, Google, Oracle, and Blackberry. News on those programs is still pending.

Comments (5 posted)

Page editor: Nathan Willis

Announcements

Brief items

Christoph Hellwig's case against VMware dismissed

The GPL-infringement case brought against VMware by Christoph Hellwig in Germany has been dismissed by the court; the ruling is available in German and English. The decision seems to be based entirely on uncertainty over where his copyrights actually lie and not on the infringement claims. "Nonetheless, these questions (on which the legal interest of the parties and their counsel presumably focus) can and must remain unanswered. This is because the very first requirement for conducting an examination, namely that code possibly protected for the Plaintiff as a holder of adapter’s copyright has been used in the Defendant’s product, cannot be established." The ruling will be appealed.

Comments (30 posted)

EFF Announces 2016 Pioneer Award Winners

The Electronic Frontier Foundation (EFF) has announced the winners of the 2016 Pioneer Awards: "Malkia Cyril of the Center for Media Justice, data protection activist Max Schrems, the authors of the “Keys Under Doormats” report that counters calls to break encryption, and the lawmakers behind CalECPA—a groundbreaking computer privacy law for Californians."

Comments (none posted)

The Tor Social Contract

The Tor Project has announced that it has published its Social Contract. "Our social contract is a set of behaviors and goals: not just the promised results we want for our community, but the ways we seek to achieve them. We want to grow Tor by supporting and advancing these guidelines in the time we are working on Tor, while taking care not to undermine them in the rest of our time. The principles can also be used to help recognize when people's actions or intents are hurting Tor. Some of these principles are established norms; things we've been doing every day for a long time; while others are more aspirational -- but all of them are values we want to live in public, and we hope they will make our future choices easier and more open. This social contract is one of several documents that define our community standards, so if you're looking for things that aren't here (e.g. something that might be in a code of conduct) bear in mind that they might exist, in a different document."

Full Story (comments: none)

Articles of interest

The People’s Code (White House blog)

US Chief Information Officer Tony Scott introduces the Federal Source Code Policy, on the White House blog. "By making source code available for sharing and re-use across Federal agencies, we can avoid duplicative custom software purchases and promote innovation and collaboration across Federal agencies. By opening more of our code to the brightest minds inside and outside of government, we can enable them to work together to ensure that the code is reliable and effective in furthering our national objectives. And we can do all of this while remaining consistent with the Federal Government’s long-standing policy of technology neutrality, through which we seek to ensure that Federal investments in IT are merit-based, improve the performance of our government, and create value for the American people." (Thanks to David A. Wheeler)

Comments (6 posted)

Vice-President’s Report — The State of the GNOME Foundation

Jeff Fortin Tam reports on the state of the GNOME Foundation. "Generally speaking, this year was a bit less intense than the one before it (we didn’t have to worry about a legal battle with a giant corporation this time around!) although we did end up touching a fair amount of legal matters, such as trademark agreements. One big item we got cleared was the Ubuntu GNOME trademark agreement. We also welcomed businesses that wanted to sell GNOME-related merchandise, you can find them listed here—supporting them by purchasing GNOME-related items also supports the Foundation with a small percentage shared as royalties." (Thanks to Paul Wise)

Comments (none posted)

Calls for Presentations

Linux Plumbers Conference refereed-track CFP

The Linux Plumbers Conference is co-located with the Kernel Summit this year rather than with a large conference like LinuxCon. As a result, it is running its own refereed presentation track alongside the usual microconference schedule. The deadline for proposals is September 1; interested speakers are encouraged to make themselves heard before then.

Full Story (comments: none)

CFP Deadlines: August 11, 2016 to October 10, 2016

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline       Event dates              Event                                                            Location
August 15      October 5–7              Netdev 1.2                                                       Tokyo, Japan
August 17      September 21–23          X Developers Conference                                          Helsinki, Finland
August 19      October 13               OpenWrt Summit                                                   Berlin, Germany
August 20      August 27–September 2    Bornhack                                                         Aakirkeby, Denmark
August 20      August 22–24             7th African Summit on FOSS                                       Kampala, Uganda
August 21      October 22–23            Datenspuren 2016                                                 Dresden, Germany
August 24      September 9–15           ownCloud Contributors Conference                                 Berlin, Germany
August 31      November 12–13           PyCon Canada 2016                                                Toronto, Canada
August 31      October 31               PyCon Finland 2016                                               Helsinki, Finland
September 1    November 1–4             Linux Plumbers Conference                                        Santa Fe, NM, USA
September 1    November 14              The Third Workshop on the LLVM Compiler Infrastructure in HPC    Salt Lake City, UT, USA
September 5    November 17              NLUUG (Fall conference)                                          Bunnik, The Netherlands
September 9    November 16–18           ApacheCon Europe                                                 Seville, Spain
September 12   November 14–18           Tcl/Tk Conference                                                Houston, TX, USA
September 12   October 29–30            PyCon.de 2016                                                    Munich, Germany
September 13   December 6               CHAR(16)                                                         New York, NY, USA
September 15   October 21–23            Software Freedom Kosovo 2016                                     Prishtina, Kosovo
September 25   November 4–6             FUDCon Phnom Penh                                                Phnom Penh, Cambodia
September 30   November 12–13           T-Dose                                                           Eindhoven, Netherlands
September 30   December 3               NoSlidesConf                                                     Bologna, Italy
September 30   November 5–6             OpenFest 2016                                                    Sofia, Bulgaria
September 30   November 29–30           5th RISC-V Workshop                                              Mountain View, CA, USA
September 30   December 27–30           Chaos Communication Congress                                     Hamburg, Germany
October 1      October 22               2016 Columbus Code Camp                                          Columbus, OH, USA

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

FSFE: Free Software Foundation Europe Summit 2016

The FSFE Summit will take place September 2-4, co-located with QtCon in Berlin, Germany. "Apart from working on furthering the adoption of Free Software in Europe, we will also be celebrating the FSFE's 15th anniversary."

Full Story (comments: none)

Events: August 11, 2016 to October 10, 2016

The following event listing is taken from the LWN.net Calendar.

Date(s)                   Event                                                      Location
August 10–12              MonadLibre 2016                                            Havana, Cuba
August 12–14              GNOME Users and Developers European Conference             Karlsruhe, Germany
August 12–16              PyCon Australia 2016                                       Melbourne, Australia
August 18–20              GNU Hackers' Meeting                                       Rennes, France
August 18–21              Camp++ 0x7e0                                               Komárom, Hungary
August 20–21              FrOSCon - Free and Open Source Software Conference         Sankt-Augustin, Germany
August 20–21              Conference for Open Source Coders, Users and Promoters     Taipei, Taiwan
August 22–24              ContainerCon                                               Toronto, Canada
August 22–24              LinuxCon NA                                                Toronto, Canada
August 22–24              7th African Summit on FOSS                                 Kampala, Uganda
August 24–26              YAPC::Europe Cluj 2016                                     Cluj-Napoca, Romania
August 24–26              KVM Forum 2016                                             Toronto, Canada
August 25–26              Linux Security Summit 2016                                 Toronto, Canada
August 25–26              The Prometheus conference                                  Berlin, Germany
August 25–26              Xen Project Developer Summit                               Toronto, Canada
August 25–28              Linux Vacation / Eastern Europe 2016                       Grodno, Belarus
August 27–September 2     Bornhack                                                   Aakirkeby, Denmark
August 31–September 1     Hadoop Summit Melbourne                                    Melbourne, Australia
September 1–8             QtCon 2016                                                 Berlin, Germany
September 1–7             Nextcloud Conference                                       Berlin, Germany
September 2–4             FSFE summit 2016                                           Berlin, Germany
September 7–9             LibreOffice Conference                                     Brno, Czech Republic
September 8               LLVM Cauldron                                              Hebden Bridge, UK
September 8–9             First OpenPGP conference                                   Cologne, Germany
September 9–10            RustConf 2016                                              Portland, OR, USA
September 9–15            ownCloud Contributors Conference                           Berlin, Germany
September 9–11            GNU Tools Cauldron 2016                                    Hebden Bridge, UK
September 9–11            Kiwi PyCon 2016                                            Dunedin, New Zealand
September 13–16           PostgresOpen 2016                                          Dallas, TX, USA
September 15–19           PyConUK 2016                                               Cardiff, UK
September 15–17           REST Fest US 2016                                          Greenville, SC, USA
September 16–22           Nextcloud Conference                                       Berlin, Germany
September 19–23           Libre Application Summit                                   Portland, OR, USA
September 20–22           Velocity NY                                                New York, NY, USA
September 20–21           Lustre Administrator and Developer Workshop                Paris, France
September 20–23           PyCon JP 2016                                              Tokyo, Japan
September 21–23           X Developers Conference                                    Helsinki, Finland
September 22–23           European BSD Conference                                    Belgrade, Serbia
September 23–25           OpenStreetMap State of the Map 2016                        Brussels, Belgium
September 23–25           PyCon India 2016                                           Delhi, India
September 26–27           Open Source Backup Conference                              Cologne, Germany
September 26–28           Cloud Foundry Summit Europe                                Frankfurt, Germany
September 27–29           OpenDaylight Summit                                        Seattle, WA, USA
September 28–October 1    systemd.conf 2016                                          Berlin, Germany
September 28–30           Kernel Recipes 2016                                        Paris, France
September 30–October 2    Hackers Congress Paralelní Polis                           Prague, Czech Republic
October 1–2               openSUSE.Asia Summit                                       Yogyakarta, Indonesia
October 3–5               OpenMP Conference                                          Nara, Japan
October 4–6               LinuxCon Europe                                            Berlin, Germany
October 4–6               ContainerCon Europe                                        Berlin, Germany
October 5–7               International Workshop on OpenMP                           Nara, Japan
October 5–7               Netdev 1.2                                                 Tokyo, Japan
October 6–7               PyConZA 2016                                               Cape Town, South Africa
October 7–8               Ohio LinuxFest 2016                                        Columbus, OH, USA
October 8–9               Gentoo Miniconf 2016                                       Prague, Czech Republic
October 8–9               LinuxDays 2016                                             Prague, Czechia

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds