
LWN.net Weekly Edition for January 12, 2017

Maintainers for desktop "critical infrastructure"

By Jake Edge
January 11, 2017

By any measure, the PulseAudio sound server is an important part of most Linux desktops. But, like other free-software projects, PulseAudio is understaffed and relies on volunteers to maintain it. An effort by the most active maintainer, Tanu Kaskinen, to use the Patreon platform to help fund PulseAudio maintenance makes one wonder how much of the free software we depend on is similarly suffering.

It was not that long ago that even critical internet infrastructure was largely being ignored by the companies that relied on it. The Heartbleed fiasco made it obvious that OpenSSL was not thriving with mostly volunteer maintainers. Heartbleed led to the formation of the Core Infrastructure Initiative (CII), which targets critical infrastructure projects (OpenSSL, OpenSSH, GnuPG, and so on). Initially, the CII provided grants to projects to help fund their development and maintenance, and evidently still does, though, according to its FAQ, it is moving toward using threat modeling to choose projects for security auditing.

That work is great, but it is limited by a number of factors: funding and the interests of its members, primarily. Few of the companies involved have much, if any, interest in the Linux desktop. Some might argue that there aren't any companies with that particular interest, though that would be disingenuous. In any case, though, desktop Linux is a community-supported endeavor, at least more so than server or cloud Linux, which likely means some things are slipping through the cracks.

Kaskinen left his job in 2015 to be able to spend more time on PulseAudio (and some audio packages that he maintains for OpenEmbedded). For the last four months or so, he has been soliciting funds on Patreon. Unlike Kickstarter and other similar systems, Patreon is set up to provide ongoing funding, rather than just a chunk of money for a particular feature or project. Donors pledge a monthly amount to try to support someone's work going forward.

So far, Kaskinen has attracted 18 patrons who provide $77 per month, which approximately covers his rent according to the Patreon page. His needs are rather modest, as he is looking for $340 per month. [Update: These numbers are based on a misunderstanding, see the first comment below.] Patrons get immediate access to his monthly reports, while others must wait a few weeks (reminiscent of a certain weekly publication perhaps). Once they are freely available, the reports are published on his blog. He is actively looking for other reward ideas for those who donate at more than the $1 per month minimum.

A look through the reports gives the impression of an active maintainer working on bugs, reviewing patches, answering questions on IRC, writing documentation, and so on. Much of that could have been done in his "spare time", though presumably at a much slower rate. In the meantime, Kaskinen is burning through his savings to help support the many users of PulseAudio. It is undoubtedly the plight of many maintainers, though most probably just try to fit that work around their day job, rather than trying to do it full time.

There are a number of companies and groups that use PulseAudio as part of their Linux distribution. That starts with the traditional distributions, such as Fedora, Red Hat Enterprise Linux, Ubuntu, SUSE, and openSUSE, but goes beyond that. Mobile and automobile-focused distributions, such as Tizen, GENIVI, and Automotive Grade Linux, also use (or can use) the audio server. One would think they and others might have an interest in having a full-time PulseAudio maintainer.

It is a dilemma for the free-software world. Our projects would be much better off with full-time maintainers, and lots of projects already have that thanks to various companies in our community, but what should be done for projects that fall through the cracks? Though it is early going for Kaskinen, it is hard to see Patreon-based campaigns being the ultimate solution, though they can certainly help.

The free-software community itself, or at least the individuals who make up a large chunk of it, seem unlikely to be able to solve this problem directly. While that may be unfortunate at some level, the reality is that millions of folks, all over the world, with varying income levels and even awareness of how the software they use comes about, probably cannot be relied upon to directly fund these kinds of projects. For that, it will take organizations and/or companies to help identify, and ultimately fund, maintenance of critical desktop infrastructure.

Plenty of that infrastructure is being funded, of course. The major desktop environments have companies or groups of companies that employ the maintainers, developers, and others for those projects. Web browsers are in good shape, overall, as are some of the office and productivity suites. But there is a second tier of applications (and plumbing in the case of PulseAudio) that may not be receiving the attention it deserves and, perhaps in some cases, requires.

The urgency that Heartbleed provided is probably never going to occur in the Linux desktop realm, however. There is less monoculture and vastly less of an installed base. Android is, of course, a much bigger target, but has a large company behind it and doesn't use much from desktop Linux (other than the kernel).

The kernel model for maintainers seems to work quite well, overall. Companies employ the maintainers of various subsystems to, essentially, continue that maintenance work. Those companies get the benefit of having those people on their staff as well as the benefit of a better-maintained kernel for themselves and others. But the kernel is unique; other parts of our free-software desktop infrastructure are not so centrally placed, thus not so well-maintained.

One hopes that Kaskinen can find enough patrons to meet his modest needs to continue with his work. But it would be better still if we could find a way as a community to make it possible for maintainers (and others) to do their work without giving up all of their free time—or their savings.

Comments (30 posted)

The long road to getrandom() in glibc

By Jonathan Corbet
January 9, 2017
The GNU C library (glibc) 2.25 release is expected to be available at the beginning of February; among the new features in this release will be a wrapper for the Linux getrandom() system call. One might well wonder why getrandom() is only appearing in this release, given that kernel support arrived with the 3.17 release in 2014 and that the glibc project is supposed to be more receptive to new features these days. A look at the history of this particular change highlights some of the reasons why getting new features into glibc is still hard.

Glibc remains a conservative project. There are a number of good reasons for that, but it does mean that developers proposing new features tend to run into roadblocks; that has certainly happened with getrandom(). The kernel's random number subsystem maintainer, Ted Ts'o, has been known to complain about the delay in support for this system call; he has suggested that "maybe the kernel developers should support a liblinux.a library that would allow us to bypass glibc when they are being non-helpful". Peter Gutmann resorted to channeling Sir Humphrey Appleby when describing the glibc project's approach to getrandom(). But what really caused the delay here?

Glibc bug 17252, requesting the addition of getrandom(), was filed in August 2014, five days after the 3.17 kernel release. Glibc developer Joseph Myers responded twice in the following six months, suggesting that, if anybody wanted getrandom() in glibc, they would need to go onto the project's mailing list and work to drive the development forward. The first reason for the delay is thus simple: nobody stepped up to do the work.

One might wonder why it took so long for somebody to come along and implement a simple system-call wrapper. In its essence, the code that will appear in the 2.25 release is:

    /* Write LENGTH bytes of randomness starting at BUFFER.  Return 0 on
       success and -1 on failure.  */
    ssize_t
    getrandom (void *buffer, size_t length, unsigned int flags)
    {
      return SYSCALL_CANCEL (getrandom, buffer, length, flags);
    }

Such a function does not seem particularly hard to write. The original patch for getrandom() support, finally posted by Florian Weimer in June 2016, was rather more complicated than that, though. Weimer, knowing that the glibc project is conservative and wants the library to work in almost all situations, attempted to cover every base he could think of. So the patch included documentation updates, test programs, and several other details that, in turn, led to a number of sticking points that surely slowed the eventual acceptance of the patch.

The first obstacle, though, had little to do with the patch itself; it was, instead, brought about by the project's reluctance to add wrappers for Linux-specific system calls at all. Glibc does not see itself as a Linux-specific project, so it naturally prefers standardized interfaces that can be supported on all systems. The project has sporadically discussed its policy around Linux-specific calls over the last couple of years. In 2015, Myers described it as:

The result is a de facto status of "syscall wrappers present for almost all syscalls added up to Linux 3.2 / glibc 2.15 but for nothing added since then", which certainly doesn't make sense.

A draft policy for Linux-specific wrappers has existed since about then but, lacking consensus in a strongly consensus-oriented project, it has never achieved any sort of official status. Thus, even though this policy states that system-call wrappers should be added by default in the absence of reasons to the contrary, Roland McGrath responded to the initial patch posting with a terse message saying: "You need to start with rationale justifying the new nonstandard API and why it belongs in libc". That justification was not hard, given that a number of projects have been asking for this wrapper, and that adding the BSD getentropy() interface on top of it is easily done, but this challenge foreshadowed much of what was to come.
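
The getentropy() layering is indeed simple. As a minimal sketch, assuming a kernel that provides getrandom() (my_getentropy() is a hypothetical name; the actual glibc implementation differs in its details):

    #include <errno.h>
    #include <sys/random.h>

    /* A sketch only: layer getentropy() semantics over getrandom().
       my_getentropy() is hypothetical, not the glibc code.  */
    int
    my_getentropy (void *buffer, size_t length)
    {
      char *p = buffer;

      /* getentropy() rejects requests larger than 256 bytes.  */
      if (length > 256)
        {
          errno = EIO;
          return -1;
        }
      while (length > 0)
        {
          ssize_t ret = getrandom (p, length, 0);
          if (ret < 0)
            {
              if (errno == EINTR)   /* interrupted by a signal; retry */
                continue;
              return -1;
            }
          p += ret;
          length -= ret;
        }
      return 0;
    }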

A trickier question was: what should glibc do when running on pre-3.17 kernels (or non-Linux kernels) that lack getrandom() support? The initial patch included a set of emulation functions so that getrandom() calls would always work; they would read the data from /dev/random or /dev/urandom as appropriate. Doing so involved keeping open file descriptors to those devices (lest later calls fail if the application does a chroot()). But using file descriptors in libraries is always fraught with perils; applications may have their own ideas of which descriptors are available, or may simply run a loop closing all descriptors. So the code took pains to use high-numbered descriptors that applications presumably don't care about, and it used fstat() to ensure that the application had not closed and reopened its descriptors between calls.

This usage of file descriptors drew a number of comments; it is something that glibc tries to avoid whenever possible. After some discussion, it was concluded that glibc should provide only a wrapper for the system call, without emulation. If an application calls getrandom() on a kernel where that system call is not supported, the glibc wrapper will simply return ENOSYS and it will be up to the application to use a fallback. That decision removed a fair amount of code and one obstacle to merging.
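
Applications that must run on older kernels are thus left with a pattern like the following sketch; fill_random() is a hypothetical helper, not a glibc interface, and short reads from getrandom() are treated as failures for brevity:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/random.h>
    #include <unistd.h>

    /* A sketch only; fill_random() is a hypothetical helper.  */
    static int
    fill_random (void *buf, size_t len)
    {
      char *p = buf;
      ssize_t ret = getrandom (buf, len, 0);

      if (ret == (ssize_t) len)
        return 0;
      if (ret >= 0 || errno != ENOSYS)
        return -1;      /* short read or unexpected error */

      /* Pre-3.17 (or non-Linux) kernel: fall back to /dev/urandom.  */
      int fd = open ("/dev/urandom", O_RDONLY | O_CLOEXEC);
      if (fd < 0)
        return -1;
      while (len > 0)
        {
          ssize_t n = read (fd, p, len);
          if (n <= 0)
            {
              if (n < 0 && errno == EINTR)
                continue;
              close (fd);
              return -1;
            }
          p += n;
          len -= n;
        }
      close (fd);
      return 0;
    }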

In writing the patch, Weimer worried that there may be a number of applications out there with their own function called getrandom(), which may or may not provide the same interface and semantics as the glibc version. The prospect was especially troubling because a getrandom() call that does not actually return random data may not cause any visible problems in the application at all — until some attacker notices this behavior and exploits it. So he employed a bunch of macro and symbol-versioning trickery to detect and prevent confusion over which getrandom() function to use.

This feature, too, was unpopular; glibc does not normally add extra layers of protection around its symbols in this way. The tricks made it impossible to take the address of the function, among other things. After extensive discussion, Weimer backed down and removed the interposition protection, but he clearly was not entirely happy about it.

The most extensive argument, though, was over whether getrandom() should be a thread cancellation point. In other words, what should happen if pthread_cancel() is called on a thread that is currently blocked in getrandom()? The original patch did make getrandom() into a cancellation point; it still behaves that way in the version merged for 2.25, but it had to survive a lot of argument to get there.

Weimer wanted getrandom() to be a cancellation point because the system call can block indefinitely, even if it almost never blocks at all. The Python os.urandom() episode showed that this blocking can, in rare situations, cause real problems. So, he said, it should be possible for a cancellation-aware program to respond to an overly slow getrandom() call.

The objections here seemed to be, for the most part, objections to cancellation points in general. It is true that cancellation points are problematic in a number of ways. To the implementation issues one can add the fact that most programs are not cancellation-aware and may not respond well to a thread cancellation in an unexpected place. A version of getrandom() that adds a new cancellation point could thus lead to unfortunate behavior. Additionally, getrandom() is supposed to always succeed; the possibility of cancellation adds a failure mode that is not a part of the system call itself.

On the other hand, Carlos O'Donell argued that getrandom() is analogous to read() and thus should behave the same way; read() is a cancellation point. The argument went back and forth over months, and included detours into whether there should be a separate getrandom_nocancel() function or an additional "cancellation point please" argument to getrandom(). In the end, getrandom() remained an unconditional cancellation point. The BSD-compatible getentropy() implementation included in the patch is not a cancellation point, though.

With these issues resolved, the conversation came to a close on December 12 when getrandom() and getentropy() were merged into the glibc repository. A feature that has been shipping in the Linux kernel for over two years will finally be available to application developers without the need to create special system-call wrappers. Now all that's left is all the other Linux-specific system calls that still lack glibc wrappers.

Comments (67 posted)

Page editor: Jonathan Corbet

Security

SipHash in the kernel

By Jonathan Corbet
January 10, 2017
A hash function performs a one-way computation on a set of data, producing a set of bytes that, one hopes, is effectively random and cannot be used to derive the input data. The kernel uses hash functions in numerous places for everything from the generation of security-sensitive sequence numbers to the implementation of hash tables. The security of those functions is increasingly in doubt, and, seemingly, their performance can be improved as well. The process of replacing these hash functions will begin with the 4.11 kernel, which should see the introduction of the SipHash pseudo-random function.

SipHash is the creation of Jean-Philippe Aumasson and (inevitably) Daniel J. Bernstein; readers interested in the details can find them in this paper [PDF]. It was designed with a number of objectives in mind, starting with being a cryptographically secure hash function. In practice, what that means is that it is computationally infeasible to derive the input data from its corresponding hash, or to derive the secret data used in the hashing operation even given the ability to see the output for a set of chosen inputs. Another important objective was speed, especially with smaller inputs. Many existing hash functions have a high setup overhead; that cost matters little when large amounts of data are being hashed, but it hurts for the hashing of smaller inputs. As it happens, many of the hashing operations in the kernel are applied to small chunks of data, so lower overhead would be welcome.

The list of SipHash users is large and growing; many projects have adopted it in an attempt to defend against hash-collision attacks. These attacks exploit a known hash function to cause a hash table to degrade into a simple linear list, with potentially devastating effects on performance. The Python language switched to SipHash in 2013; other users include various BSD distributions, Perl, Ruby, Rust, and more. This move is not universally acclaimed, but most seem to see it as a step in the right direction. Thus far, however, the kernel has lacked a SipHash implementation.

What does the kernel use instead? As might be expected with a large body of code like the kernel, different algorithms are employed in different settings. The generation of TCP sequence numbers, for example, is done using the MD5 hash function, which has been regarded as insecure for some time. That is potentially problematic, since an attacker who can predict sequence numbers can interfere with or inject data into network connections. The get_random_int() and get_random_long() functions used extensively throughout the kernel are also based on MD5. The "syncookies" that can be employed to defend against SYN flood attacks are, instead, generated with SHA-1, which, while more secure than MD5, is showing its age as well. SHA-1 is also used in the core random-number generator, in the BPF subsystem, and elsewhere.

Use of those algorithms, however, pales next to the usage of a function called jhash() (and its variants), a Jenkins hash implementation. The kernel contains a lot of hash tables, and, as a general rule, jhash() is the hash function used to place data into hash buckets. This function has the advantage of being quite fast, but it makes no claims to cryptographic security. Many in-kernel users include some secret data of their own as a defense against collision attacks. But if the results of the hash are visible to a hacker (and simply listing the contents of the table in order may suffice), then deriving that secret data is a relatively easy task.
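
The typical pattern looks something like the following sketch; struct flow_key, hash_seed, and the functions are hypothetical names, not kernel interfaces:

    #include <linux/jhash.h>
    #include <linux/random.h>

    /* A sketch of the common pattern only. */
    struct flow_key {           /* hypothetical example structure */
        u32 saddr, daddr;
        u16 sport, dport;
    };

    static u32 hash_seed __read_mostly;

    static void flow_hash_init(void)
    {
        /* Generate the per-boot secret once, at initialization time. */
        get_random_bytes(&hash_seed, sizeof(hash_seed));
    }

    static u32 flow_hash(const struct flow_key *key)
    {
        /* The secret goes in as jhash()'s initial value. */
        return jhash(key, sizeof(*key), hash_seed);
    }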

Jason Donenfeld set out to replace all of these hashing functions with an implementation of SipHash inside the kernel. SipHash uses an explicit secret key for collision defense, so the first order of business for an in-kernel user is the generation of that key:

    #include <linux/siphash.h>

    siphash_key_t hash_secret;
    get_random_bytes(&hash_secret, sizeof(hash_secret));

The use of get_random_bytes() is, according to the documentation, the only proper way to initialize this secret. Thereafter, of course, kernel code should take care not to expose the secret outside of the kernel itself, or the protection against hash collisions will be lost.

The hashing of data is done with:

    u64 siphash(const void *data, size_t len, const siphash_key_t *key);

The return value will be a 64-bit hash of the input data. There are a number of optimized variants for constant-size input data, but most developers need not worry about them, since the generic version will pick an appropriate one at compile time.
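
As an illustration, converting the hypothetical jhash() user sketched earlier is mostly a matter of swapping in the new function and key type:

    /* A sketch only, reusing the hypothetical flow_key structure and
       the hash_secret generated above. */
    static u64 flow_hash(const struct flow_key *key)
    {
        return siphash(key, sizeof(*key), &hash_secret);
    }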

SipHash is significantly faster than either MD5 or SHA-1, while producing results that are deemed to be more secure. So the replacement of the older algorithms with SipHash should not be a difficult decision to make. The same is not true for jhash(), which is much faster than SipHash. In an attempt to convince jhash() users to make the switch, Donenfeld added a "HalfSipHash" variant:

    u32 hsiphash(const void *data, size_t len, const hsiphash_key_t *key);

This version uses a reduced variant of the SipHash algorithm to produce a smaller and less secure result. Users of jhash() who do not want to pay the cost of SipHash might just be convinced to use this version instead, an outcome described by Donenfeld as "a terrifying but potentially useful possibility". Potential users will note that, while hsiphash() is faster than siphash(), it still takes about three times as long as jhash() to produce a result. The better security that comes from using it should justify the cost in many settings, but it also seems likely that jhash() won't be going away anytime soon.

The patch set has been through a few revisions, with some relatively small changes being made. The biggest complaint about it seems to have come from networking maintainer David Miller, who was not entirely happy about moving away from hash functions that are implemented by the CPUs themselves:

This and the next patch are a real shame, performance wise, on cpus that have single-instruction SHA1 and MD5 implementations. Sparc64 has both, and I believe x86_64 can do SHA1 these days. It took so long to get those instructions into real silicon, and then have software implemented to make use of them as well.

The interesting thing, as a couple of participants pointed out, is that Linux is not actually using these hashing instructions even on the hardware that supports them. Among other things, they require some setup cost that takes away a lot of the performance benefit, especially for small input data arrays. So the existence of hardware-based implementations is, for now, not relevant.

In any case, Miller applied the patches on January 9, so they should make it into the mainline during the 4.11 merge window. The process of converting at least some of those jhash() users has not yet begun, though, and can be expected to take some time.

Comments (8 posted)

Brief items

Security quotes of the week

; DROP TABLE "COMPANIES";-- LTD
Someone registers an xkcd-inspired company name in the UK

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It's rarely just one thing, and you'll often hear the term "constellation of evidence" to describe how a particular attacker is identified. It's analogous to traditional detective work. Investigators collect clues and piece them together with known mode of operations. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.
Bruce Schneier

Just as responsible websites won't permit a user to create an account without a password, and many attempt to prevent users from selecting incredibly weak passwords, we must start the process of requiring 2-factor use on a routine basis, both for the protection of users and of the companies that are serving them — and for the protection of society in a broader sense as well. We can no longer permit this to be simply an optional offering that vast numbers of users ignore.

This will indeed be a painful bullet to bite in some important respects. Doing 2-factor properly isn't cheap, but it isn't rocket science either. High quality commercial, proprietary, and open source solutions all exist. User education will be critical. There will be some user backlash to be sure. Poor quality 2-factor systems will need to be upgraded on a priority basis before the process of requiring 2-factor use can even begin.

It's significant work, but if we care about our users (and stockholders!) we can no longer keep kicking this can down the road.

Lauren Weinstein

Comments (17 posted)

Kadlec: The MongoDB hack and the importance of secure defaults

Tim Kadlec looks at the ongoing MongoDB compromises and how they came to be. "Before version 2.6.0, that wasn’t true. By default, MongoDB was left open to remote connections. Authentication is also not required by default, which means that out of the box installs of MongoDB before version 2.6.0 happily accept unauthenticated remote connections."

Comments (10 posted)

CVE-2016-9587: an unpleasant Ansible vulnerability

The Ansible project is currently posting release candidates for the 2.1.4 and 2.2.1 releases. They fix an important security bug: "CVE-2016-9587 is rated as HIGH in risk, as a compromised remote system being managed via Ansible can lead to commands being run on the Ansible controller (as the user running the ansible or ansible-playbook command)." Until this release is made, it would make sense to be especially careful about running Ansible against systems that might have been compromised.

Update: see this advisory for much more detailed information.

Full Story (comments: 6)

New vulnerabilities

flac: three vulnerabilities

Package(s): flac    CVE #(s): (none)
Created: January 6, 2017    Updated: January 12, 2017
Description: Three crashes from crafted files are noted in the 2015 flac bug report.
Alerts:
Fedora FEDORA-2017-f0d976df9e mingw-flac 2017-01-12
Fedora FEDORA-2017-e7f9c23746 flac 2017-01-06

Comments (none posted)

icoutils: code execution

Package(s): icoutils    CVE #(s): CVE-2017-5208
Created: January 9, 2017    Updated: January 11, 2017
Description: From the Debian advisory:

Choongwoo Han discovered that a programming error in the wrestool tool of the icoutils suite allows denial of service or the execution of arbitrary code if a malformed binary is parsed.

Alerts:
Mageia MGASA-2017-0044 icoutils 2017-02-07
Ubuntu USN-3178-1 icoutils 2017-01-24
Fedora FEDORA-2017-7c221d6f49 icoutils 2017-01-17
Fedora FEDORA-2017-3d7734a8b2 icoutils 2017-01-17
Debian-LTS DLA-789-1 icoutils 2017-01-17
openSUSE openSUSE-SU-2017:0168-1 icoutils 2017-01-17
openSUSE openSUSE-SU-2017:0166-1 icoutils 2017-01-17
openSUSE openSUSE-SU-2017:0167-1 icoutils 2017-01-17
Arch Linux ASA-201701-13 icoutils 2017-01-09
Debian DSA-3756-1 icoutils 2017-01-09

Comments (none posted)

irssi: multiple vulnerabilities

Package(s): irssi    CVE #(s): CVE-2017-5193 CVE-2017-5194 CVE-2017-5195 CVE-2017-5196
Created: January 10, 2017    Updated: January 20, 2017
Description: From the openSUSE advisory:

irssi was updated to fix four vulnerabilities that could result in denial of service (remote crash) when connecting to malicious servers or receiving specially crafted data. (boo#1018357)

  • CVE-2017-5193: NULL pointer dereference in the nickcmp function
  • CVE-2017-5194: out of bounds read in certain incomplete control codes
  • CVE-2017-5195: out of bounds read in certain incomplete character sequences
  • CVE-2017-5196: Correct an error when receiving invalid nick message
Alerts:
Ubuntu USN-3184-1 irssi 2017-02-01
Fedora FEDORA-2017-d2e7217e2a irssi 2017-01-30
Fedora FEDORA-2017-7f9e997585 irssi 2017-01-30
Gentoo 201701-45 irssi 2017-01-19
Mageia MGASA-2017-0018 irssi 2017-01-14
Arch Linux ASA-201701-14 irssi 2017-01-12
Slackware SSA:2017-011-03 irssi 2017-01-11
openSUSE openSUSE-SU-2017:0094-1 irssi 2017-01-09
openSUSE openSUSE-SU-2017:0093-1 irssi 2017-01-09

Comments (none posted)

jasper: three vulnerabilities

Package(s): jasper    CVE #(s): CVE-2016-9395 CVE-2016-9398 CVE-2016-9591
Created: January 9, 2017    Updated: January 11, 2017
Description: From the SUSE advisory:

- CVE-2016-9395: Invalid jasper files could lead to abort of the library caused by attacker provided image. (bsc#1010977)

- CVE-2016-9398: Invalid jasper files could lead to abort of the library caused by attacker provided image. (bsc#1010979)

- CVE-2016-9591: Use-after-free on heap in jas_matrix_destroy. (bsc#1015993)

Alerts:
openSUSE openSUSE-SU-2017:0101-1 jasper 2017-01-10
SUSE SUSE-SU-2017:0084-1 jasper 2017-01-08

Comments (none posted)

kernel: denial of service

Package(s): kernel    CVE #(s): CVE-2016-9806
Created: January 11, 2017    Updated: January 11, 2017
Description: From the CVE entry:

Race condition in the netlink_dump function in net/netlink/af_netlink.c in the Linux kernel before 4.6.3 allows local users to cause a denial of service (double free) or possibly have unspecified other impact via a crafted application that makes sendmsg system calls, leading to a free operation associated with a new dump that started earlier than anticipated.

Alerts:
SUSE SUSE-SU-2017:0471-1 kernel 2017-02-15
SUSE SUSE-SU-2017:0464-1 kernel 2017-02-15
openSUSE openSUSE-SU-2017:0458-1 kernel 2017-02-13
openSUSE openSUSE-SU-2017:0456-1 kernel 2017-02-13
SUSE SUSE-SU-2017:0407-1 kernel 2017-02-06
Oracle ELSA-2017-3508 kernel 4.1.12 2017-01-12
Oracle ELSA-2017-3508 kernel 4.1.12 2017-01-12
Ubuntu USN-3168-2 linux-lts-trusty 2017-01-11
Ubuntu USN-3168-1 kernel 2017-01-11

Comments (none posted)

kopete: encryption botch

Package(s): kopete    CVE #(s): (none)
Created: January 6, 2017    Updated: January 11, 2017
Description: From the SUSE bugzilla entry:

When updating the OTR GUI icon, properly set the OTR instance tag. Without a configured instance tag, the libotr library does not encrypt sent messages and, moreover, does not report any error indicating that the message was not encrypted.

This should fix a bug where the OTR "encrypted" icon is shown in the GUI while libotr itself does not want to encrypt messages. It happened when a Kopete window with an active OTR session was closed and then opened again.

Alerts:
openSUSE openSUSE-SU-2017:0034-1 kopete 2017-01-05
openSUSE openSUSE-SU-2017:0035-1 kopete 2017-01-05

Comments (none posted)

libtiff: XML External Entity (XXE) attacks

Package(s): tiff    CVE #(s): CVE-2016-9318
Created: January 10, 2017    Updated: January 11, 2017
Description: From the CVE entry:

libxml2 2.9.4 and earlier, as used in XMLSec 1.2.23 and earlier and other products, does not offer a flag directly indicating that the current document may be read but other files may not be opened, which makes it easier for remote attackers to conduct XML External Entity (XXE) attacks via a crafted document.

Alerts:
openSUSE openSUSE-SU-2017:0446-1 libxml2 2017-02-11
Gentoo 201701-16 tiff 2017-01-09

Comments (none posted)

pgbouncer: authentication bypass

Package(s): pgbouncer    CVE #(s): CVE-2015-6817
Created: January 11, 2017    Updated: January 11, 2017
Description: From the Gentoo advisory:

A remote attacker might send a specially crafted package possibly resulting in a Denial of Service condition. Furthermore, a remote attacker might bypass authentication in configurations using the "auth_user" feature.

Alerts:
Gentoo 201701-24 pgbouncer 2017-01-11

Comments (none posted)

php7: denial of service

Package(s): php7    CVE #(s): CVE-2016-9936
Created: January 9, 2017    Updated: January 11, 2017
Description: From the CVE entry:

The unserialize implementation in ext/standard/var.c in PHP 7.x before 7.0.14 allows remote attackers to cause a denial of service (use-after-free) or possibly have unspecified other impact via crafted serialized data. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-6834.

Alerts:
Arch Linux ASA-201701-28 php 2017-01-19
openSUSE openSUSE-SU-2017:0061-1 php7 2017-01-08

Comments (none posted)

php-swiftmailer: code execution

Package(s): php-swiftmailer    CVE #(s): CVE-2016-10074
Created: January 9, 2017    Updated: January 23, 2017
Description: From the CVE entry:

The mail transport (aka Swift_Transport_MailTransport) in Swift Mailer before 5.4.5 might allow remote attackers to pass extra parameters to the mail command and consequently execute arbitrary code via a \" (backslash double quote) in a crafted e-mail address in the (1) From, (2) ReturnPath, or (3) Sender header.

Alerts:
Debian DSA-3769-1 libphp-swiftmailer 2017-01-22
Debian-LTS DLA-792-1 libphp-swiftmailer 2017-01-19
Fedora FEDORA-2016-b65e546846 php-swiftmailer 2017-01-09
Fedora FEDORA-2016-f7ef82c1b4 php-swiftmailer 2017-01-09

Comments (none posted)

phpBB: two vulnerabilities

Package(s): phpBB    CVE #(s): CVE-2015-1431 CVE-2015-1432
Created: January 11, 2017    Updated: January 11, 2017
Description: From the CVE entries:

Cross-site scripting (XSS) vulnerability in includes/startup.php in phpBB before 3.0.13 allows remote attackers to inject arbitrary web script or HTML via vectors related to "Relative Path Overwrite." (CVE-2015-1431)

The message_options function in includes/ucp/ucp_pm_options.php in phpBB before 3.0.13 does not properly validate the form key, which allows remote attackers to conduct CSRF attacks and change the full folder setting via unspecified vectors. (CVE-2015-1432)

Alerts:
Gentoo 201701-25 phpBB 2017-01-11

Comments (none posted)

phpmyadmin: two vulnerabilities

Package(s): phpmyadmin    CVE #(s): CVE-2016-9862 CVE-2016-9863
Created: January 11, 2017    Updated: January 11, 2017
Description: From the CVE entries:

An issue was discovered in phpMyAdmin. With a crafted login request it is possible to inject BBCode in the login page. All 4.6.x versions (prior to 4.6.5) are affected. (CVE-2016-9862)

An issue was discovered in phpMyAdmin. With a very large request to table partitioning function, it is possible to invoke a Denial of Service (DoS) attack. All 4.6.x versions (prior to 4.6.5) are affected. (CVE-2016-9863)

Alerts:
Gentoo 201701-32 phpmyadmin 2017-01-11

Comments (none posted)

puppet-tripleo: access restriction bypass

Package(s): puppet-tripleo    CVE #(s): CVE-2016-9599
Created: January 6, 2017    Updated: January 11, 2017
Description: From the Red Hat advisory:

An access-control flaw was discovered in puppet-tripleo's IPtables rules management, which allowed the creation of TCP/UDP rules with empty port values. Some API services in Red Hat OpenStack Platform director are not exposed to public networks, which meant their $public_ssl_port value was set to empty (for example, openstack-glance, which is deployed by default on both undercloud and overcloud). If SSL was enabled, a malicious user could use these open ports to gain access to unauthorized resources. (CVE-2016-9599)

Alerts:
Red Hat RHSA-2017:0025-01 puppet-tripleo 2017-01-05

Comments (none posted)

sway: unspecified

Package(s): sway    CVE #(s): (none)
Created: January 9, 2017    Updated: January 11, 2017
Description: From the Sway 0.11 release announcement:

This release includes 139 changes from 12 authors. The biggest feature 0.11 offers is the first steps towards the goal of a secure Wayland desktop by adding new knobs to secure your sway installation - read sway-security(7) for details. These are only the first steps towards a secure sway, and no promises are made about how well it works. Please test it and look for ways to break it and provide feedback on your experiences.

Alerts:
Fedora FEDORA-2016-c6ae9b6cf8 sway 2017-01-07
Fedora FEDORA-2016-12c39f958b sway 2017-01-07

Comments (none posted)

syncthing: two vulnerabilities

Package(s): syncthing, syncthing-gtk    CVE #(s): (none)
Created: January 9, 2017    Updated: January 11, 2017
Description: From the syncthing 0.14.14 release announcement:

Two distinct security vulnerabilities have been corrected in this release. Either would let a remote attacker, controlling a device that is already accepted by Syncthing, perform arbitrary reads and writes to files outside the configured folders.

The first issue is that path validation was lacking in several places, resulting in Syncthing accepting index entries for files like "../../foo", thus resulting in a path above the configured folder.

The second issue is that where path validation was correct, symlinks could be used to trick Syncthing. An attacker could create a symlink "foo -> ../../" and then request the contents of "foo/something", again escaping the constraints of the folder.

Alerts:
openSUSE openSUSE-SU-2017:0043-1 syncthing, syncthing-gtk 2017-01-08
openSUSE openSUSE-SU-2017:0045-1 syncthing, syncthing-gtk 2017-01-08

Comments (none posted)

tinymce: cross-site scripting

Package(s): tinymce    CVE #(s): (none)
Created: January 6, 2017    Updated: January 11, 2017
Description: From the Red Hat bugzilla entry:

XSS issue was found in media plugin that did not properly filter out some script attributes.

Alerts:
Fedora FEDORA-2016-8d8d7d6d47 tinymce 2017-01-06

Comments (none posted)

tomcat: information disclosure

Package(s): tomcat    CVE #(s): CVE-2016-8745
Created: January 9, 2017    Updated: February 20, 2017
Description: From the Debian advisory:

It was discovered that incorrect error handling in the NIO HTTP connector of the Tomcat servlet and JSP engine could result in information disclosure.

Alerts:
Mageia MGASA-2017-0050 tomcat 2017-02-18
Ubuntu USN-3177-2 tomcat 2017-02-02
Ubuntu USN-3177-1 tomcat6, tomcat7, tomcat8 2017-01-23
Debian-LTS DLA-779-1 tomcat7 2017-01-11
Debian DSA-3755-1 tomcat8 2017-01-08
Debian DSA-3754-1 tomcat7 2017-01-08

Comments (none posted)

unrtf: code execution

Package(s): unrtf    CVE #(s): CVE-2016-10091
Created: January 6, 2017    Updated: January 11, 2017
Description: From the Mageia advisory:

A Stack-based buffer overflow has been found in unrtf 0.21.9, which affects functions including cmd_expand, cmd_emboss and cmd_engrave (CVE-2016-10091).

Alerts:
Mageia MGASA-2017-0007 unrtf 2017-01-06

Comments (none posted)

webkit2gtk: multiple vulnerabilities

Package(s): webkit2gtk    CVE #(s): CVE-2016-4613 CVE-2016-4657 CVE-2016-4666 CVE-2016-4707 CVE-2016-4728 CVE-2016-4733 CVE-2016-4734 CVE-2016-4735 CVE-2016-4759 CVE-2016-4760 CVE-2016-4761 CVE-2016-4762 CVE-2016-4764 CVE-2016-4765 CVE-2016-4767 CVE-2016-4768 CVE-2016-4769 CVE-2016-7578
Created: January 11, 2017    Updated: January 11, 2017
Description: From the Ubuntu advisory:

A large number of security issues were discovered in the WebKitGTK+ Web and JavaScript engines. If a user were tricked into viewing a malicious website, a remote attacker could exploit a variety of issues related to web browser security, including cross-site scripting attacks, denial of service attacks, and arbitrary code execution.

Alerts:
Ubuntu USN-3166-1 webkit2gtk 2017-01-10

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.10-rc3, released on January 8. Linus said: "It still feels a bit smaller than a usual rc3, but for the first real rc after the merge window (ie I'd compare it to a regular rc2), it's fairly normal."

Stable updates: 4.9.1, 4.8.16, and 4.4.40 were released on January 6, followed by 4.9.2, 4.8.17, and 4.4.41 on January 9. Note that 4.8.17 is the final 4.8.x update.

The (large) 4.9.3 and 4.4.42 updates are in the review process as of this writing; they can be expected on or after January 12.

Comments (none posted)

Quotes of the week

Every time KASLR makes something work differently, a kitten turns all Schrödinger on us.
Andy Lutomirski

For me, mm/ is a blank spot on the map marked "Hic sunt dracones." While that's better than "Lasciate ogne speranza, voi ch'intrate", it's still very intimidating.
— "George Spelvin"

Comments (none posted)

Kernel development news

Last-minute control-group BPF ABI concerns

By Jonathan Corbet
January 11, 2017
One of the features pulled into the mainline during the 4.10 merge window is the ability to attach a BPF program to a control group; that program can then filter packets received or transmitted by processes within the control group. The feature itself is relatively uncontroversial (though some would prefer a different implementation). Until recently, the feature's interface and semantics were also uncontroversial — or at least not closely examined. Since the feature was merged, however, some concerns have been raised. The development community will have to decide whether changes need to be made, or the feature temporarily disabled, before the 4.10 release sets the interface in stone.

The conversation was started by Andy Lutomirski, who played with the new capability for a while and found a few things that worried him. The first of these is that the bpf() system call is used to attach the program to the control group. This is, he thinks, fundamentally a control-group operation, not a BPF operation, so it should be handled through the control-group interface. If, in the future, somebody adds the ability to impose other types of controls — controls that don't involve BPF programs — then the use of bpf() would make no sense. And, in any case, he said, bpf() is an increasingly unwieldy multiplexer system call.
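
For reference, attaching a filter with the interface as merged for 4.10 looks roughly like the following sketch, where cgroup_fd and prog_fd are assumed to be supplied by the caller: an open descriptor for the cgroup directory and a loaded BPF_PROG_TYPE_CGROUP_SKB program, respectively:

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* A minimal sketch; error handling is omitted. */
    static int attach_cgroup_filter(int cgroup_fd, int prog_fd)
    {
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.target_fd = cgroup_fd;    /* the cgroup being filtered */
        attr.attach_bpf_fd = prog_fd;  /* the loaded filter program */
        attr.attach_type = BPF_CGROUP_INET_INGRESS;

        return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
    }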

This objection didn't get far; there does not seem to be a large contingent of developers interested in adding other packet-filtering mechanisms to control groups. BPF developer Alexei Starovoitov dismissed the idea, suggesting that any other mechanism could be just as easily implemented in BPF. Networking maintainer David Miller agreed with Starovoitov on this issue, so it seems that little is likely to change on this point.

The next issue runs a little deeper. Control groups are hierarchical in nature and, with version 2 of the control-group interface, all controllers are expected to behave in a fully hierarchical manner. The BPF filter mechanism is not a proper controller (a bit of an interface oddity in its own right), but its behavior in control-group hierarchies is still of interest. Controller policies are normally composed as one moves down the hierarchy. For example, if a control group is configured with the CPU controller to have 10% of the available CPU time, and a sub-group of that group is then configured to get 50%, the sub-group will end up with 50% of the parent's 10%, or 5% in absolute terms.

If a process is running in a two-level control group hierarchy, where both levels have filter programs attached, one might think that both filters would be run — that the restrictions imposed by those filters would be additive. But that is not what happens; instead, only the filter program at the lowest level is run, while those at higher levels are ignored. The upper level filter might prohibit certain kinds of traffic, but the mere existence of a lower-level filter overrides that prohibition. In a setting where one administrator is setting filters at all levels, these semantics might not be a problem. But if one wants to set up a system with containers and user namespaces, where containers can add filter programs of their own, this behavior would allow the system-level policy to be circumvented.

Starovoitov acknowledged that, at a minimum, there might be a use case for composing all the filters in a given hierarchy. But he also asserted that "the current semantics is fine for what it's designed for" and said that different behavior can be implemented in the future. The problem with that approach is that changing the semantics would be a significant ABI change that could easily break systems that were designed around the 4.10 semantics; such a change would not be allowed. In the absence of a plan for how the new semantics could be added in a compatible way, it has to be assumed that, if 4.10 is released with the current behavior, nobody will be able to change it going forward.

Other developers (Peter Zijlstra and Michal Hocko) have expressed concerns about this behavior as well. Zijlstra asked control-group maintainer Tejun Heo for his thoughts on the matter, but no such thoughts have been forthcoming as of this writing. Starovoitov seems convinced that the current semantics are not problematic, and that they can be changed in some (unspecified) way without breaking compatibility in the future.

Lutomirski's final worry is a bit more nebulous. Until now, control groups have been concerned with resource control; the addition of BPF filters changes the game. These programs could be another way for an attacker to run hostile code; they could, for example, interfere with the input to a setUID program, leading to potential privilege escalation issues. The programs could also stash useful information where an attacker could find it.

This sounds a lot like seccomp with a narrower scope but a much stronger ability to exfiltrate private information.

Unfortunately, while seccomp is very, very careful to prevent injection of a privileged victim into a malicious sandbox, the CGROUP_BPF mechanism appears to have no real security model. There is nothing to prevent a program that's in a malicious cgroup from running a setuid binary.

For now, attaching a network filter program is a privileged operation, so exploits are not an immediate concern. But as soon as somebody tries to make it work within user namespaces a whole new can of worms would be opened up. Lutomirski put out a "half-baked proposal" that would prevent the creation of "dangerous" control groups (those that have filter programs attached) unless various conditions were met to prevent privilege escalation issues in the future.

That proposal has not met with a lot of approval. Once again, such restrictions would need to be imposed from the outset to limit the risk of breaking systems in the future; that would imply that this feature would need to be disabled for the 4.10 release. But there seems to be little interest in doing that; while Starovoitov agreed early on that there was work to be done in the security area, he once again said that it could be done at some future point.

That is where the discussion stands, as of this writing. If no action is taken, 4.10 will be released with a new feature despite the existence of concerns about its ABI and security. History has some clear lessons about what can happen when new ABIs are shipped with this kind of unanswered question; indeed, one need not look beyond control groups for examples of the kinds of problems that can be created. Given the probable outcome here, one can only hope that the BPF developers are correct that some way can be found to address the semantic and security issues without creating ABI compatibility problems.

Comments (2 posted)

Bulk memory allocation without a new allocator

By Jonathan Corbet
January 10, 2017
The kernel faces a number of scalability challenges resulting from the increasing data rates that can be handled by peripherals like storage devices and network interfaces. Often, the key to improved throughput is doing work in batches; in many cases, the overhead of performing a series of related operations is not much higher than for performing a single operation. Memory allocation is one place where batching offers the potential for significant performance improvements, but there has, so far, been no agreement on how that batching should be done. A new patch set from Mel Gorman might just show how this problem can be solved.

Network interfaces tend to require a lot of memory; all those incoming packets have to be put somewhere, after all. But the overhead of allocating that memory is high, to the point that it can limit the maximum throughput of the system as a whole. In response, driver developers are resorting to workarounds like allocating (then splitting up) high-order pages, but high-order page allocation can stress the system as a whole and runs counter to normal kernel development practice. It would be good to have a better way.

At the 2016 Linux Storage, Filesystem, and Memory-Management Summit, networking developer Jesper Dangaard Brouer proposed the creation of a new memory allocator designed from the beginning for batch operations. Drivers could use it to allocate many pages in a single call, thus minimizing the per-page overhead. The memory-management developers at this session understood the problem, but disagreed with the idea of creating a new allocator. Doing so, they said, would make the memory-management subsystem less maintainable. Additionally, the new allocator would tend to repeat the mistakes of the existing allocators and, by the time it had all the necessary features, it might not be any faster.

The right solution, from the memory-management perspective, is to modify the existing page allocator, reducing overheads and making it more friendly to multi-page allocations. This has not been done so far for a simple reason: most memory users immediately zero every page they allocate, an operation that is far more expensive than the allocation itself. That zeroing is not necessary for pages that will be overwritten with incoming packet data by a network interface, though, so high-performance networking workloads are more seriously affected by the overhead in the allocator. Fixing that overhead in the existing page allocator would solve the problem for the networking subsystem while avoiding the addition of a new allocator and providing improved performance for all parts of the kernel.

The idea made sense, but only had one shortcoming: nobody had actually done the work to improve the existing page allocator in this way. That situation has changed, though, with the posting of Gorman's bulk page allocator patch set. The patches are relatively small, but the claimed result is a significant improvement in page-allocator performance.

Two fundamental changes are required to support batch allocation; both take the same form. The first of these addresses the function buffered_rmqueue(), which removes a page from a per-CPU free list in preparation for handing it out in response to an allocation request. Since the list is per-CPU, there is no locking required before making changes, but it is necessary to disable interrupts on the relevant CPU to prevent concurrent access from an interrupt handler. Disabling and restoring interrupts takes some significant time, and that time adds up if it must be done repeatedly for each page being allocated.

Gorman's patch set splits up this function in a way that is common in kernel programming. A new function (__rmqueue_pcplist()) removes a page from the list but does not concern itself with disabling interrupts; that is expected to be handled by the caller. A call to rmqueue_pcplist() (without the leading underscores) will disable interrupts and allocate the page in the usual way. But now other code can disable interrupts once, then call __rmqueue_pcplist() multiple times to allocate a whole set of pages.
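
A batch allocation built on that split might then look like the following sketch; take_pages() and the argument lists are illustrative, not the exact code from the patch set:

    /* Illustrative only: pay the interrupt-disable cost once, then
       pull several pages off the per-CPU free list. */
    static unsigned int take_pages(struct zone *zone, int migratetype,
                                   struct per_cpu_pages *pcp,
                                   struct list_head *pcp_list,
                                   unsigned int nr_pages,
                                   struct list_head *page_list)
    {
        unsigned long flags;
        unsigned int i;
        struct page *page;

        local_irq_save(flags);
        for (i = 0; i < nr_pages; i++) {
            page = __rmqueue_pcplist(zone, migratetype, pcp, pcp_list);
            if (!page)
                break;
            list_add_tail(&page->lru, page_list);
        }
        local_irq_restore(flags);
        return i;               /* number of pages actually obtained */
    }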

Similarly, __alloc_pages_nodemask() spends a fair amount of time figuring out which zone of memory should be used to satisfy a given request, then returns a page. Here, too, those two operations can be split apart, so that the zone calculation can be reused for multiple page allocations rather than being performed anew for each page.

With these two changes in place, Gorman's patch set can add a new allocation function:

    unsigned long alloc_pages_bulk(gfp_t gfp_mask, unsigned int order,
				   unsigned long nr_pages, struct list_head *list);

This function will attempt to allocate nr_pages pages in an efficient manner, storing them in the given list. The order argument suggests that any size of allocation can be done in bulk but, in the current patch, any order other than zero (single pages) will result in a failure return.
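
A caller, such as a network driver refilling its receive ring, might use the interface along these lines. This is a sketch under stated assumptions: RX_BATCH is an arbitrary constant, and the allocated pages are assumed to be linked through page->lru:

    /* A sketch of a possible caller, not code from the patch set. */
    #define RX_BATCH 64

    static void refill_rx_ring(void)
    {
        LIST_HEAD(pages);
        struct page *page, *next;
        unsigned long n;

        n = alloc_pages_bulk(GFP_ATOMIC, 0, RX_BATCH, &pages);
        list_for_each_entry_safe(page, next, &pages, lru) {
            list_del(&page->lru);
            /* ... hand the page to the device's receive ring ... */
        }
        if (n < RX_BATCH) {
            /* the allocator came up short; arrange to retry later */
        }
    }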

The result of using this interface, he says, is a "roughly 50-60% reduction in the cost of allocating pages". That should help the networking developers in their quest to improve packet throughput rates. They will find that some assembly is required, though; Gorman went as far as to show that the memory-allocator overhead can be reduced, but stopped short of creating an API with all of the features that those developers need. His plan is to merge the preparatory patches without the alloc_pages_bulk() API with the idea that the actual bulk-allocation API should be designed by the developers who need it. Thus, once these changes find their way into the mainline, it will be up to the networking crew to do something useful with them.

Comments (2 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.10-rc3 Jan 08
Greg KH Linux 4.9.2 Jan 09
Greg KH Linux 4.9.1 Jan 06
Greg KH Linux 4.8.17 Jan 09
Greg KH Linux 4.8.16 Jan 06
Greg KH Linux 4.4.41 Jan 09
Greg KH Linux 4.4.40 Jan 06

Architecture-specific

Core kernel code

David Carrillo-Cisneros optimize ctx switch with rb-tree Jan 10
Davidlohr Bueso sched: Introduce rcuwait Jan 11

Device drivers

Device driver infrastructure

Heikki Krogerus USB Type-C Connector class Jan 05
Rob Herring Serial slave device bus Jan 06

Memory management

Networking

Security-related

Jason A. Donenfeld Introduce The SipHash PRF Jan 07

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Rethinking Fedora multilib support

By Jake Edge
January 11, 2017

The Fedora Modularity effort is bringing changes to the distribution, particularly in order to build modules that will each encompass a "unit of functionality", such as a web server. An ongoing discussion on the fedora-devel mailing list is looking at the pros and cons of changing the distribution's longstanding multilib mechanism, which is what allows 32-bit and 64-bit libraries to coexist on the system. An initial proposal to use containers as the new mechanism was shot down quickly, but other possibilities are being discussed. Where it will all lead is anyone's guess, but as noted in our December look at the possibility of annual Fedora releases, the project is clearly considering and discussing fairly massive changes going forward.

The proposals have come from Stephen Gallagher, who posted the first, container-based idea on January 5. In it, he suggested that, instead of separating libraries into /usr/lib for 32-bit libraries and /usr/lib64 for 64-bit ones, there should be a shared 32-bit container runtime used to run 32-bit programs on 64-bit systems. That idea swiftly ran aground.

Gallagher outlined some advantages and disadvantages of the approach, but complaints were immediately heard about programs like Wine, Steam, and Skype that require 32-bit OS support and are not likely to be containerized anytime soon. Beyond that, though, it would fundamentally change the way 32-bit applications are built on top of Fedora. Instead of using the -m32 GCC flag and the relevant 32-bit libraries in /usr/lib, some kind of "special dance to enter a container environment" would have to be done, as Tom Hughes put it. In the end, there are no real user benefits, Ben Rosser said:

Speaking from an end-user perspective, I actually really like the way multilib on Fedora is currently implemented. All I need to do to get a 32-bit application-- be it some Windows application under wine, some proprietary application like Steam, etc.-- to work is to install the 32-bit packages via yum/dnf, and then things Just Work.

I understand that from a building-the-distribution perspective the way this is done currently is kind of a hack, but I can't help but notice that the *only* benefits to this proposal would be that it makes building the distribution easier. There are no proposed benefits for our users beyond breaking the way things currently work with probably no upgrade path. And whether we like it or not, users, myself included, install nonfree software like Steam on systems and generally expect it to continue working from release to release.

The fast reactions in the thread led Gallagher to put out a second proposal roughly six hours after the first. He summarized the objections raised to the first proposal and listed two alternatives that had been proposed in the thread. The first would adopt the Debian Multiarch mechanism, which uses a /usr/lib/$ARCH-linux-gnu directory scheme. One advantage to that might be the emergence of a de facto standard between distributions. The other suggestion from the thread was to default installations to a single architecture (i.e. 32 or 64 bit) and only install libraries for that, but to allow additional architectures to be enabled in the DNF package manager for those users that need them.

As Gallagher noted, the two are not incompatible and, in fact, "their combination may indeed prove to be a superior solution to the one I initially came up with and suggested". He then went on to point out some problems that the transition would engender, but called them "surmountable". Moving the libraries would likely require leaving some symbolic links behind for binaries that expect to find them in /usr/lib[64]. RPM specification files may need to be adjusted so that the wrong versions of dependencies don't get installed during times when the i686 and x86_64 mirrors are not in sync. Also:

Switching to this layout might give a false (or possibly accurate, in some cases) impression that one could expect Debian/Ubuntu packages to function "out of the box" on Fedora (if using something like Alien). Education is key here.

There were complaints that the Debian library directory structure does not follow the Filesystem Hierarchy Standard (FHS). But Gallagher seemed unconcerned about strictly following the FHS: "we try to stay as close as possible to it, but if it doesn't meet our needs, we'll work around it". Hughes thinks the Debian organization is clearly an improvement, but is not so sure about making a switch:

If we were starting now to support multilib then I would certainly suggest that the Debian design is the better one but whether it's enough of an improvement to merit the pain of changing is a rather different question.

My reasons for thinking it's better are much the same as what other people have already said - that it treats all arches as equals and scales readily to whatever is needed rather than just bolting on a single 32/64 bit split as a kind of special case.

But Bill Nottingham is concerned that the change is being motivated only by build problems for Fedora and may not be keeping users firmly in mind:

While I fully understand how our current multilib system is a mess for the build and release process (being in certain respects responsible), I'm leery of using that to make drastic changes.

The whole point of building an OS/module/etc for users is to keep the complexity on the build side and out of the users hands - they don't care whether half the packages switched from autoconf to meson, whether twenty things are now written in Rust, or whether the entire python stack jumped minor (or major!) versions. They just want the system to upgrade and the software they use to keep working.

While it is true that build problems are motivating Gallagher to look at the multilib support, he definitely does not want to leave users behind:

As Bill pointed out, things "just work" for users right now and that's something we'd like to avoid breaking. However, that does *not* mean that it is trivial to do on the build side. We're currently building out an entirely new infrastructure to support modules; we'd like to take a look at what we did the first time and see if (with more experience and hindsight) we can do a better job now, and ideally one we can share between the two approaches.

There is still opposition to the whole Modularity idea, however, especially from Kevin Kofler. Most of those participating in the thread seem to be on board with the plan, but Kofler, as he often does, sees things differently:

What was never discussed was whether modules are something worth rebuilding "an entirely new infrastructure" to begin with. I disagree that they are even a desirable feature to begin with, they just fragment and thus dilute the Fedora platform, and have the potential to seriously hurt integration across the distribution and increase code duplication and its resulting bloat.

As part of the discussion, Langdon White pointed out that, for example, there is no real need for KDE and httpd to be tightly integrated, but that the current Fedora model forces the two to share libraries. Florian Weimer expanded on that:

Apache httpd and KDE are very interesting examples. Both KDE and Apache httpd integrate with Subversion, on two levels: KDE has Subversion client support, Apache httpd has server support. And Subversion is implemented using apr (the Apache Portable Runtime library).

So unless we start building Subversion twice, once for use with Apache httpd, and once for use within KDE, modules containing KDE and Apache httpd will have to agree on the same version of Subversion and the same version of apr.

As Fedora project leader Matthew Miller said, that is an example of where the distribution has hobbled itself "in our well-meaning attempt to integrate everything". There are other ways to handle those kinds of problems in today's Fedora (using multiple libraries with version numbers as part of the name, as Weimer noted), but the Modularity effort will provide an easier way to do that.

The conversation is still ongoing as of this writing and no real conclusions have been drawn. The Fedora project, and its leader in particular, are looking toward a future where distributions do their jobs in a different way than they do today. It is not so much that the role a distribution project plays is changing as that the way it goes about that role is. As Miller put it: "It is entirely about how we can better deliver the universe of free and open source software." That has always been a distribution's job, but how best to do that job looks different these days, and Fedora is doing its best to keep up.

Comments (14 posted)

Brief items

Distribution quotes of the week

Ultimately Gentoo is working together to produce a product. If it is a product that you want to use, then use it. If it is a product that you don't want to use, please contact us for a full refund, and consider contributing to somebody who makes a product that you do want to use. That's why we have 4000 linux distributions and not one. There is no rule saying we can have only one source-based distro either, we certainly don't have just one binary distribution. Of late, I'm not convinced that a lot of newer Gentoo users even care that it is source-based. There is also no rule saying that we can only have one distro that doesn't require running systemd... :)
Rich Freeman

A few months after joining, someone figured out that [for] pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;)
Riku Voipio

It'd be nice to have the newest of the newest of everything in a Debian stable release. That seems to be incompatible with actually making a stable release.
Lars Wirzenius

I knew it as soon as I crowned Fedora 25 the best distro of 2016—I was going to hear about it from Linux Mint fans.

How could I proclaim the best distro of the year before the latest version of Mint arrived? There's nothing like some guy on the Internet overlooking your favorite distro to make the hairs in your neckbeard start twitching angrily [/sarcasm]. I understand, it happens to me every time someone fails to recognize that Arch is the best distro of every year.

Scott Gilbertson (reviews Linux Mint 18.1)

Comments (none posted)

Distribution News

Debian GNU/Linux

Delegation for the DebConf Committee

Debian project leader Mehdi Dogguy has announced the creation of the DebConf Committee as an official team of the Debian project. This committee will make final decisions about who will organize DebConf, take long-term responsibility for DebConf, and advise the DPL on decisions that are not delegated.

Full Story (comments: none)

Release update: Soft freeze for stretch

Debian stretch is now in soft freeze. No new source packages will enter stretch now. The full freeze is scheduled for February 5.

Full Story (comments: none)

Fedora

Fedora 2017 January Elections: Interviews

Fedora elections are open for voting until January 19. The Fedora Community Blog has pointers to interviews with the candidates.

Comments (none posted)

Red Hat Enterprise Linux

Red Hat Enterprise Linux 6.9 beta

Red Hat has released a beta for version 6.9 of its Enterprise Linux distribution. "While prioritizing ongoing stability and security features for critical platform deployments, Red Hat Enterprise Linux 6.9 Beta also supports the next generation of cloud-native applications through an updated Red Hat Enterprise Linux 6 base image. The Red Hat Enterprise Linux 6.9 Beta base image enables customers to migrate their existing Red Hat Enterprise Linux 6 workloads into container-based applications - suitable for deployment on Red Hat Enterprise Linux 7, Red Hat Enterprise Linux Atomic Host, and Red Hat OpenShift Container Platform."

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

The Best Linux Distros for 2017 (Linux.com)

Jack Wallen picks his favorite distributions for different tasks. Parrot Linux for sysadmins, LXLE for a light-weight distribution, Elementary OS for the desktop, and more are covered. "[Best distribution for those with something to prove] is a category specific to those who want to show their prowess with the Linux operating system. This is for those who know Linux better than most and want a distribution built specifically to their needs. When this flavor of Linux is desired, there is only one release that comes to mind...Gentoo."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Python 2.8?

By Jake Edge
January 11, 2017

The appearance of a "Python 2.8" got the attention of the Python core developers in early December. It is based on Python 2.7, with features backported from Python 3.x. In general, there was little support for the effort—core developers clearly see Python 3 as the way forward—but no outright opposition to it either. The Python license makes it clear that these kinds of efforts are legal and even encouraged; the real objection to the project lies in its name.

Larry Hastings alerted the python-dev mailing list about the Python 2.8 project (which has since been renamed to "Placeholder" until another name can be found). It is a fork of Python 2.7.12 with features like function annotations, yield from, async/await, and the matrix multiplication operator ported from Python 3. It is meant to be a drop-in replacement for Python 2.7, so it won't have features that are incompatible with it. It is aimed at those who are not ready (or willing) to make the jump to Python 3, but want some of the features from it.
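
For those who have not followed the Python 3 changes closely, here is a brief sketch of a few of the backported features as they look in standard Python 3.5 or later (which is, presumably, how they would look under the fork as well):

    # Function annotations (PEP 3107) and 'yield from' (PEP 380):
    def chain(*iterables) -> list:
        """Concatenate iterables by delegating to each one in turn."""
        def gen():
            for it in iterables:
                yield from it
        return list(gen())

    print(chain([1, 2], (3, 4)))    # [1, 2, 3, 4]

    # The matrix-multiplication operator (PEP 465) dispatches to __matmul__:
    class Vec:
        def __init__(self, xs):
            self.xs = xs
        def __matmul__(self, other):
            # v @ w computes the dot product in this toy class
            return sum(a * b for a, b in zip(self.xs, other.xs))

    print(Vec([1, 2, 3]) @ Vec([4, 5, 6]))    # 32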

The name "Python 2.8" implies a level of support, though; it also uses a name (Python) that is trademarked by the Python Software Foundation. Steven D'Aprano recalled discussions at the time of the decision to stop Python 2.x development at 2.7:

I seem to recall that when we discussed the future of Python 2.x, and the decision that 2.7 would be the final version and there would be no 2.8, we reached a consensus that if anyone did backport Python 3 features to a Python 2 fork, they should not call it Python 2.8 as that could mislead people into thinking it was officially supported.

He and others called for the project to be renamed. An issue was filed for the project suggesting a rename. As it turns out, the owner of the project, Naftali Harris, is amenable to the change, which simplifies things greatly. Had that not been the case, though, it is not entirely clear that the PSF Trademark Usage Policy precludes using the name "Python" that way.

David Mertz, who is a member of the PSF Trademarks committee, believes that "Python 2.8" would be a misuse of the trademark and referred it to the committee. Terry Reedy agreed, saying that the project was a "derived work" and that clause 7 of the Python License does not automatically allow the use of the PSF trademarks.

But Marc-Andre Lemburg noted that the trademark policy is seemingly written to allow for uses like this. The policy says:

[...] stating accurately that software is written in the Python programming language, that it is compatible with the Python programming language, or that it contains the Python programming language, is always allowed. In those cases, you may use the word "Python" or the unaltered logos to indicate this, without our prior approval.

He pointed out that the project also fulfilled the license requirements by listing the differences from 2.7.12 as is required in clause 3. But he agreed that a name change should be requested. For his part, Guido van Rossum is not particularly concerned by the existence of the project:

While I think the name is misleading and in violation of PSF policy and/or license, I am not too worried about this. I expect it will be tough to port libraries from Python 3 reliably because it is not true Python 3 (e.g. str/bytes). So then it's just a toy. Who cares about having 'async def' if there's no backport of asyncio?
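
The str/bytes problem Van Rossum refers to is the deepest incompatibility between the two lines: Python 2's str is a byte string, while Python 3 separates text (str) from bytes. A minimal illustration (decode_header here is a made-up helper, not a real API):

    # The same expression behaves differently under the two interpreters:
    #
    #     "caf\xe9" + b"!"    # Python 2: 'caf\xe9!' (both are byte strings)
    #     "caf\xe9" + b"!"    # Python 3: TypeError (str and bytes don't mix)
    #
    # Python 3 library code leans on that separation:
    def decode_header(raw):
        if isinstance(raw, bytes):    # in Python 2, *every* plain str matches
            raw = raw.decode("utf-8")
        return raw.strip()

    # Under Python 2 the isinstance() check matches all plain strings, so
    # non-UTF-8 byte strings that Python 3 would pass through untouched
    # raise UnicodeDecodeError instead; that is the sort of subtle
    # difference that makes mechanical backports of Python 3 libraries hard.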

Mertz, however, is not so sure. The existence of a "Python 2.8" may "serve as a pretext for managers to drag their feet further on migration plans", which will be detrimental to organizations where that happens. PEP 404 (the "Python 2.8 Un-release Schedule") makes it quite clear that the core development team (and, presumably, the PSF) is resolute that there will be no 2.8: "There never will be an official Python 2.8 release. It is an ex-release."

But there are various other projects that have "Python" in their names (IronPython, ActivePython, MicroPython, etc.) as well as projects with names that are suggestive of Python without directly using the name (Jython, PyPy, Cython, Mython, and so on). Where is the line to be drawn? As with all trademark questions, it comes down to user confusion: will users expect that something called "Python 2.8" is officially endorsed and supported by the PSF? The answer would seem clearly to be "yes".

Luckily, everyone is being fairly reasonable—no legal action has been needed or even really considered. The fact that Harris was willing to change the name obviated any need to resort to legal remedies. The GitHub issue thread is full of suggestions for alternate names, replete with Monty Python references—our communities love to bikeshed about names. There are also some snide comments about Python 3 and the like, but overall the thread was constructive.

As far as new names go, an early favorite was "Pythonesque", but calling the binary "pesque" reminded some of the word "pesky", which is not quite what Harris is after (though "pyesque" might work). He renamed the project to "Placeholder" on December 12 "while we find a good permanent name that I like and that works for the PSF". The current leader appears to be Pyvergent (since Mython already exists and one might guess that Harris is not entirely serious about Placeholder). In any case, he said, the decision does not need to be made immediately.

At this point, Placeholder appears to largely be a one-developer project. Its GitHub history starts in October 2016 and some real progress has seemingly been made; quite a few features have been ported from Python 3. The issues list shows some ambitious plans that might make it less of a "toy" than Van Rossum envisioned. If it ends up being popular and attracting more of a community, it could perhaps become a strong player in the Python world.

There is a balance to be struck on trademark policies for free-software projects. As we saw in the Debian-Mozilla trademark conflict, which resulted in the "Iceweasel" browser and was resolved early last year, distributions and others want to be able to make changes to projects while still being able to use the trademarks. As Nick Coghlan pointed out, for Python, Linux distributions are likely pushing the envelope the furthest:

Linux distros probably diverge the furthest out of anyone distributing binaries that are still recognised as a third party build of CPython, such that the Linux system Python releases are more properly called "<distro> Python" rather than just Python. However, distro packaging formats are also generally designed to clearly distinguish between the unmodified upstream source code and any distro-specific patches, so the likelihood of confusion is low (in a legal sense).

It would seem that the PSF might want to tighten its policy slightly such that it retains control over "Python x.y" and similar trademarks, while still allowing the Python name to appear in the names of other related projects (like MicroPython). That way, if legal action is actually needed at some point (which no one wants to see, of course), it will be clear that the intent and the policy line up. Fragmentation is a clear possibility given the "forkable" nature of free-software projects, but it is certainly not unreasonable for the parent project to retain a measure of control to reduce confusion—that is precisely what trademarks are for.

Comments (65 posted)

Brief items

Development quotes of the week

How did the Unix directory tree grow into the sprawling ent that it is today? It happened mostly through incremental change, not by design. You could almost say it happened organically.
Lars Wirzenius

Will Stallman ever say the HiFive 1 is Free as in speech? Absolutely not. Instead, the HiFive 1 is an incrementally more Free microcontroller compared to a PIC, ARM, or AVR. There will be people who will argue – over the Internet, using late-model Intel processors with Management Engines — this is insufficient to be called Free and Open Source. To them, I will simply link to the Nirvana fallacy and ask them to point me to a microcontroller that is more Free and Open Source. Let’s not cut down the idea of an Open Source microcontroller because it’s not perfect on the first release.
Brian Benchoff (Thanks to Paul Wise)

It's been a long, fun ride, and I'm proud of the PostgreSQL we have today: both the database, and the community. Thank you for sharing it with me.
Josh Berkus retires from the PostgreSQL core team

Comments (none posted)

digiKam 5.4.0 is released

The digiKam team has announced the release of version 5.4.0 of the digiKam Software Collection, a photo editing system. "This version introduces several improvements to the similarity search engine and a complete re-write of video file support." Under the hood, digiKam has been fully ported to the QtAV framework to handle video and audio files.

Comments (none posted)

Synfig 1.2.0 released

Synfig Studio 1.2.0, a 2D animation system, has been released. This version features a completely rewritten render engine and new lipsync features, along with many improvements and bugfixes.

Comments (19 posted)

Newsletters and articles

Development newsletters

Comments (none posted)

My WATCH runs GNU/Linux And It Is Amazing (LearntEmail)

The LearntEmail blog has a look at running AsteroidOS on the LG Watch Urbane smartwatch. "It looks like a watch, it smells like a watch, but it runs like a normal computer. Wayland, systemd, polkit, dbus and friends look very friendly to hacking. Even Qt is better than android, but that's debatable. My next project - run Gtk+ on the watch :)" (Thanks to Paul Wise.)

Comments (16 posted)

Page editor: Rebecca Sobol

Announcements

Brief items

Goodbye to GNU Libreboot

Richard Stallman has announced that the Libreboot project is no longer a GNU project. "A few months ago, the maintainer of GNU Libreboot decided not to work on Libreboot for the GNU Project any more. That was her decision to make. She also asserted that Libreboot was no longer a GNU package -- something she could not unilaterally do. The GNU Project had to decide what to do in regard to Libreboot. We have decided to go along with the former GNU maintainer's wishes in this case..."

Full Story (comments: 2)

Tracing Summit 2016 videos

The Tracing Summit 2016 was held in Berlin, Germany on October 12. Videos of the sessions are available on YouTube and may also be accessed from the schedule.

Comments (none posted)

Articles of interest

FSFE Annual Report 2016

The Free Software Foundation Europe presents its annual report for 2016. "It has been a busy year for the FSFE. Upholding the principles of Free Software and protecting citizens from being exploited are ongoing challenges we tackled from a variety of angles. We (and by "we", we mean the staff and volunteers at the FSFE) pored over hundreds of pages of policies and legislations, looking for loopholes through which Free Software could be attacked."

Full Story (comments: none)

Calls for Presentations

Vault CFP deadline approaching

The Vault Storage and Filesystems conference will be held March 22 and 23 in Cambridge, MA, USA, immediately after the Linux Storage, Filesystem, and Memory-Management Summit. The call for presentations closes on January 14, and the conference organizers would really like to get a few more proposals in before then. Developers interested in speaking at a technical Linux event are encouraged to sign up.

(Also, don't forget the LWN CFP deadlines calendar, which is a good way to stay on top of conference proposal deadlines.)

Comments (none posted)

<Programming> 2017: Call for workshop, symposium & poster submissions

<Programming> 2017 (The Art, Science, and Engineering of Programming) will take place April 3-6 in Brussels, Belgium. There will be 10 co-located events at the 2017 conference. The announcement contains information about the call for participation for these co-located events.

Full Story (comments: none)

CFP Deadlines: January 12, 2017 to March 13, 2017

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline      Event Dates            Event                                                         Location
January 13    May 22 - May 24        Container Camp AU                                             Sydney, Australia
January 14    March 22 - March 23    Vault                                                         Cambridge, MA, USA
January 20    March 17 - March 19    FOSS Asia                                                     Singapore, Singapore
January 31    May 16 - May 18        Open Source Data Center Conference 2017                       Berlin, Germany
February 6    May 8 - May 11         OpenStack Summit                                              Boston, MA, USA
February 12   June 9 - June 10       Hong Kong Open Source Conference 2017                         Hong Kong, Hong Kong
February 18   March 18               Open Source Days Copenhagen                                   Copenhagen, Denmark
February 24   June 26 - June 29      Postgres Vision                                               Boston, MA, USA
February 26   April 3 - April 4      Power Management and Scheduling in the Linux Kernel Summit   Pisa, Italy
February 27   April 6 - April 8      Netdev 2.1                                                    Montreal, Canada
February 28   May 18 - May 20        Linux Audio Conference                                        Saint-Etienne, France
February 28   May 2 - May 4          samba eXPerience 2017                                         Goettingen, Germany
March 1       May 6 - May 7          LinuxFest Northwest                                           Bellingham, WA, USA
March 4       May 31 - June 2        Open Source Summit Japan                                      Tokyo, Japan
March 6       June 18 - June 23      The Perl Conference                                           Washington, DC, USA
March 7       August 23 - August 25  JupyterCon                                                    New York, NY, USA
March 12      April 26               foss-north                                                    Gothenburg, Sweden

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Embedded Linux Conf + OpenIoT Summit Agenda Announced

The Linux Foundation has announced the program for Embedded Linux Conference + OpenIoT Summit NA, which takes place February 21-23 in Portland, OR. Keynote speakers include Linus Torvalds (in discussion with Dirk Hohndel), robotics expert Guy Hoffman, and Intel's Imad Sousou.

Full Story (comments: none)

SCALE 15X: Registration open for SCALE

Registration is open for the Southern California Linux Expo or SCALE 15X, held March 2-5 in Pasadena, CA.

Full Story (comments: none)

Events: January 12, 2017 to March 13, 2017

The following event listing is taken from the LWN.net Calendar.

Date(s)                    Event                                   Location
January 16                 Linux.Conf.Au 2017 Sysadmin Miniconf    Hobart, Tas, Australia
January 16 - January 17    LCA Kernel Miniconf                     Hobart, Australia
January 16 - January 20    linux.conf.au 2017                      Hobart, Australia
January 18 - January 19    WikiToLearnConf India                   Jaipur, Rajasthan, India
January 27 - January 29    DevConf.cz 2017                         Brno, Czech Republic
February 2 - February 3    Git Merge 2017                          Brussels, Belgium
February 4 - February 5    FOSDEM 2017                             Brussels, Belgium
February 7 - February 9    AnacondaCON                             Austin, TX, USA
February 14 - February 16  Open Source Leadership Summit           Lake Tahoe, CA, USA
February 15 - February 16  Prague PostgreSQL Developer Day 2017    Prague, Czech Republic
February 17                Swiss Python Summit                     Rapperswil, Switzerland
February 18 - February 19  PyCaribbean                             Bayamón, Puerto Rico, USA
February 20 - February 24  OpenStack Project Teams Gathering       Atlanta, GA, USA
February 21 - February 23  Embedded Linux Conference               Portland, OR, USA
February 21 - February 23  OpenIoT Summit                          Portland, OR, USA
March 2 - March 5          Southern California Linux Expo          Pasadena, CA, USA
March 2 - March 3          PGConf India 2017                       Bengaluru, India
March 6 - March 10         Linaro Connect                          Budapest, Hungary
March 7                    Icinga Camp Berlin 2017                 Berlin, Germany
March 10 - March 12        conf.kde.in 2017                        Guwahati, Assam, India
March 11 - March 12        Chemnitzer Linux-Tage                   Chemnitz, Germany

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds