LWN.net Weekly Edition for December 8, 2016
The apparent end of CyanogenMod
The world is full of successful corporations built with free software and, as a general rule, the free-software community has benefited from the existence of those companies. But the intersection of corporate interest and free software can be a risky place where economic interests may be pursued to the detriment of the underlying software and its community. Arguably, that is what has happened with the CyanogenMod project, which is now faced with finding a new home and, probably, a new name.

CyanogenMod is an alternative build of the Android operating system; it got its start in 2009. It quickly grew to become the most popular of the Android "mods", with a substantial community of contributors and support for a wide range of devices. Early users appreciated its extensive configurability, user-interface improvements, lack of dubious vendor add-on software and, occasionally, privacy improvements. For many users, CyanogenMod was a clear upgrade from whatever version of Android originally shipped on their devices.
In 2013, CyanogenMod founder Steve Kondik obtained some venture capital and started Cyanogen Inc. as a vehicle to further develop the CyanogenMod distribution. The initial focus was on providing a better Android to handset manufacturers, who would choose to ship it instead of stock Android from Google. There was a scattering of initial successes, including the OnePlus One handset, but the list of devices shipping with CyanogenMod never did get that long.
Recently there have been signs of trouble at Cyanogen; these include layoffs in July and, most recently, the news that Steve Kondik has left the company. Cyanogen Inc. will now proceed without its founder and, seemingly, with relatively little interest in the CyanogenMod distribution, which, arguably, has lost much of the prominence it once had. Devices running CyanogenMod were once easily found at free-software gatherings; now they are somewhat scarce. Anecdotally, it would seem that far fewer people care about the continued existence of CyanogenMod than did a few years ago.
There are certainly numerous reasons for interest in CyanogenMod to have declined. Perhaps at the top of the list are actions on Google's part. The Android experience has improved, sometimes by borrowing ideas that appeared in CyanogenMod first. We no longer have to install CyanogenMod to gain (some) control over application permissions, block unwanted notifications, or get quick access to flashlight functionality. Meanwhile, Google's licensing rules for Android and, in particular, for the proprietary layers at the top of the stack constitute a strong disincentive for vendors thinking about experimenting with alternatives like CyanogenMod.
But actions at Cyanogen Inc. are also certainly a part of this picture. The pace of development for CyanogenMod appears to have slowed; there is still no release based on Android 7 ("Nougat") available outside of the nightly builds. In general, CyanogenMod releases have been de-emphasized in favor of those nightly builds, a discouraging development for users who are unwilling to risk bricking their devices with a random snapshot. When releases do happen, the headline features show a different set of priorities. The current release, Cyanogen OS 13.1, adds the ability to "view trending Tweets right on your lock screen". This feature is likely to be useful for the US President-elect when his staff changes his lock code again, but it's not likely to be the sort of thing traditional CyanogenMod users were looking for; neither are the new Skype and Cortana integration features.
In other words, Cyanogen Inc. would appear to have left its user base behind in its pursuit of revenue. That change has resulted in tension within the company, no doubt exacerbated by the failure of that revenue to arrive. Now, this process has culminated in a change of control at the company, a new focus on its "MOD" initiative, and Kondik's departure.
Kondik expressed a fair amount of bitterness over the end of Cyanogen Inc. as he had envisioned it. But he is also worried about the future of CyanogenMod as a community. He did not say outright that Cyanogen Inc. would no longer support the project, but he did say that the project has "about two months" to find new servers. So it is not unreasonable to believe that, if CyanogenMod is to continue as an even remotely community-oriented project, it needs to arrange for a new home.
It also most likely needs a new name, since Kondik signed the existing name over to the company he is now estranged from. Name changes for prominent projects are possible, but they are not easy; witness how long LibreOffice has struggled against the OpenOffice name, for example. A rebooted CyanogenMod will have to fight a similar battle while trying to win back the community mindshare it once had.
One can only hope that the project succeeds. Android has been massively beneficial to the free-software community, but we absolutely need a more open, community-oriented alternative. We need a place where new features and new approaches to privacy can be tried out. We need an alternative for users whose devices are no longer supported by their vendors. We need a mobile distribution that is managed with its users' interest in mind. We need, in other words, a mobile distribution that brings us all of the benefits that we expect from free software.
CyanogenMod was never a perfect community-oriented distribution, but it is still the best we have in this area. Without it, mobile systems, even Android, would be that much more proprietary and that much harder to control. CyanogenMod is an expression of just the sort of freedom we have been working for all these years; let us hope that its next phase of existence will be successful.
OpenID for authentication
I've just registered on "yet another web site", which meant another scramble for a username that's not already in use, another randomly-generated and highly-unmemorable password, and another email address provided for account recovery in the event that it all goes wrong. Along with my flying car and jetpack, I'd rather hoped the 21st century would bring a better way of managing my identity on the internet; in 2005 it did just that with the release of the OpenID standard for distributed authentication. Sadly, OpenID has remained a fringe player in this field.
How it doesn't work now
We've all experienced this registration pain: the multiple permutations of our preferred username, trying to find a variant that we can remember and that no one has already taken, and the generation of yet another nonce password, which our browser will have to remember for us. Need I mention the issues of either re-entering that password for storage on the other browsers we use, or relying on some centralized storage system that is either local to a trusted device whose loss would be a near-disaster, or stored in someone else's cloud, with all the trust issues that involves? Sites also want an email address for recovery. Do I use my regular one, and risk them losing control of it, or even simply selling it to spammers? Or do I generate a one-time address, which my mail server then has to be configured to know about?

In any case, isn't username/password authentication a bit stale? Most banking web sites now require some variant of two-factor authentication (2FA); it's time this became standard practice. But if every site rolls its own 2FA, then my storage-of-random-credentials problem just got even bigger. The right way forward for most web sites is federated authentication: the site delegates the business of ensuring that I'm really me to some generally available third party. This is now starting to happen, but unfortunately the two big players are Google and Facebook, and you don't have to wear a tinfoil hat not to want to give them information about every single web site you ever use.
Things can be better
Fortunately, in 2005, OpenID was created. This is a distributed authentication system that, in its simplest form, ties my identity to a URL of my choice. That URL uses some link tags in the header to nominate my OpenID authentication provider of choice. You can see this at the top of my OpenID identity URL, https://www.teaparty.net/:
<html>
<head>
<link rel="openid.server" href="https://openid.yubico.com/server.php" />
<link rel="openid.delegate" href="https://openid.yubico.com/server.php/idpage?user=vvgdbhcgutti" />
</head>
<body>
Regular web page stuff...
</body>
</html>
The links specify to whom I delegate the business of authenticating me (openid.server), and how I identify myself to them (openid.delegate). Note that these are not set in stone; indeed, they're downright disposable. If my current delegate goes out of business, if I lose trust in its ability to handle this securely and reliably, or if I just forget how I authenticate to it and can't raise its customer service department, thirty seconds with vi (or three minutes with emacs) can put a new delegate in place.
In fact, when one provider did fail, that's exactly what happened: I used Google to find OpenID providers, picked one I liked, registered with it, and changed my delegation links. If I were feeling particularly paranoid and wanted to run my own authentication provider, I could, but even if I choose not to, authentication stays very much in my control because of my ability to re-delegate with the touch of a text editor.
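The delegation links are simple enough that a consumer can extract them with a few lines of parsing. Here is a minimal sketch using Python's standard html.parser module; the class name is mine, not part of any OpenID library, and the embedded page is the example shown above:

```python
from html.parser import HTMLParser

# The identity page from the example above, trimmed to its essentials.
IDENTITY_PAGE = """
<html><head>
<link rel="openid.server" href="https://openid.yubico.com/server.php" />
<link rel="openid.delegate" href="https://openid.yubico.com/server.php/idpage?user=vvgdbhcgutti" />
</head><body>Regular web page stuff...</body></html>
"""

class OpenIDLinkParser(HTMLParser):
    """Collect openid.* <link> tags from an identity page."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        # Self-closing <link ... /> tags also arrive here, via the
        # parser's default handle_startendtag() behavior.
        if tag != "link":
            return
        a = dict(attrs)
        rel = a.get("rel", "")
        if rel.startswith("openid."):
            self.links[rel] = a.get("href")

parser = OpenIDLinkParser()
parser.feed(IDENTITY_PAGE)
print(parser.links["openid.server"])    # → https://openid.yubico.com/server.php
print(parser.links["openid.delegate"])
```

A consumer would fetch the identity URL and feed the response body to such a parser; everything it needs is in those two href values.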
So when I approach an OpenID-capable web site to log in, things unfold as the diagram shows.
- I go to the web site, and tell it I want to log in as my identity URL, https://www.teaparty.net/.
- The web site retrieves https://www.teaparty.net/ from my webserver and parses those tags.
- The web site contacts openid.yubico.com and POSTs a request to establish a shared secret using Diffie-Hellman key exchange (the protocol calls this "associating").
- The web site sends my web browser off to openid.yubico.com to authenticate, as I specified in the delegate tag, using our pre-agreed method. The web site provides a nonce as part of this exchange.
- Provided that I authenticate to its satisfaction, openid.yubico.com sends my browser back to the web site with a token that is provably from that provider, and which is intended for the web site as part of this particular authentication exchange. This is possible because of the shared secret set up in step three, and the nonce provided in step four.
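The association in step three can be sketched in a few lines. This is a toy model, not the OpenID wire protocol: the modulus is a small illustrative prime rather than the large default the OpenID 2.0 specification defines, and the secret-to-key derivation is simplified:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters, for illustration only; a real OpenID
# association uses a much larger modulus.
P = 2**127 - 1          # a Mersenne prime, big enough to show the math
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1    # private exponent in [1, P-2]
    return priv, pow(G, priv, P)           # (private, public = G^priv mod P)

# Step three: the web site (the "consumer") and the provider each
# generate a key pair and exchange only the public halves.
site_priv, site_pub = keypair()
prov_priv, prov_pub = keypair()

# Each side combines its own private key with the other's public key;
# both computations yield the same shared secret.
site_secret = pow(prov_pub, site_priv, P)
prov_secret = pow(site_pub, prov_priv, P)
assert site_secret == prov_secret

# The shared secret is reduced to a MAC key (the real protocol defines
# a specific byte encoding; str() here is a simplification) with which
# the provider signs the token it returns in step five.
mac_key = hashlib.sha256(str(site_secret).encode()).digest()
print(len(mac_key))  # → 32
```

The point of the association is that an eavesdropper who sees only the two public values cannot compute the MAC key, so the token in step five is verifiable by the web site but not forgeable in transit.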
Doing OpenID well requires a little diligence:
- Ensure that your identity URL is served under HTTPS, with a properly-signed certificate, from a domain secured with DNSSEC. You can't force any given web site to check certificates and DNS signatures, but providing them makes life harder for attackers. It used to be expensive and painful to do these things, but between Let's Encrypt and the fairly widespread availability of DNSSEC, it's no longer either of those.
- Your delegated authentication provider knows which web site is asking you to authenticate through it, and it should provide this information to you in step four. Make sure you check that it matches the web site you think you're authenticating to.
- As you may have guessed from the URLs above, I use one of the slots on my YubiKey for one-time password authentication. Whatever you plump for, use something more robust than simple username/password authentication with your delegated authentication provider.
The OpenID protocol has also had some security issues. These have generally turned out to be implementation-dependent (and fixed by the implementation's authors) or can be sidestepped by the use of SSL/TLS throughout the authentication chain.
How's that working out?
Wikipedia claims that "there are over 1 billion OpenID enabled accounts on the Internet [...] and approximately 1,100,934 sites have integrated OpenID consumer support". Unfortunately, with the honorable exception of ServerFault, few of the sites I find myself logging into are in that 1,100,934. One reason for this does not reflect well on big web sites: when you go through the business of registering with a big site, they can ask you for all sorts of biographical information. You don't have to give it to them, but many people will without thinking about it, and this allows the big sites to target advertising more effectively, and thus make more money. If all they get is a naked URL, they know nothing about you; there is thus a considerable incentive to make you go through the pain of registration. It is true that Google and Facebook logins are becoming more commonly available, but, as Facebook makes clear, it will provide data to any web site that uses Facebook as a federated authentication provider.
OpenID has been extended, with a newer version of the protocol, to OpenID Connect. This new version allows a web site not merely to authenticate the user but also "to obtain basic profile information about the end-user in an interoperable and REST-like manner". One might be forgiven for suspecting this was to help OpenID gain support with the big-web-site brigade by addressing the concerns above, but it need not affect us. The authentication provider cannot provide information it does not have; if you suspect that it's trying to acquire it, you can, again, always run your own authentication provider.
Some web sites shouldn't support OpenID, or indeed any distributed authentication scheme at all, because the actions that an authenticated user can perform are too significant to risk any unnecessary failures in the authentication chain. Bank web sites obviously come into this category, but, arguably, interactions with government (e.g. online tax return filing) do as well. However, excepting those, I would like every web site that uses accounts to support OpenID, and I'd settle for a few more every day. If you're putting up a project site, please consider looking around for an OpenID plugin for your site code, and enabling it.
Recently, Joseph Graham posted to the FSFE Fellowship mailing list, saying he'd created a small FluxBB site, https://www.freesoftwareuk.org.uk/, for free-software enthusiasts to chew the fat, arrange pub meets, and so on. I immediately asked him to add OpenID support, which he did. By his own account, he found a plugin with a little searching; installing it was a matter of downloading a ZIP file, putting it where the forum's software could find it, and enabling it via the forum administrative interface. It is true that the plugin has some shortcomings, and he intends to find a better alternative, or write his own, shortly, but getting off on the right foot was not particularly painful.
If you're of a mind to add it to your WordPress site, there is a plugin that permits authentication via OpenID both for commenters and for login to the main site; installation instructions are linked from the above page, and do not look onerous. Drupal really seems to have gotten the idea; the project notes that "users can create Drupal accounts using their OpenID. This lowers the barrier to registration, which is good for the site, and offers convenience and security to the users". Drupal decided to burn the candle at both ends by supporting OpenID not only as a client, allowing authentication to a Drupal instance via OpenID, but also as a provider, allowing a Drupal installation to act as a delegated authentication provider.
The OpenID project maintains a collection of libraries and developer tools, including Apache and PHP modules to allow sites to permit users to authenticate via OpenID, and implementations of authentication servers to run a delegated authentication provider. With this many options, it's surprisingly simple to use OpenID for authentication, and not much harder to support it. While 2017 is unlikely to bring me a flying car or a jetpack, with a little help from the internet it might yet bring me distributed, decentralized identity management.
Security
Locking down module parameters
Support for UEFI secure boot has been available in most mainstream Linux distributions for several years now, but there are still some wrinkles to work out. Some distributors are concerned about various ways for the root user to alter the kernel in ways that would allow the secure-boot assurances to be circumvented. For years, there have been efforts to "lock down" the kernel so that the known ways of evading secure boot can be disabled by distributions and others. The latest piece of that puzzle is a proposal to annotate kernel module parameters such that some can be disallowed when a secure-boot kernel is running.
Back in November, David Howells posted the latest version of the "kernel lockdown" patches, which he has picked up and expanded from those pushed by Matthew Garrett back in 2012 and 2013. The patch set restricted a lot of different functionality that would allow user space to modify the running kernel image (which, in turn, allows user space to circumvent secure boot). The restrictions disallow things like loading unsigned kernel modules, writing to /dev/mem or /dev/kmem, using kexec_load() to run a different kernel, various ways to directly access the hardware, and so on.
Previous iterations of the patch set have run aground at least partly due to the kexec_load() restrictions, because some kernel developers did not want to see the useful kexec facility completely disabled. An alternative system call (kexec_file_load()) was added that will only allow booting signed kernels, which neatly solved things for both sides. This time, the main objection came from Alan Cox, who thought there was a fundamental piece missing:
Without that at least fixed I don't see the point in merging this. Either we don't do it (which given the level of security the current Linux kernel provides, and also all the golden key messups from elsewhere might be the honest approach), or at least try and do the job right.
Essentially, Cox is arguing that changing certain kernel module parameters before loading a (signed) module is yet another avenue for modifying the kernel image. His objection led Howells to quickly post an RFC patch of sorts that would restrict certain operations in drivers when kernel_is_locked_down() is true. While that was on the right track, Cox said, he would rather see a whitelist-based approach, rather than the blacklist-based one that Howells proposed.
All of that led to the module parameter annotation patch set that Howells posted on December 1. The idea is that all module parameters will be annotated to describe what kind of hardware resource they control (if any). That information can be used in a subsequent patch to restrict which can be used in a locked-down kernel.
The change was made by altering the module_param*() macros, which are helpers for modules that need to take parameters at load time. An argument was added for the "hardware type" and the macros were renamed to module_param_hw*(). As can be seen in the first patch of the series, the types are I/O port, I/O memory address, interrupt number, DMA channel, DMA address, or other. The change made in the second patch demonstrates the idea:
-module_param(mmio_address, ulong, 0);
+module_param_hw(mmio_address, ulong, iomem, 0);
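The policy that a later patch could build on these annotations is easy to model. This sketch is conceptual only, in Python rather than kernel C, with invented names; it shows the whitelist-style decision Cox asked for: a parameter annotated with a hardware type is refused when the kernel is locked down, while ordinary parameters pass through:

```python
# Hardware types mirroring the module_param_hw*() annotations; these
# names, and this Python model itself, are illustrative rather than
# the kernel's actual code.
HW_TYPES = {"ioport", "iomem", "irq", "dma", "dma_addr", "other"}

def param_allowed(hw_type, kernel_locked_down):
    """Whitelist-style policy: hardware-typed parameters are refused
    when the kernel is locked down; ordinary parameters always pass."""
    if hw_type is None:                 # not a hardware parameter
        return True
    if hw_type not in HW_TYPES:
        raise ValueError("unknown hardware type: " + hw_type)
    return not kernel_locked_down

# mmio_address from the diff above is annotated "iomem":
print(param_allowed("iomem", kernel_locked_down=False))   # → True
print(param_allowed("iomem", kernel_locked_down=True))    # → False
print(param_allowed(None, kernel_locked_down=True))       # → True
```

Keeping the annotations separate from the policy is what lets the decision be made later, and differently, by each distribution.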
The other 37 patches in the series annotate various module parameters throughout the tree (mostly, of course, in drivers/). But Greg Kroah-Hartman was not particularly impressed ("ick ick ick") with the idea. He suggested that the secure boot patch set (i.e. kernel lockdown) was not going anywhere, so there was no need for the annotations. Furthermore, he was skeptical that stopping root users from setting these module parameters was really going to help stop secure-boot abuses.

Garrett noted, however, that the patch set is currently being carried by "basically every single mainstream Linux distribution". This costs the distributions time and effort to rebase the patches onto newer kernels. Beyond that, at least some of the module parameters can be used to route around secure boot.
Given that distributions ship the lockdown patches, and that Cox has said that some way to disable module parameters should be part of that, Garrett argued that the annotations should be merged unless there were technical objections to the implementation. Kroah-Hartman was not buying that argument, though. Distributions are not shipping the annotations and the annotations patches don't actually disable anything, they just make it possible to do so, he said. He also suggested simply marking all module parameters (or any that touch the hardware) as "bad", rather than trying to pick and choose which were usable for a locked-down kernel.
But the annotations have other uses, Cox said. Locking down raw I/O access, even for systems that are not running under the secure-boot restrictions, is valuable. Right now, certain unrelated capabilities can be used to effectively get the CAP_SYS_RAWIO capability by loading modules with crafted parameters, but the kernel could eliminate that possibility by using the annotations.
Howells also wondered about the seemingly contradictory ordering requirements from Kroah-Hartman and Cox: "for Alan, I have to fix the module parameter hole first; for you, I have to do the secure boot support first". Like Cox, though, Howells thinks the annotations have value in their own right: "However, annotation [of] module parameters to indicate hardware resource configuration seems potentially useful in its own right - and lets the policy be decided later."
There does seem to be something of a rock and a hard place problem here. The kernel lockdown patches are not particularly popular with quite a few kernel developers (including Linus Torvalds, which makes things that much harder), but they are shipping. That would seem to indicate that they belong upstream or, at least, that something implementing that functionality belongs upstream.
The annotations, themselves, are relatively harmless (other than providing a bit of churn) and will allow Cox's module parameter issue to be addressed. That will lead to a more secure kernel, overall, with or without secure boot. Once the relevant maintainers have reviewed the patches (and those reviews are starting to trickle in), it would seem that the patches should be merged (though Torvalds would need to override Kroah-Hartman's NAK). The 4.10 merge window will be opening soon; it's likely too late for the annotation patches to make that cut but, with luck, they could make it for 4.11. The larger (and more invasive in the eyes of some) lockdown patch set would seem to have surmounted the known technical objections at that point; whether that paves the way for those to be merged remains to be seen.
Brief items
Security quotes of the week
BitUnmap: Attacking Android Ashmem (Project Zero blog)
Google's Project Zero blog has a detailed look at exploiting a vulnerability in Android's ashmem shared-memory facility. "The mismatch between the mmap-ed and munmap-ed length provides us with a great exploitation primitive! Specifically, we could supply a short length for the mmap operation and a longer length for the munmap operation - thus resulting in deletion of an arbitrarily large range of virtual memory following our bitmap object. Moreover, there’s no need for the deleted range to contain one continuous memory mapping, since the range supplied in munmap simply ignores unmapped pages. Once we delete a range of memory, we can then attempt to “re-capture” that memory region with controlled data, by causing another allocation in the remote process. By doing so, we can forcibly “free” a data structure and replace its contents with our own chosen data -- effectively forcing a use-after-free condition."
Google's OSS-Fuzz project
The Google security blog announces the OSS-Fuzz project, which performs continuous fuzz testing of free-software project repositories. "OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed."
Bottomley: Using Your TPM as a Secure Key Store
James Bottomley has posted a tutorial on using the trusted platform module to store cryptographic keys. "The main thing that came out of this discussion was that a lot of this stack complexity can be hidden from users and we should concentrate on making the TPM 'just work' for all cryptographic functions where we have parallels in the existing security layers (like the keystore). One of the great advantages of the TPM, instead of messing about with USB pkcs11 tokens, is that it has a file format for TPM keys (I’ll explain this later) which can be used directly in place of standard private key files."
New vulnerabilities
busybox: two vulnerabilities
Package(s): busybox
CVE #(s): CVE-2016-2147 CVE-2016-2148
Created: December 5, 2016
Updated: December 7, 2016

Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in BusyBox. A remote attacker could possibly execute arbitrary code with the privileges of the process, or cause a Denial of Service condition.
calamares: encryption bypass
Package(s): calamares
CVE #(s): (none assigned)
Created: December 2, 2016
Updated: December 7, 2016

Description: From the Fedora advisory:

A security update that fixes Calamares bug CAL-405: https://calamares.io/bugs/browse/CAL-405

When installing with a LUKS-encrypted `/` partition, Calamares was always creating a keyfile to decode `/` and storing it in the initramfs. It did that even with an unencrypted separate `/boot` partition. As a result, the keyfile would be stored in cleartext on the `/boot` partition, and it was possible to unlock the `/` partition without ever entering a passphrase. This completely defeated the security of LUKS. Please note that this only affects manual partitioning. The automatic partitioning never leaves `/boot` unencrypted (and it is, in fact, recommended to also always encrypt `/boot` when doing manual partitioning). This update fixes the `dracutlukscfg` module to not add the keyfile to `install_items` in the `dracut` configuration (so that `dracut` will not include it onto the initramfs) if `/boot` is separate and unencrypted.
chromium: multiple vulnerabilities
Package(s): chromium
CVE #(s): CVE-2016-5203 CVE-2016-5204 CVE-2016-5205 CVE-2016-5206 CVE-2016-5207 CVE-2016-5208 CVE-2016-5209 CVE-2016-5210 CVE-2016-5211 CVE-2016-5212 CVE-2016-5213 CVE-2016-5214 CVE-2016-5215 CVE-2016-5216 CVE-2016-5217 CVE-2016-5218 CVE-2016-5219 CVE-2016-5220 CVE-2016-5221 CVE-2016-5222 CVE-2016-5223 CVE-2016-5224 CVE-2016-5225 CVE-2016-5226 CVE-2016-9650 CVE-2016-9651 CVE-2016-9652
Created: December 5, 2016
Updated: January 19, 2017

Description: From the Arch Linux advisory:

- CVE-2016-5203 (arbitrary code execution): An use after free flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5204 (cross-site scripting): An universal XSS flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5205 (cross-site scripting): An universal XSS flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5206 (same-origin policy bypass): A same-origin bypass flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5207 (cross-site scripting): An universal XSS flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5208 (cross-site scripting): An universal XSS flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5209 (arbitrary code execution): An out of bounds write flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5210 (arbitrary code execution): An out of bounds write flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5211 (arbitrary code execution): An use after free flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5212 (arbitrary filesystem access): A local file disclosure flaw was found in the DevTools component of the Chromium browser.
- CVE-2016-5213 (arbitrary code execution): An use after free flaw was found in the V8 component of the Chromium browser.
- CVE-2016-5214 (insufficient validation): A file download protection bypass was discovered in the Chromium browser.
- CVE-2016-5215 (arbitrary code execution): An use after free flaw was found in the Webaudio component of the Chromium browser.
- CVE-2016-5216 (arbitrary code execution): An use after free flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5217 (insufficient validation): An use of unvalidated data flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5218 (content spoofing): An address spoofing flaw was found in the Omnibox component of the Chromium browser.
- CVE-2016-5219 (arbitrary code execution): An use after free flaw was found in the V8 component of the Chromium browser.
- CVE-2016-5220 (arbitrary filesystem access): A local file access flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5221 (arbitrary code execution): An integer overflow flaw was found in the ANGLE component of the Chromium browser.
- CVE-2016-5222 (content spoofing): An address spoofing flaw was found in the Omnibox component of the Chromium browser.
- CVE-2016-5223 (arbitrary code execution): An integer overflow flaw was found in the PDFium component of the Chromium browser.
- CVE-2016-5224 (same-origin policy bypass): A same-origin bypass flaw was found in the SVG component of the Chromium browser.
- CVE-2016-5225 (access restriction bypass): A CSP bypass flaw was found in the Blink component of the Chromium browser.
- CVE-2016-5226 (cross-site scripting): A limited XSS flaw was found in the Blink component of the Chromium browser.
- CVE-2016-9650 (information disclosure): A CSP referrer disclosure vulnerability has been discovered in the Chromium browser.
- CVE-2016-9651 (access restriction bypass): A private property access flaw was found in the V8 component of the Chromium browser.
- CVE-2016-9652 (arbitrary code execution): Various fixes from internal audits, fuzzing and other initiatives.
firefox: same-origin bypass
Package(s): firefox
CVE #(s): CVE-2016-9078
Created: December 1, 2016
Updated: December 7, 2016

Description: From the Ubuntu advisory:

It was discovered that data: URLs can inherit the wrong origin after a HTTP redirect in some circumstances. An attacker could potentially exploit this to bypass same-origin restrictions. (CVE-2016-9078)
firefox: code execution
Package(s): firefox-esr firefox thunderbird
CVE #(s): CVE-2016-9079
Created: December 1, 2016
Updated: December 15, 2016

Description: From the Debian advisory:

A use-after-free vulnerability in the SVG Animation was discovered in the Mozilla Firefox web browser, allowing a remote attacker to cause a denial of service (application crash) or execute arbitrary code, if a user is tricked into opening a specially crafted website.
| Alerts: |
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
GraphicsMagick: non-null null pointer
Package(s): GraphicsMagick imagemagick    CVE #(s): CVE-2016-9559
Created: December 6, 2016    Updated: December 7, 2016
Description: From the openSUSE bug report:
A fuzzing run on an updated version with the undefined-behavior sanitizer enabled revealed a null pointer which is declared to never be null.
gstreamer1-plugins-good: buffer overflow
Package(s): gstreamer1-plugins-good    CVE #(s): CVE-2016-9808
Created: December 6, 2016    Updated: December 7, 2016
Description: From the Red Hat bugzilla:
A heap-based buffer overflow vulnerability was found in the FLIC decoder, in the flx_decode_delta_fli() function.
imagemagick: code execution
Package(s): imagemagick    CVE #(s): CVE-2016-9556
Created: December 1, 2016    Updated: December 7, 2016
Description: From the Debian security tracker entry:
Heap buffer overflow in IsPixelGray.
kernel: privilege escalation
Package(s): kernel    CVE #(s): CVE-2016-9644
Created: December 1, 2016    Updated: December 7, 2016
Description: From the Ubuntu advisory:
It was discovered that the __get_user_asm_ex implementation in the Linux kernel for x86/x86_64 contained extended asm statements that were incompatible with the exception table. A local attacker could use this to gain administrative privileges. (CVE-2016-9644)
kernel: denial of service
Package(s): kernel    CVE #(s): CVE-2016-8650
Created: December 6, 2016    Updated: December 7, 2016
Description: From the Red Hat bugzilla:
A flaw was found in the Linux kernel key management subsystem in which a local attacker could crash the kernel or corrupt the stack and additional memory (denial of service) by supplying a specially crafted RSA key. This flaw panics the machine during the verification of the RSA key and is key-payload independent. This vulnerability can be triggered by any unprivileged user with a local shell account.
kernel: privilege escalation
Package(s): kernel    CVE #(s): CVE-2016-8655
Created: December 6, 2016    Updated: December 13, 2016
Description: From the Ubuntu advisory:
Philip Pettersson discovered a race condition in the af_packet implementation in the Linux kernel. A local unprivileged attacker could use this to cause a denial of service (system crash) or run arbitrary code with administrative privileges.
kernel: denial of service
Package(s): kernel    CVE #(s): CVE-2013-5634
Created: December 6, 2016    Updated: December 7, 2016
Description: From the CVE entry:
arch/arm/kvm/arm.c in the Linux kernel before 3.10 on the ARM platform, when KVM is used, allows host OS users to cause a denial of service (NULL pointer dereference, OOPS, and host OS crash) or possibly have unspecified other impact by omitting vCPU initialization before a KVM_GET_REG_LIST ioctl call.
kernel: two vulnerabilities
Package(s): kernel    CVE #(s): CVE-2016-8632 CVE-2016-9555
Created: December 7, 2016    Updated: February 3, 2017
Description: From the CVE entries:
The tipc_msg_build function in net/tipc/msg.c in the Linux kernel through 4.8.11 does not validate the relationship between the minimum fragment length and the maximum packet size, which allows local users to gain privileges or cause a denial of service (heap-based buffer overflow) by leveraging the CAP_NET_ADMIN capability. (CVE-2016-8632)

The sctp_sf_ootb function in net/sctp/sm_statefuns.c in the Linux kernel before 4.8.8 lacks chunk-length checking for the first chunk, which allows remote attackers to cause a denial of service (out-of-bounds slab access) or possibly have unspecified other impact via crafted SCTP data. (CVE-2016-9555)
libtcnative: SSL improvements
Package(s): libtcnative-1-0    CVE #(s):
Created: December 2, 2016    Updated: December 7, 2016
Description: From the openSUSE advisory:
* Unconditionally disable export Ciphers.
libdwarf: multiple vulnerabilities
Package(s): libdwarf    CVE #(s): CVE-2016-8679 CVE-2016-8680 CVE-2016-8681 CVE-2016-9275 CVE-2016-9276 CVE-2016-9480 CVE-2016-9558
Created: December 5, 2016    Updated: December 7, 2016
Description: From the Arch Linux advisory:
- CVE-2016-8679 (information disclosure): An out-of-bounds heap read vulnerability was found in _dwarf_get_size_of_val, triggered by invoking the dwarfdump command on a crafted file.
- CVE-2016-8680 (information disclosure): An out-of-bounds heap read vulnerability was found in _dwarf_get_abbrev_for_code, triggered by invoking the dwarfdump command on a crafted file.
- CVE-2016-8681 (information disclosure): An out-of-bounds heap read vulnerability was found in _dwarf_get_abbrev_for_code, triggered by invoking the dwarfdump command on a crafted file.
- CVE-2016-9275 (information disclosure): An out-of-bounds heap read was found in _dwarf_skim_forms in dwarf_macro5.c, triggered by crafted input to the dwarfdump utility.
- CVE-2016-9276 (information disclosure): An out-of-bounds heap read was found in dwarf_get_aranges_list in dwarf_arrange.c, triggered by crafted input to the dwarfdump utility.
- CVE-2016-9480 (information disclosure): libdwarf allows context-dependent attackers to obtain sensitive information or cause a denial of service by using the "malformed dwarf file" approach, related to a "Heap Buffer Over-read" issue affecting the dwarf_util.c component.
- CVE-2016-9558 (denial of service): A negation overflow vulnerability was found in dwarf_leb.c, triggered by crafted input to the dwarfdump utility.
mapserver: information leak
Package(s): mapserver    CVE #(s): CVE-2016-9839
Created: December 7, 2016    Updated: December 21, 2016
Description: From the Debian LTS advisory:
It was discovered that there was an information leakage vulnerability in mapserver, a CGI-based framework for Internet map services.
mozilla: file overwrites
Package(s): thunderbird firefox    CVE #(s): CVE-2016-5294
Created: December 6, 2016    Updated: December 7, 2016
Description: From the Mageia advisory:
The Mozilla Updater can be made to choose an arbitrary target working directory for output files resulting from the update process. This vulnerability requires local system access.
openafs: information leak
Package(s): openafs    CVE #(s): CVE-2016-9772
Created: December 5, 2016    Updated: February 3, 2017
Description: From the Debian LTS advisory:
It was discovered that there was an information leak vulnerability in openafs, a distributed filesystem. Due to incomplete initialization or clearing of reused memory, OpenAFS directory objects are likely to contain 'dead' directory entry information.
patch: denial of service
Package(s): patch    CVE #(s):
Created: December 5, 2016    Updated: December 7, 2016
Description: From the Gentoo advisory:
Due to a flaw in Patch, the application can enter an infinite loop when processing a specially crafted diff file. A local attacker could pass a specially crafted diff file to Patch, possibly resulting in a Denial of Service condition.
pecl-http: code execution
Package(s): pecl-http    CVE #(s): CVE-2016-5873
Created: December 7, 2016    Updated: December 7, 2016
Description: From the Gentoo advisory:
A buffer overflow can be triggered in the URL parsing functions of the PECL HTTP extension. This allows overflowing a buffer with data originating from an arbitrary HTTP request. A remote attacker, through a specially crafted URI, could possibly execute arbitrary code with the privileges of the process.
phpMyAdmin: multiple vulnerabilities
Package(s): phpMyAdmin    CVE #(s): CVE-2016-4412 CVE-2016-9847 CVE-2016-9848 CVE-2016-9849 CVE-2016-9850 CVE-2016-9851 CVE-2016-9852 CVE-2016-9853 CVE-2016-9854 CVE-2016-9855 CVE-2016-9856 CVE-2016-9857 CVE-2016-9858 CVE-2016-9859 CVE-2016-9860 CVE-2016-9861 CVE-2016-9864 CVE-2016-9865 CVE-2016-9866
Created: December 5, 2016    Updated: January 11, 2017
Description: From the phpMyAdmin release announcement:
The phpMyAdmin project is pleased to announce the release of phpMyAdmin versions 4.6.5 (including bug and security fixes), 4.4.15.9 (security fixes), and 4.0.10.18 (security fixes). We recommend all users update their phpMyAdmin installations.
The Red Hat bugzilla notes that these security fixes are advisories PMASA-2016-57 to PMASA-2016-71. The Mageia advisory adds much more information, including CVE numbers. All of the following apply to phpMyAdmin before 4.4.15.9:
- When the user does not specify a blowfish_secret key for encrypting cookies, phpMyAdmin generates one at runtime. This value is created using a weak algorithm, which could allow an attacker to determine the user's blowfish_secret and potentially decrypt their cookies (CVE-2016-9847).
- phpinfo.php shows PHP information, including the values of sensitive HttpOnly cookies (CVE-2016-9848).
- It is possible to bypass the AllowRoot restriction ($cfg['Servers'][$i]['AllowRoot']) and deny rules for usernames by using a null byte in the username (CVE-2016-9849).
- A vulnerability in username matching for the allow/deny rules may result in wrong matches and detection of the username in the rule due to non-constant execution time (CVE-2016-9850).
- With a crafted request parameter value it is possible to bypass the logout timeout (CVE-2016-9851).
- By calling some scripts that are part of phpMyAdmin in an unexpected way, it is possible to trigger a PHP error message containing the full path of the directory where phpMyAdmin is installed. During an execution timeout in the export functionality, errors containing the full path of the phpMyAdmin directory are written to the export file (CVE-2016-9852, CVE-2016-9853, CVE-2016-9854, CVE-2016-9855).
- Several XSS vulnerabilities have been reported, including an improper fix for PMASA-2016-10 and a weakness in a regular expression used in some JavaScript processing (CVE-2016-9856, CVE-2016-9857).
- With a crafted request parameter value it is possible to initiate a denial-of-service attack in the saved-searches feature (CVE-2016-9858) and in the import feature (CVE-2016-9859).
- An unauthenticated user can execute a denial-of-service attack when phpMyAdmin is running with $cfg['AllowArbitraryServer']=true; (CVE-2016-9860).
- Due to a limitation in URL matching, it was possible to bypass the URL white-list protection (CVE-2016-9861).
- With a crafted username or table name, it was possible to inject SQL statements in the tracking functionality that would run with the privileges of the control user. This gives read and write access to the tables of the configuration storage database and, if the control user has the necessary privileges, read access to some tables of the mysql database (CVE-2016-9864).
- Due to a bug in serialized-string parsing, it was possible to bypass the protection offered by the PMA_safeUnserialize() function (CVE-2016-9865).
- When the arg_separator is different from its default value of &, the token was not properly stripped from the return URL of the preference-import action (CVE-2016-9866).
virtualbox: code execution
Package(s): virtualbox    CVE #(s): CVE-2016-6309
Created: December 6, 2016    Updated: December 7, 2016
Description: From the CVE entry:
statem/statem.c in OpenSSL 1.1.0a does not consider memory-block movement after a realloc call, which allows remote attackers to cause a denial of service (use-after-free) or possibly execute arbitrary code via a crafted TLS session.
xen: multiple vulnerabilities
Package(s): xen    CVE #(s): CVE-2016-9385 CVE-2016-9377 CVE-2016-9378
Created: December 5, 2016    Updated: December 7, 2016
Description: From the Red Hat bugzilla:
CVE-2016-9385: Both writes to the FS and GS register base MSRs, as well as the WRFSBASE and WRGSBASE instructions, require their input values to be canonical, or a #GP fault will be raised. When the use of those instructions by the hypervisor was enabled, the previous guard against #GP faults (having recovery code attached) was accidentally removed. A malicious guest administrator can crash the host, leading to a DoS.

CVE-2016-9377, CVE-2016-9378: There are two closely-related bugs. When Xen emulates instructions which generate software interrupts it needs to perform a privilege check involving an IDT lookup. This check is sometimes erroneously conducted as if the IDT had the format for a 32-bit guest, when in fact it is in the 64-bit format. Xen will then read the wrong part of the IDT and interpret it in an unintended manner. When Xen emulates instructions which generate software interrupts, and chooses to deliver the software interrupt, it may try to use the method intended for injecting exceptions. This is incorrect, and results in a guest crash. These instructions are not usually handled by the emulator. Exploiting the bug requires the ability to force use of the emulator. An unprivileged guest user program may be able to crash the guest.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.9-rc8, released on December 4. Quoth Linus: "So if anybody has been following the git tree, it should come as no surprise that I ended up doing an rc8 after all: things haven't been bad, but it also hasn't been the complete quiet that would have made me go 'no point in doing another week'."
The December 4 regressions report shows eleven known problems in the 4.9 kernel.
Stable updates: 4.8.12 and 4.4.36 were released on December 2. The 4.8.13 and 4.4.37 updates are in the review process as of this writing; they can be expected on or after December 9.
Those looking for a fix for the easily exploitable race condition known as CVE-2016-8655 will have to wait a bit longer. The fix is in 4.9, but has not yet found its way into the stable update stream.
Quotes of the week
LSF/MM 2017: Call for Proposals
The 2017 Linux Storage, Filesystem, and Memory-Management Summit will be held March 20-21 in Cambridge, MA, colocated with the Linux Foundation's "Vault" event. The call for agenda items has gone out; interested attendees are encouraged to post their ideas for discussion. The deadline for submissions is January 15.
Kernel development news
Debating the value of XDP
All parts of the kernel are shaped by the need for performance and scalability, but some subsystems feel those pressures more than others. The networking subsystem, which can be asked to handle steady-state workloads involving millions of packets per second, has had to look at numerous technologies in its search for improved performance. One result of that work has been the "express data path" (or XDP) mechanism. Now, however, XDP is seeing some pushback from developers who see it as "pointless," a possible long-term maintenance problem, and a distraction from the work that networking developers need to be doing.

The core idea behind XDP is optimizing cases where relatively simple decisions can be made about incoming packets. It allows the loading of a BPF program into the kernel; that program gets an opportunity to inspect packets before they enter the networking stack itself. The initial use case for XDP was to enable the quick dropping of unwanted packets, but it has since expanded to cover simple routing decisions and packet modifications; see this in-progress documentation for some more information on how it works.
The core benefit of XDP is that the system can make quick decisions about packets without the need to involve the rest of the networking code. Performance could possibly be further improved in some settings by loading XDP programs directly into the network interface, perhaps after a translation step.
Thus far, most of the public discussion about XDP has been focused on the details of its implementation rather than on whether XDP is a good idea in the first place. That came to an end at the beginning of December, though, when Florian Westphal, in a posting written with help from Hannes Frederic Sowa, let it be known that he disagrees: "Lots of XDP related patches started to appear on netdev. I'd prefer if it would stop..." He would rather that developers turned away from a "well meaning but pointless" approach toward something that, in his view, is better suited to the problems faced by the networking subsystem.
That something, in short, is any of the mechanisms out there (such as the data plane development kit) that allow the networking stack to be bypassed by user-space code. These mechanisms can indeed yield improved performance in settings where a strictly defined set of functionality is needed and the benefits that come from a general-purpose network stack can be done without. Additionally, he said, some problems are best solved by utilizing the packet-filtering features implemented in the hardware.
XDP, Westphal said, is an inferior solution because it provides a poorer programming environment. Networking code done in user space can be written in any of a range of languages, has full debugging support available, and so on. BPF programs, instead, are harder to develop and much more limited in their potential functionality. Looking at a number of use cases for XDP, including routing, load balancing, and early packet filtering, he claims that there are better solutions for each.
Thomas Graf responded that he has a fundamental problem with user-space networking: as soon as a packet leaves the kernel, anything can happen to it and it is no longer possible to make security decisions about it. User-space code could be compromised and there is no way for the kernel to know. BPF code in the kernel, instead, should be more difficult to compromise since its freedom of action is much more restricted. He also said that load balancing often needs to be done within applications as well as across machines, and he would not want to see that done in user space.
Sowa, instead, questioned the early-drop use case, asking whether the focus was on additional types of protection or improved performance. Like Westphal, he suggested that this problem could be solved primarily with hardware-based packet dropping. The answer from Tom Herbert made it clear that he sees both flexibility and performance as being needed.
Networking maintainer David Miller also said that he sees XDP as being a good solution for packet dropping use cases. Hardware-based filters, he said, are not up to the job, and XDP looks to him to be the right approach.
Sowa's other concern was not so easily addressed, though. As XDP programs gain functionality, they will need access to increasingly sophisticated information from the rest of the networking stack. That information can be provided by way of functions callable from BPF, but those functions will likely become part of the kernel's user-space ABI. That, in turn, will limit the changes that the networking developers can make in the future. These concerns mirror the worries about tracepoints that have limited their use in parts of the kernel. Nobody in the discussion addressed the ABI problem; in the end, it will have to be handled like any other user-space interface, where new features are, hopefully, added with care and a lot of review.
In the end, the discussion probably changed few minds about the value of XDP — or the lack thereof. Stephen Hemminger probably summarized things best when he said that there is room for a number of different approaches. XDP is better for "high speed packet mangling", while user-space approaches are going to be better for the implementation of large chunks of networking infrastructure. The networking world is complex, and getting more so; a variety of approaches will be needed to solve all the problems that the kernel is facing in this area.
Development statistics for 4.9
The 4.9 development cycle, expected to come to a close on December 11, will be by far the busiest cycle in the history of the kernel project. A look at the repository activity for this release, though, shows that it is, in many ways, just another typical cycle for the kernel community with one large exceptional factor.

As of this writing, 16,150 non-merge changesets have been pulled into the mainline repository for 4.9. That far exceeds the 13,722 changesets pulled for 3.15, which was, until now, the busiest cycle ever. There is no doubt that the merging of the Greybus driver had a lot to do with that but, even if one subtracts the 2,388 changesets associated with Greybus, 4.9 would still (just barely) set a new record. Clearly there has been a lot going on in the last few months.
The changes in 4.9 were contributed by a total of 1,719 developers — also a new record. The most active of those developers were:
Most active 4.9 developers
By changesets:
  Johan Hovold               599   3.7%
  Viresh Kumar               421   2.6%
  Greg Kroah-Hartman         376   2.3%
  Alex Elder                 345   2.1%
  Chris Wilson               335   2.1%
  Mauro Carvalho Chehab      238   1.5%
  Kuninori Morimoto          204   1.3%
  Wolfram Sang               183   1.1%
  Wei Yongjun                175   1.1%
  David Howells              149   0.9%
  Arnd Bergmann              135   0.8%
  Bryan O'Donoghue           134   0.8%
  Ivan Safonov               134   0.8%
  Sergio Paracuellos         120   0.7%
  Bjorn Helgaas              117   0.7%
  Javier Martinez Canillas   108   0.7%
  Alex Deucher               107   0.7%
  Markus Elfring             106   0.7%
  Colin Ian King             105   0.7%
  Linus Walleij               95   0.6%
By changed lines:
  Jes Sorensen                  66930   7.0%
  Rex Zhu                       58288   6.1%
  Laurent Pinchart              33602   3.5%
  Greg Kroah-Hartman            32002   3.3%
  Mauro Carvalho Chehab         14747   1.5%
  Chris Wilson                  14661   1.5%
  Ken Wang                      14556   1.5%
  Viresh Kumar                  13389   1.4%
  Dom Cobley                    12816   1.3%
  Christoph Hellwig             12459   1.3%
  Darrick J. Wong               11646   1.2%
  Hans Verkuil                  11480   1.2%
  Johan Hovold                  11465   1.2%
  Ram Amrani                    11318   1.2%
  Alex Elder                    11060   1.2%
  David Howells                 11045   1.2%
  Netanel Belgazal              10858   1.1%
  Lijun Ou                      10849   1.1%
  Huang Rui                     10157   1.1%
  Maruthi Srinivas Bayyavarapu  10092   1.1%
The top four developers of the Greybus code, Johan Hovold, Viresh Kumar, Greg Kroah-Hartman, and Alex Elder, ended up as the top four changeset contributors overall. Between the four of them, they contributed 1,741 changesets in 4.9. The fifth developer in the by-changesets column, Chris Wilson, worked on the i915 graphics driver.
In the "lines changed" column, Jes Sorensen removed the rtl8723au network driver, which has been superseded by the rtl8xxxu driver. Rex Zhu worked on the AMD "PowerPlay" graphics driver, Laurent Pinchart worked on various media drivers (including the Greybus camera driver), Greg Kroah-Hartman added a lot of Greybus code, and Mauro Carvalho Chehab did a lot of work as the media subsystem maintainer and in support of the ongoing documentation transition to Sphinx.
228 companies (that we know about) supported work on the 4.9 kernel. The most active of those employers were:
Most active 4.9 employers
By changesets:
  Linaro                1876   11.6%
  Intel                 1869   11.6%
  (Unknown)             1293    8.0%
  (Consultant)          1055    6.5%
  (None)                 924    5.7%
  Red Hat                916    5.7%
  (name missing)         541    3.3%
  Samsung                535    3.3%
  AMD                    476    2.9%
  IBM                    401    2.5%
  Renesas Electronics    331    2.0%
  Huawei Technologies    274    1.7%
  Linux Foundation       262    1.6%
  Mellanox               237    1.5%
  SUSE                   233    1.4%
  ARM                    228    1.4%
  Oracle                 207    1.3%
  Texas Instruments      167    1.0%
  BayLibre               146    0.9%
  Broadcom               136    0.8%
By lines changed:
  AMD                  105820   11.1%
  Red Hat              104492   10.9%
  Intel                 89456    9.4%
  Linaro                75310    7.9%
  (Consultant)          61718    6.5%
  (Unknown)             45676    4.8%
  (None)                34464    3.6%
  (name missing)        31054    3.2%
  Samsung               25438    2.7%
  Oracle                14425    1.5%
  Huawei Technologies   14220    1.5%
  IBM                   13432    1.4%
  Raspberry PI          12816    1.3%
  QLogic                12439    1.3%
  Mellanox              12284    1.3%
  Cavium Networks       11741    1.2%
  NXP Semiconductors    11662    1.2%
  Cisco                 11507    1.2%
  Linux Foundation      11182    1.2%
  Amazon                10858    1.1%
Linaro made it to the top of the "by changesets" column this time around by virtue of work on — wait for it — Greybus. Otherwise there is not a lot that is surprising in this table. As usual, quite a few companies work to support Linux, but the list of the most active of those tends not to change much from one cycle to the next.
Finally, it is worth noting that 301 developers made their first contribution to the kernel during the 4.9 development cycle. That, too, is arguably a record ("arguably" because 2.6.25 had 332 new developers but, since the tracking only begins with the Git era at 2.6.12, some of those "new" developers had certainly made contributions before). Let's look at first-time contributors since 3.0:

[Plot: first-time contributors per development cycle, 3.0 through 4.9]

The plot shows a slight but observable upward slope, especially since the recent-period low of 183 during the 3.5 development cycle; the number of new contributors coming into the kernel community is growing over time. For all its faults, the kernel community is still able to attract new developers and, thus, continues to grow. That suggests that the pace of kernel development is unlikely to slow much in the near future.
Patches and updates
Kernel trees
Architecture-specific
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Maintainerless Debian?
The maintainer model is deeply ingrained into the culture of the free-software community; for any bit of code, there is usually a developer (or a small group of developers) charged with that code's maintenance. Good maintainers can help a project run smoothly, while poor maintainers can run things into the ground. What is to be done to save a project with the latter type of maintainer? Forking can be an option in some cases but, in many others, it's not a practical alternative. The Debian project is currently discussing its approach to bad maintainers — a discussion which has taken a surprising turn.

A maintainer of a package within the Debian project cannot be circumvented by forking — unless one wishes to fork the entire distribution, an enterprise which tends not to end well. So, when maintainer problems arise, they must be solved in some other way. A recent problem of this type, involving the GNU GLOBAL package, came to a head in October, and is still, as of this writing, unresolved. The slow progress of this case, which played out for years before being referred to Debian's technical committee (TC) in October, has led longtime Debian contributor Ian Jackson to start pushing for changes.
Jackson noted that the Debian constitution only provides one practical method for the removal of a nonperforming maintainer: the technical committee. But, he said, the committee has never, in the entire history of the Debian project, actually forced a maintainer out of their position. That, he said, indicates a problem:
Jackson expressed mystification as to why the TC — an institution he designed and served on for years — has proved unable to deal with maintainership problems. He went on to propose a couple of methods for improving the situation.
Making it easier to depose maintainers
His first proposal was to create a new mechanism, separate from the technical committee, to handle maintainership disputes. After an initial mediation step failed, a "jury" of ten randomly chosen Debian developers would hear the arguments and, with a majority vote, decide whether the package should be given to a new maintainer or not. The jury idea comes from a desire to get away from an appointed panel like the TC; as prominent members of the project, committee members, he said, are more likely to side with maintainers than with those who are complaining about them.
He quickly moved away from this idea, though, toward a scheme that, he hoped, would make it easier for the TC to make maintainership decisions. He posted a draft for a general resolution that would, if adopted, offer "advice" to the TC. That advice says that, in disputes over a package, the maintainer's position should be given no more weight than that of any other project contributor. Each opinion should be considered on its own merits, regardless of the "authority" of the person expressing any given opinion.
This draft resolution has not been all that well received. It was simultaneously seen as a no-op resolution that made no changes to how the TC was supposed to operate and a statement of a lack of confidence in the current committee's judgment. Two TC members (Philip Hands and Didier Raboud) said that they would be likely to resign from the committee if the resolution were to pass. Those TC members also seemed to feel that, by pursuing this initiative at this time, Jackson was trying to force the committee's hand in the GLOBAL dispute. As of this writing the discussion is ongoing, but it seems unlikely that Jackson's proposal will make it to a vote in anything resembling its current form.
No more maintainers?
Back at the beginning, Jackson proposed another option, seemingly as an exercise in extremes: abolish the concept of package maintainership entirely. He quickly dismissed that option, saying that it would lead to developers being "expected to play core wars in the archive" when they disagree over changes to a package. But some other developers took the idea more seriously.
Former project leader Stefano Zacchiroli remarked that abolishing maintainership was "the obviously right solution". The maintainer model, he said, gets in the way of effective collaborative development and should be done away with. He proposed moving to a "weak code ownership" model (without describing how that would work) along with tools that would make it easier to integrate changes made by multiple developers.
Russ Allbery agreed that the maintainership model is a part of the problem, partly because involuntary changes in maintainership will be, by their nature, confrontational events. Rather than codify how those changes should be done, he said, it would be better to just weaken the notion of maintainership in general. Again, though, he didn't get into the details of how a weaker maintainer model might work.
Jackson disagreed with this conclusion; he does not believe that things can work well in the absence of a maintainer who can make decisions. Others felt the same way; Adam Borowski suggested that, as a thought experiment, one could consider what would happen if the init-system choice were open to modification by any interested party. Rather than abolish the maintainer role, Jackson said, it would be better to just make maintainership changes easier and more routine. Some developers clearly see some appeal in the no-maintainer idea, though. Lars Wirzenius suggested the creation of a list that maintainers could join if they were amenable to having their packages taken over if they weren't keeping up with them. That list was subsequently created by Laura Arjona Reina and now has a handful of names on it.
Jackson, meanwhile, has moved on to a new idea: require group maintainership for all packages. The worst problems, he said, have always involved single-maintainer packages. If a formal team does not exist for a Debian package, then, perhaps, the entire project should be seen as the team and empowered to make changes. Like some other projects (the kernel, for example), Debian has encouraged more group maintainership of packages, but has not required it. Perhaps strengthening that encouragement is all the change that is really needed.
The Debian maintainer model has endured since the beginning of the project with few changes. It is too early to tell but, just maybe, this discussion will lead to larger changes in how Debian packages are maintained, with a subsequent reduction in maintainer-related problems. One should remember that this is Debian, though, so there is certain to be a fair amount of further discussion to get through first.
Brief items
Distribution quotes of the week
I'll fix the man page and also add regression tests so that this does not get accidentally changed in the future due to wrong documentation.
Qubes 4.0 development status update
Marek Marczykowski-Górecki covers the status of Qubes 4.0 development. Topics include the rewrite of Qubes core management scripts, moving away from paravirtualization, Qubes Manager, a virtual machine administration API, and more.
Distribution News
Fedora
FESCo and Council elections - December 2016/January 2017
Nominations are open for Fedora Council and FESCo (Fedora Engineering Steering Committee). There are five seats open for FESCo and one seat open for the Council. Nominations close December 12.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 690 (December 5)
- Lunar Linux weekly news (December 2)
- openSUSE news (December 1)
- openSUSE news (December 5)
- openSUSE Tumbleweed – Review of the Week (December 2)
- Ubuntu Weekly Newsletter (December 4)
What's new in OpenStack in 2016: A look at the Newton release (Opensource.com)
Over at Opensource.com, Rich Bowen gives an overview of the changes in the OpenStack Newton release, which was made in October. He looks at each of the sub-projects and highlights some of the changes in the release; the article is also useful as a kind of high-level guide to some of the various sub-projects and their roles. "With a product as large as OpenStack, summarizing what's new in a particular release is challenging. (See the full release notes for more details.) Each deployment of OpenStack might use a different combination of services and projects, and so will care about different updates. Added to that, the release notes for the various projects tend to be extremely technical in nature, and often don't do a great job of calling out the changes that will actually be noticed by either operators or users."
Parrot Security Could Be Your Next Security Tool (Linux.com)
Linux.com takes a look at Parrot Security, a distribution that includes software for cryptography, cloud, anonymity, digital forensics, and programming. "Beyond the testing, auditing, and programming tools, what you’ll find in the Parrot distribution is a rock solid system. Parrot is based on Debian 9 and includes a custom hardened Linux 4.6 kernel. This is a rolling release upgrade distribution that uses the MATE desktop and the Lightdm display manager...all presented with custom icons and wallpapers. It’s pretty and it’s powerful."
Page editor: Rebecca Sobol
Development
GStreamer and the state of Linux desktop security
Recently Chris Evans, an IT security expert currently working for Tesla, published a series of blog posts about security vulnerabilities in the GStreamer multimedia framework. A combination of the Chrome browser and GNOME-based desktops creates a particularly scary vulnerability. Evans also made a provocative statement: that vulnerabilities of this severity currently wouldn't happen in Windows 10. Is the state of security on the Linux desktop really that bad — and what can be done about it?
From Chrome via Tracker to GStreamer
The scenario Evans described allows exploiting security vulnerabilities in a wide variety of applications through drive-by downloads in the Chrome browser, without any user interaction. By default, Chrome automatically saves downloaded files in the "Downloads" subdirectory of the user's home directory. This particular behavior is unique to Chrome; other browsers show a dialog on downloads and let the user choose a location to save a file. As a result, a web page can automatically cause downloads with no user interaction at all, since even the click on a download link can be triggered via JavaScript. Chromium, the free-software variant of Chrome and usually the one packaged by Linux distributions, behaves in the same way.
Chrome's behavior isn't unique to Linux, but in combination with another feature it becomes dangerous: on GNOME desktops a service called Tracker automatically indexes all new files. (KDE has an analogous tool called Baloo that has similar problems.) Tracker uses a variety of tools and libraries to extract metadata from files. For media files Tracker uses the GStreamer framework. Images are parsed with ImageMagick and PDF files with Poppler. Thus we end up in a situation where downloads automatically triggered by a potentially malicious web page are directly passed to various parsers of complex file formats. Most of these parsers are written in C, which makes them susceptible to the typical memory-corruption vulnerabilities. Tracker is enabled by default, though it can be controlled by running tracker-preferences.
GStreamer is a multimedia framework that includes support for a wide variety of file formats, including some obscure and rare ones. Evans pointed out bugs in the parsers for Nintendo Sound Files (from NES games), for a VMware screen-capture format, and for FLIC animations, the file format of the DOS software Autodesk Animator, which had its last release 21 years ago. The codecs for the latter two are part of the default GNOME installations on both Fedora and Debian.
GStreamer plays almost every media file ever created
GStreamer has a noble goal: it tries to allow users to play almost every audio or video file ever created. From a usability perspective, this is certainly valuable and often considered a strength of free and open-source software. For almost every file format out there, there's some software that is able to access it. Having software available to play arcane media files also plays an important role in keeping the history of the digital age accessible. But these projects come with a cost when it comes to security. Parsing complex binary file types in a secure way is an incredibly difficult task. It's not surprising that many parsers have memory-corruption vulnerabilities.
Thus the decision of GNOME's Tracker software to use these parsers is a questionable design choice. GStreamer is not the only problematic software used by Tracker. ImageMagick has a purpose similar to that of GStreamer. It supports reading 177 different image formats and it has seen a constant flow of vulnerability reports over the years. Many other libraries that Tracker uses to identify ISO images, extract MP3 tags, or parse playlists look at least potentially problematic. Again, from a usability perspective, the choices made by Tracker make sense. For a desktop search, being able to parse the metadata of a wide variety of different file types is a desirable feature. But security-wise it looks like a recipe for disaster.
It is no secret that many libraries were never written with security in mind. In the past few years, the availability of increasingly powerful fuzz-testing tools like american fuzzy lop (afl) has shed light on this. There is a small number of libraries where security vulnerabilities are relatively rare and hard to find; usually these are the libraries used by the large browser vendors. The generous bug bounties paid by browser makers give security researchers an incentive to investigate vulnerabilities. The browser vendors themselves also have an interest in securing the code they are exposing and often use their own infrastructure to test it or fund audits for key libraries.
The situation for other, less exposed libraries is much more dire. When looking at a random library that parses complex file formats, it is likely that one can find crashes caused by memory-corruption vulnerabilities within seconds. To help combat this problem, I started the Fuzzing Project two years ago. The Fuzzing Project is a concentrated effort to wipe out the many bugs that can easily be found with fuzz testing in popular free and open-source software.
In the case of GStreamer, it is likely that, before Evans's reports, nobody ever tried to use modern coverage-based fuzzing tools on the code base. After Evans's post, though, I ran afl on GStreamer and found several bugs that the GStreamer team quickly fixed. But even if the GStreamer code base could be secured, which is itself a huge challenge, the problem goes deeper: GStreamer doesn't just contain its own parsers, it makes use of a wide variety of libraries for different file formats. Properly testing and fixing all of them is an enormous task.
Prevention techniques
The prevalence of memory-corruption vulnerabilities in software written in C and C++ has recently fueled a debate on whether a switch to safer programming languages could avoid many of these problems. The programming language Rust, which is developed by Mozilla, is often cited as the most likely alternative to C. GStreamer has recently started to investigate the ability to write plugins in Rust. While these are worthy efforts, rewriting software in another programming language takes time. Realistically, most of the software underpinning the Linux desktop will stay as C code for a very long time.
In the IT security world, most people agree that it's not realistic to avoid all security bugs. Therefore a lot of effort has been put into mitigation techniques that make it hard or impossible to exploit security vulnerabilities. Various sandboxing efforts try to separate processes from each other and limit the impact of security vulnerabilities.
According to Evans, a tool like Tracker is an almost ideal target for sandboxing.
On Linux, the generic sandboxing framework provided by the kernel is seccomp. There's an open bug in the GNOME bug tracker about sandboxing the Tracker data collection process. A preliminary patch using libseccomp to sandbox that process has been posted in the bug.
Operating systems and compilers also try to make exploitation harder with mitigation techniques that close off typical exploit paths for memory-corruption vulnerabilities. Stack canaries and non-executable memory pages are widely used on all modern operating systems. However, the Linux community has been slow in adopting one of the strongest mitigation techniques available: address-space layout randomization (ASLR). Exploitation techniques like return-oriented programming (ROP) rely on the fact that an attacker can jump to existing code in memory; in order to do that, though, the attacker needs to know the address of that code. Thus, modern operating systems can place code at randomized addresses in memory.
ASLR was originally invented by the PaX project as a patch to the Linux kernel in 2001. The mainstream kernel got ASLR support in version 2.6.12, released in 2005. However, to properly randomize all of the code, the executable files need to be compiled as position-independent executables using the -pie compiler flag. For a long time, most Linux distributions didn't use position-independent executables by default. Fedora started switching on -pie by default in version 23, released in 2015. Both Debian and Ubuntu still enable -pie only for selected packages. [Update: As pointed out in the comments, Ubuntu 16.10 does enable -pie by default for several architectures, including amd64.] Recently, however, there have been some efforts to push enabling the flag by default in Debian. Recent GCC versions will make the transition to position-independent executables easier: since version 6, GCC's configure script has an option --enable-default-pie that allows changing the compiler's default. While this is welcome news, ASLR is no panacea: Evans was able to write multiple exploits ([1], [2]) for GStreamer bugs that work even on Fedora systems with ASLR enabled.
More advanced exploit mitigations are currently being discussed under the umbrella term of "control-flow integrity". Modern versions of the clang compiler support such features, but they haven't been widely adopted.
Has Linux desktop security rotted?
Evans had some unpleasant thoughts for the Linux community.
The free-software community is often proud that switching to a Linux desktop is an almost surefire way to avoid many of the security issues that plague users of the Windows operating system; malware for Linux desktops practically doesn't exist. However, a lot of that added security may simply be attributable to the fact that fewer people use Linux, making it a less attractive target for malware authors. If the Linux desktop wants to remain a secure choice for users, it needs to strengthen its security. Bug-finding techniques, safer programming languages, sandboxes, and additional exploit-mitigation techniques can all play a role in this.
Brief items
Development quotes of the week
You can downvote them, disagree with them, glorify or vilify them. About the only thing you cannot do is ignore them. Because they ship your bug fixes.
While some may see them as the crazy ones, we see genius. Because people who are crazy enough to think that they can run Linux on the desktop, are the ones who change the world.
Ardour 5.5 released
Version 5.5 of the Ardour audio editor has been released. "Among the notable new features are support for VST 2.4 plugins on OS X, the ability to have MIDI input follow MIDI track selection, support for Steinberg CC121, Avid Artist & Artist Mix Control surfaces, 'fanning out' of instrument outputs to new tracks/busses and the often requested ability to do horizontal zoom via vertical dragging on the rulers."
WordPress 4.7
WordPress 4.7 “Vaughan” has been released. This version includes a new default theme, adds new features to the customizer, comes with REST API endpoints for posts, comments, terms, users, meta, and settings, and more. "To help give you a solid base to build from, individual themes can provide starter content that appears when you go to customize your brand new site. This can range from placing a business information widget in the best location to providing a sample menu with social icon links to a static front page complete with beautiful images. Don’t worry – nothing new will appear on the live site until you’re ready to save and publish your initial theme setup."
What’s New with Xen Project Hypervisor 4.8?
The Xen Project Blog covers the release of the Xen Project Hypervisor 4.8. "As always, we focused on improving code quality, security hardening as well as enabling new features. One area of interest and particular focus is new feature support for ARM servers. Over the last few months, we’ve seen a surge of patches from various ARM vendors that have collaborated on a wide range of updates from new drivers to architecture to security."
Newsletters and articles
Development newsletters
- What's cooking in git.git (December 6)
- This week in GTK+ (December 5)
- Koha Community Newsletter (November)
- OCaml Weekly News (December 6)
- Perl Weekly (December 5)
- PostgreSQL Weekly News (December 4)
- Python Weekly (December 1)
- Ruby Weekly (December 1)
- This Week in Rust (December 6)
- Wikimedia Tech News (December 5)
Page editor: Rebecca Sobol
Announcements
Brief items
Trouble at Cyanogen
Cyanogen Inc. has put out a terse press release announcing the departure of founder (and CyanogenMod creator) Steve Kondik. See this rather less terse Android Police article for Kondik's view of the matter. The future of the CyanogenMod distribution seems unclear at this point; if it goes forward, it may have to do so with a different name.
Articles of interest
Free Software Supporter Issue 104, December 2016
This month, the Free Software Foundation's newsletter covers fundraising, RMS receiving the GNU Health Social Medicine Award, a Licensing and Compliance Lab interview with Micah Lee, the 2016 Ethical Tech Giving Guide, and much more.
New Books
Scratch Coding Cards--learn to program one card at a time
No Starch Press has released "Scratch Coding Cards" created by Natalie Rusk.
Calls for Presentations
CFP Deadlines: December 8, 2016 to February 6, 2017
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| December 10 | February 21–February 23 | Embedded Linux Conference | Portland, OR, USA |
| December 10 | February 21–February 23 | OpenIoT Summit | Portland, OR, USA |
| December 31 | March 2–March 3 | PGConf India 2017 | Bengaluru, India |
| December 31 | April 3–April 7 | DjangoCon Europe | Florence, Italy |
| January 1 | April 17–April 20 | Dockercon | Austin, TX, USA |
| January 3 | May 17–May 21 | PyCon US | Portland, OR, USA |
| January 6 | July 16–July 23 | CoderCruise | New Orleans et. al., USA/Caribbean |
| January 8 | March 11–March 12 | Chemnitzer Linux-Tage | Chemnitz, Germany |
| January 11 | February 15–February 16 | Prague PostgreSQL Developer Day 2017 | Prague, Czech Republic |
| January 13 | May 22–May 24 | Container Camp AU | Sydney, Australia |
| January 14 | March 22–March 23 | Vault | Cambridge, MA, USA |
| January 20 | March 17–March 19 | FOSS Asia | Singapore, Singapore |
| January 31 | May 16–May 18 | Open Source Data Center Conference 2017 | Berlin, Germany |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: December 8, 2016 to February 6, 2017
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| December 10–December 11 | SciPy India | Bombay, India |
| December 10 | Mini Debian Conference Japan 2016 | Tokyo, Japan |
| December 27–December 30 | Chaos Communication Congress | Hamburg, Germany |
| January 16–January 20 | linux.conf.au 2017 | Hobart, Australia |
| January 16 | Linux.Conf.Au 2017 Sysadmin Miniconf | Hobart, Tas, Australia |
| January 16–January 17 | LCA Kernel Miniconf | Hobart, Australia |
| January 18–January 19 | WikiToLearnConf India | Jaipur, Rajasthan, India |
| January 27–January 29 | DevConf.cz 2017 | Brno, Czech Republic |
| February 2–February 3 | Git Merge 2017 | Brussels, Belgium |
| February 4–February 5 | FOSDEM 2017 | Brussels, Belgium |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
