LWN.net Weekly Edition for October 27, 2016
Designing better kernel ABIs
Michael Kerrisk started his 2016 Kernel Recipes talk by noting that the man pages collection, which he maintains, now documents over 2,200 interfaces at the kernel level and above; it adds up to about 2,500 printed pages. In the course of writing and collecting all this documentation, he has seen a lot of ABIs. There are many lessons to be learned from this experience, he said, if we want to create better ABIs in the future.
There is a whole set of obvious goals that one might want to meet when designing an ABI. It should, naturally, be bug-free and as simple as possible, while being extensible and maintainable. Compliance with existing standards is important as well. Presumably, he said, we all agree on these goals — but we repeatedly fail on all of them.
A history of failure
Failure comes in many forms. One of those, of course, is simple bugs. "Show me a new interface," Kerrisk said, "and I'll show you a bug." There is insufficient pre-release testing of new ABIs, meaning that too many bugs slip through. That, in turn, leads to pain for user space, which may need to carry special-case code to handle the bugs found in different kernel versions.
Interface inconsistencies find their way in at a number of levels, including at the design level; he pointed out that the kernel has about a half-dozen different architecture-dependent clone() interfaces. There are also behavioral inconsistencies that can create ongoing pain for application developers. Consider, for example, the mlock() system call:
int mlock(const void *addr, size_t len);
When mlock() is called, the addr argument will be rounded down to the nearest page boundary, while the end of the range (addr+len) will be rounded up. Thus, calling mlock(4000, 6000) will affect memory addresses zero through 12287. Now consider remap_file_pages():
int remap_file_pages(void *addr, size_t len, int prot,
                     size_t pgoff, int flags);
This system call rounds len downward, so a call to:
remap_file_pages(4000, 6000, ...);
will affect the range from zero to 4095. That sort of surprising behavioral difference, he said, is an ongoing source of bugs. Another breeding ground is the set of system calls that change the attributes of another process; these include setpriority(), ioprio_set(), migrate_pages(), and more. Each of these must check the credentials of the caller and decide when an unprivileged caller will be able to carry out the requested action; each interface has a different set of rules.
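For the curious, the arithmetic behind the mlock() and remap_file_pages() examples above can be worked through in a few lines of user-space C; this is only an illustration, assuming the usual 4096-byte page size.

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Round an address down or up to a page boundary. */
    static unsigned long page_down(unsigned long x) { return x & ~(PAGE_SIZE - 1); }
    static unsigned long page_up(unsigned long x)   { return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1); }

    int main(void)
    {
        unsigned long addr = 4000, len = 6000;

        /* mlock(): start rounds down, end rounds up -> 0..12287 */
        printf("mlock-style range: %lu..%lu\n",
               page_down(addr), page_up(addr + len) - 1);
        /* remap_file_pages(): len rounds down -> a single page, 0..4095 */
        printf("remap-style range: %lu..%lu\n",
               page_down(addr), page_down(addr) + page_down(len) - 1);
        return 0;
    }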
What about maintainability? System calls should normally include a flags argument, but that has often not been the case. As a result, the number of system calls seems to grow without bound. umount() lacked a flags argument, so now we have umount2(). For similar reasons, the kernel offers preadv2(), epoll_create1(), renameat2(), and more. In some cases the original interface was a historical legacy; others, he said, "we did to ourselves." A related problem is a failure to check for unknown flags, as seen in sigaction(), recv(), clock_nanosleep(), msgrcv(), semget(), semop(), open(), and more. Among other things, failure to return an error on an unknown flag means that user space cannot check whether a given feature is supported or not.
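To make the unknown-flags point concrete, here is a hedged sketch of the pattern new system calls are encouraged to follow; the system call and its flag names are invented for illustration, but the idea (reject any bits the kernel does not know about) is the real recommendation.

    /* Hypothetical system call showing the recommended flags check; the
     * names here are invented, only the pattern matters. Rejecting
     * unknown bits with EINVAL lets user space probe for feature
     * support. */
    #include <linux/syscalls.h>
    #include <linux/errno.h>

    #define EXAMPLE_FLAG_A      0x01
    #define EXAMPLE_FLAG_B      0x02
    #define EXAMPLE_VALID_FLAGS (EXAMPLE_FLAG_A | EXAMPLE_FLAG_B)

    SYSCALL_DEFINE2(example, int, fd, unsigned int, flags)
    {
        if (flags & ~EXAMPLE_VALID_FLAGS)
            return -EINVAL;    /* unknown flag: fail, don't silently ignore */

        /* ... the actual work goes here ... */
        return 0;
    }

A program can then call the new system call with a candidate flag and treat an EINVAL return as "not supported by this kernel," something that is impossible when unknown flags are silently accepted.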
In short, he said, decentralized design as seen in the kernel community has its advantages, but it fails badly if the goal is a coherent design. As an example, consider capabilities. When a new privileged feature is added to the kernel, the question arises: should a new capability be added to control access to it? Nobody wants to see an explosion in the number of capabilities, so it is generally deemed preferable to use an existing one. In practice, the existing one is almost invariably CAP_SYS_ADMIN, which is tested for in some 40% of all cases. It is, he said, "the new root", and the goal of finer-grained privilege checking has not been achieved. The first version of the control-group ABI was also plagued by inconsistencies.
In the end, he said, "we are just traditionalists" following and upholding a long history of Unix ABI mess-ups. The problem, of course, is that interface design is hard, and errors normally cannot be fixed without breaking existing user-space programs. So thousands of programs have to live with the consequences of our ABI mistakes for decades. We really need to get better at getting things right the first time.
Avoiding failure
How do we do that? A lot of it comes down to review and testing. Unlike some other parts of the kernel, ABI design does not really lend itself to mechanical testing, though. New interfaces simply need a lot of human review. That said, there is a place for unit tests; the kernel has been slow to adopt them, but that is beginning to change. Unit tests can detect regressions and unexpected changes, and help to ensure that a new interface lives up to what it is supposed to do.
Consider, for example, the recvmmsg() system call. Toward the end of the discussion before this call was merged, it gained a new timeout parameter. The expectation was that this timeout would apply to the call as a whole. In truth, it was only tested after the receipt of a datagram; until something shows up, the timeout has no effect at all. In other words, nobody bothered to test it and, as a result, it was useless.
Once tests are written, where should they go? The Linux Test Project is the traditional home for such tests, but it is not ideal. It is an out-of-tree test suite, and new tests only show up there after the ABI they test has appeared in an official release. Test coverage is partial; in the end, it simply does not solve the problem. The kernel's self-testing facility is a better place; importantly, it has a paid maintainer. Those interested in working with kselftest can find more information on the kernel.org wiki.
There is only so much value to be had from testing, though, in the absence of a specification for how a new interface is expected to behave. In the case of recvmmsg(), nobody ever wrote that specification, so it was not possible to write a test for it. There are many benefits to written specifications; they serve as a target for the implementation and help those who write tests. A specification allows reviewers to critique the interface independently of the implementation, and increases the number of reviewers overall. This specification generally belongs in the changelog of the patch adding the new interface, though an even better approach is to send a man-page patch.
The best thing to do, though, is to write a real application that uses the new interface. A while back he decided to delve into the inotify interface in order to improve its documentation. It is, in many ways, a good interface, much better than its predecessor. But it could have been better yet. At one point he thought he understood it, so he tried to write a real application that used it; the result was this article series, among other things.
That application required 1,500 lines of C code to get its job done. The inotify interface leaves a lot of work for the application to do. For example, change notifications lack user and process-ID information, making it impossible for a monitoring application to know who made a change. Directory monitoring is not recursive; if an application wants to watch a directory tree, it must set a separate watch on every directory in that tree. That, he said, may be unavoidable in the end.
A problem that was avoidable, instead, has to do with the renaming of files. A rename will generally result in two events; a "rename from" event and a "rename to" event. Unfortunately, these two events are not guaranteed to be consecutive in the event stream. In fact, they are not even guaranteed to both exist: if a file is renamed into a directory that is not monitored, the "rename to" event will not be generated. So an application has no definitive way to know if it will ever receive a "rename to" event or not; the result is a series of racy workarounds in user space. Life would have been much simpler if the two events had simply been guaranteed to show up together.
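As a hedged sketch of what an application is up against (this is not code from the article's monitoring program), the loop below reads inotify events and tries to pair IN_MOVED_FROM with IN_MOVED_TO using the cookie field; because the second event may arrive much later, or never, real code also needs an arbitrary timeout.

    /* Sketch: pairing the two halves of a rename via the inotify cookie.
     * Illustrative only; a real monitor must decide how long to wait for
     * an IN_MOVED_TO that may never arrive. */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(int argc, char **argv)
    {
        char buf[4096]
            __attribute__((aligned(__alignof__(struct inotify_event))));
        int fd = inotify_init1(0);
        uint32_t pending = 0;   /* cookie of an unmatched IN_MOVED_FROM */

        if (fd < 0 || argc < 2 ||
            inotify_add_watch(fd, argv[1], IN_MOVED_FROM | IN_MOVED_TO) < 0) {
            perror("inotify setup");
            return 1;
        }

        for (;;) {
            ssize_t len = read(fd, buf, sizeof(buf));
            char *p;

            if (len <= 0)
                break;
            for (p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *)p;

                if (ev->mask & IN_MOVED_FROM)
                    pending = ev->cookie;       /* wait for the other half */
                else if ((ev->mask & IN_MOVED_TO) && ev->cookie == pending)
                    printf("rename completed: %s\n", ev->len ? ev->name : "");
                /* If the file was renamed into an unwatched directory, the
                 * IN_MOVED_TO never arrives; real code needs a timeout. */
                p += sizeof(*ev) + ev->len;
            }
        }
        close(fd);
        return 0;
    }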
The lesson, he said, is that the only way to find nontrivial ABI problems is to write real applications using the interface — before that interface is released to the world as a whole.
Another way to improve our interfaces is to write documentation, of course. Describing what you're doing makes you think more deeply about it, he said. It also makes the new interface easier for others to understand, lowering the barriers to participation. A well-written man page is one way to do this documentation, but not the only way.
Discovery and feedback
An ongoing problem area is discovery — there is no simple way to find out when a particular kernel ABI has changed. He doesn't have the time to follow everything on the linux-kernel list, and neither does anybody else. The linux-api list exists and should receive copies of patches that change interfaces, but that often fails to happen. So he relies on some scripts of his own to find changes, but they are imperfect. Often, interface changes are discovered by sheer luck when he stumbles across them. On rare occasion he'll actually get a man page for a new interface.
He is far from the only person interested in interface changes. Application developers, C library developers, the strace maintainers, the Linux Test Project, and more all want to know about them. But user-space developers are typically the last to learn about changes — except in the unfortunate cases where even the kernel developers don't know that they changed something. Some changes to POSIX message queues in 3.5 broke the interface, for example. 2.6.12 featured an unexpected change to fcntl(F_SETOWN) semantics. Nobody noticed until much later, at which point it was too late to fix things, since other programs depended on the new behavior. That is how we end up with options like F_SETOWN_EX, added in 2.6.32 in an attempt to fix the problems created by that change.
That last example highlights a problem in the kernel's feedback loops. There are generally at least six months between when a new interface is added to the kernel and when users actually see it. In the worst case, design bugs will only be discovered when users start to look closely at this interface; by then, it is usually too late to fix them. We really need to get feedback sooner, before the kernel is committed to a specific interface.
How do we get that feedback? Kerrisk's suggestions should not be surprising at this point: write a specification for the new feature, and write example programs that use it as well. Copy the patches liberally to the relevant mailing lists. Write documentation, or write an article for LWN. Don't just target kernel developers; publicize the details of the new interface broadly. Some developers, he said, have done all of these things, and the result has been far better interfaces. He called out Jeff Layton and his open file description locks (formerly file-private locks) work as an example of how to do it right.
Is all of this overkill? Maybe, but it results in making a lot of people's lives easier. Especially his, he allowed, but not only his. By doing this work, developers can help to get more people involved in the process of looking at a new interface; that is necessary, since he alone does not scale well. The original developer has all of the information needed to judge a new interface; by getting it out there, they can bring more eyeballs to bear and have a much better chance of getting the interface right the first time.
Slides from and video of the talk are available to those wanting more information.
[Your editor thanks Kernel Recipes for assisting with his travel to the event.]
Dirty COW and clean commit messages
We live in an era of celebrity vulnerabilities; at the moment, an unpleasant kernel bug called "Dirty COW" (or CVE-2016-5195) is taking its turn on the runway. This one is more disconcerting than many due to its omnipresence and the ease with which it can be exploited. But there is also some unhappiness in the wider community about how this vulnerability has been handled by the kernel development community. It may well be time for the kernel project to rethink its approach to serious security problems.
Dirty COW (which, naturally, has its own logo and web page, though this one is a bit on the satirical side) is a race condition in the kernel's memory-management subsystem. By timing things right, a local attacker can exploit the copy-on-write mechanism to turn a read-only mapping of a file into a writable mapping; with that, a file that should not be writable can be written to. It doesn't take much imagination to see how the ability to overwrite files could be used to escalate privileges in any of a number of ways.
The core code with the vulnerability is present in every Linux system (excluding, perhaps, no-MMU systems, which lack the sort of protection that has been defeated here in the first place). The known exploit depends on access to /proc/self/mem, which is not universally available, but, even on systems that do not provide that access, there are other ways to exploit the bug. The exploit happily bypasses almost every hardening technique out there; strong sandboxing with mechanisms like seccomp() might slow it down, but counting on that is probably not wise either. The only real protection is to upgrade to a kernel containing the fix.
Given that the nature of the vulnerability was known when the fix was applied, one might think that the development community would go out of its way to inform its users of the nature of the problem. The actual fix, when applied to the mainline, was grouped with a set of "cleanups"; Linus's merge commit for that set mentioned this fix at the very end: "Additionally, there's a fix for an ancient bug related to FOLL_FORCE and FOLL_WRITE by me." The fix itself is not much more illuminating.
The fix was (properly) rushed out into a set of stable updates, but there was no mention of the vulnerability there either. So users of the kernel are entirely reliant on others to inform them that an important vulnerability has been fixed and that they should really be updating their systems.
There is nothing new about this practice; Linus and others have long had a habit of, at best, neglecting to mention vulnerabilities that have been fixed in released kernels. There are a number of reasons given for operating this way, starting with a general disdain for the "security circus" and the industry that lives on responding to yesterday's vulnerabilities. Every kernel release fixes a great many serious bugs, they say, some of which certainly have security implications that nobody has (publicly) noticed yet. Highlighting specific vulnerabilities only draws attackers' attention to them while glossing over the fact that the only way to get all of the important fixes is to run the latest releases. Security bugs are just bugs, and we fix them like every other bug.
Everything expressed in that viewpoint could be said to be entirely true (though it glosses over the new vulnerabilities that can also come with the latest releases), but it is, in your editor's view, insufficient to justify this practice.
One of the core tenets of kernel development is that every change must describe the problem that is being solved and what the impact of the change will be. The core SubmittingPatches document, to which all developers are referred, says:
Core reviewer Andrew Morton asks developers to add information about user-visible impacts so often (example) that one assumes he has it bound to a special key sequence. The idea that a patch should describe what it does, and why, is deeply ingrained in the community; it's the only way that we can judge patches properly, and it's the only way that developers will know why a change was made when they encounter it in the repository years later. It's also the only way for users and distributors to know how important a particular patch is known to be. Failing to document the security impact of patches violates that rule at a time when that information is needed the most.
When, for example, a kernel bug can cause random corruption and loss of data within a filesystem, the fix describes the problem and its consequences. That way, users and distributors understand the importance of the change. Security bugs are bugs too, and their consequences should be spelled out as well. The fact that developers are not always aware that a bug has security implications does not excuse omitting that information when it is available.
It has been a long time since all but the laziest of attackers grepped the changelogs for explicit mention of vulnerabilities. As a rule, they are far more interested in the changes that introduce the vulnerabilities in the first place, or those that quietly fix vulnerabilities that the original developer might not even realize are there. Suppressing information about security fixes may keep vulnerabilities out of the awareness of those who are not actively looking for them — the users who may be exploited via those vulnerabilities — but it doesn't fool the exploit creators.
What suppressing security-related information does do is this: it calls into question the community's seriousness about dealing with security issues in general. When users are not told about the known effects of bugs, they rightly assume that they will be unaware of other silently fixed problems. So they can never know that they have the fixes for all known security issues unless they run the absolute latest releases. Staying on the leading edge is not an option for a huge portion of our user base, so they are forced to live with the worry that important fixes have been hidden from them. That is not a situation that engenders confidence in the kernel in general.
To summarize: the suppression of information about the security implications of a change runs counter to the kernel community's principles, makes life harder for our users and distributors, does little or nothing to slow down attackers, and hurts the kernel's reputation in general. A practice that might have made sense (but probably didn't) against script kiddies in the early 1990s is clearly out of place in 2016. Your editor humbly suggests to the kernel development community that the time has long since come to reconsider how security-related patches are handled.
Security
Qubes OS and colored-border spoofing
A bug in the graphical user interface (GUI) of the security-focused Qubes OS distribution ("A reasonably secure operating system") would allow malicious applications to fool users—just the kind of thing that the OS focuses on preventing. While the bug itself is fairly run of the mill, its existence is more evidence, if any is really needed, that security is hard—and that there are plentiful pitfalls for distributions of this type.
The idea behind Qubes OS has not changed much since we first looked at it back in 2010: isolate different applications (and groups of applications) into separate security domains using Xen virtual machines (VMs). That way, a compromise in a web browser, say, cannot access programs, devices (e.g. webcams or microphones), or data that is not sharing the same VM with the compromised program.
In order to help users differentiate the programs running in different domains, Qubes OS colors the window borders of each application with a color that indicates which "qube" it belongs to. In the default install, three qubes are created: work, personal, and untrusted. Programs running within each get the same color; the Get Started document suggests using green for a trusted qube, red for untrusted, and yellow/orange for those in between. An example from that page (seen at right) shows a word processor with a green border and a web browser with red.
But applications control the contents of their windows and can create their own "windows" that have any border color they choose, as long as those windows are completely within the main application window. As the Security Guidelines document says: "Remember that a 'red' Firefox, can always draw a 'green' password prompt box, and you don’t want to enter your password there!" It suggests using desktop effects (such as Alt-Tab or "Present Windows" in KDE/Plasma) or moving suspect windows to the trusted background wallpaper. That way, Qubes OS can show the window with its proper border color or otherwise indicate which VM the window belongs to.
Qubes OS does all of this by running the desktop environment and main X server in the privileged domain (i.e. dom0). A GUI protocol is used to communicate from the qubes to the X server. There is an X server and GUI client running in each qube that communicate with a GUI daemon running in dom0. That is what allows Qubes OS to enforce its rules on windows that are created in the qubes.
However a bug reported in mid-September provides another way that a malicious application could fool users into entering sensitive information into untrusted qubes. X11 has an override_redirect flag that can be set by applications for windows that are not meant to be handled by the window manager (typically for UI elements like tooltips or menus). The Qubes OS GUI component uses the flag to determine if it should manage the window (to draw small colored borders around menus/tooltips and to ensure that those borders are visible on the screen) or if the window manager will draw the window decorations that include the larger colored border.
Applications are allowed to change the value of the override_redirect property and Qubes OS will track that change. But, due to the bug, it only tracks it internally and does not actually make the change to the window using the X API. That means a window that disables override_redirect will be treated by Qubes OS as if it is being managed by the window manager, but the window manager will not be informed that it should be doing so. That will allow malicious applications to spoof windows and confuse users.
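For reference, changing that attribute from a client is a single Xlib call; the hedged sketch below is plain X11 code (not Qubes's own GUI daemon or agent code), showing the kind of override_redirect transition that the daemon tracked internally but never pushed back to the X server.

    /* Plain Xlib sketch (not Qubes code): toggling override_redirect on a
     * client window, the transition the buggy GUI daemon mis-handled. */
    #include <stdio.h>
    #include <unistd.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        XSetWindowAttributes attrs;
        Window win;

        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0, 300, 100,
                                  0, 0, WhitePixel(dpy, DefaultScreen(dpy)));

        /* Ask to bypass the window manager (menu/tooltip style); Qubes
         * draws only a thin colored border around such windows. */
        attrs.override_redirect = True;
        XChangeWindowAttributes(dpy, win, CWOverrideRedirect, &attrs);
        XMapWindow(dpy, win);
        XFlush(dpy);
        sleep(2);

        /* Flip it back: the window should now be handed to the window
         * manager for full decoration, but the buggy daemon only noted
         * the change internally and never applied it via the X API. */
        attrs.override_redirect = False;
        XChangeWindowAttributes(dpy, win, CWOverrideRedirect, &attrs);
        XFlush(dpy);
        sleep(2);

        XCloseDisplay(dpy);
        return 0;
    }

(Build with -lX11; the window itself is unremarkable, the attribute change is the interesting part.)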
Using the Alt-Tab window switcher or "Expose-like" effects (as suggested before inputting sensitive information) in the desktop environment may help users see that something strange is going on. But that adds another "ease of use" barrier for users—even if it has been the recommended practice all along.
While spoofing in the GUI is clearly the biggest threat, there is a kind of denial-of-service (DoS) attack that a malicious application can perform. Creating large windows that are not managed can completely obscure the rest of the desktop and, since there is no window manager placing controls on the windows for closing or minimizing them, the user will have no easy way to get rid of them. The advisory does list some ways around that problem, which may involve blindly typing into a privileged terminal application—something that seems a bit worrisome. This GUI DoS attack has been known for some time and the advisory notes that "Qubes should offer a more user-friendly solution to deal with such GUI DoS attacks".
The problem has existed since the initial commit of the GUI daemon back in 2010. It was fixed in mid-September and is available in version 3.2.5 and higher of the qubes-gui-dom0 package (or 3.1.5 and higher for those running Qubes OS 3.1).
The bug in some ways highlights the difficulties in providing a more secure environment for users—and in how to display that information in a cohesive and comprehensible way. Qubes OS has done an admirable job of trying to make it easier for users but, as always, bugs will creep in. Part of the problem may be the need to rely on applications that have been built on top of protocols and libraries that long pre-date the security needs of today's users. The window handling in X11 leads to the suggested Alt-Tab dance, for example.
One wonders if Wayland, which was designed with security more in mind, will eventually help here. It will seemingly be a while (still) before Wayland-native applications are commonplace and, of course, bugs will still be present, but a security-focused design may eventually lead to better desktop security for Qubes OS and others—or not, only time will truly tell.
Brief items
Security quotes of the week
More information about Dirty COW (aka CVE-2016-5195)
The security hole fixed in the 4.8.3, 4.7.9, and 4.4.26 stable kernel updates has been dubbed Dirty COW (CVE-2016-5195) by a site devoted to the kernel privilege escalation vulnerability. There is some indication that it is being exploited in the wild. Ars Technica has some additional information. The Red Hat bugzilla entry and advisory are worth looking at as well.
New vulnerabilities
asterisk: two vulnerabilities
Package(s): asterisk
CVE #(s): CVE-2016-2232 CVE-2016-7551
Created: October 26, 2016; Updated: October 26, 2016
Description: From the CVE entry: Asterisk Open Source 1.8.x, 11.x before 11.21.1, 12.x, and 13.x before 13.7.1 and Certified Asterisk 1.8.28, 11.6 before 11.6-cert12, and 13.1 before 13.1-cert3 allow remote authenticated users to cause a denial of service (uninitialized pointer dereference and crash) via a zero length error correcting redundancy packet for a UDPTL FAX packet that is lost. (CVE-2016-2232) From the Debian advisory: Multiple vulnerabilities have been discovered in Asterisk, an open source PBX and telephony toolkit, which may result in denial of service or incorrect certificate validation.
bind: denial of service
Package(s): bind
CVE #(s): CVE-2016-2848
Created: October 21, 2016; Updated: October 26, 2016
Description: From the Red Hat advisory: A denial of service flaw was found in the way BIND handled packets with malformed options. A remote attacker could use this flaw to make named exit unexpectedly with an assertion failure via a specially crafted DNS packet. (CVE-2016-2848)
graphicsmagick: multiple vulnerabilities
Package(s): graphicsmagick
CVE #(s): CVE-2016-6823 CVE-2016-7101 CVE-2016-7515 CVE-2016-7517 CVE-2016-7519 CVE-2016-7522 CVE-2016-7524 CVE-2016-7528 CVE-2016-7529 CVE-2016-7531 CVE-2016-7533 CVE-2016-7537
Created: October 26, 2016; Updated: October 26, 2016
Description: From the openSUSE advisory: Security update for CVE-2016-7529 [boo#1000399], CVE-2016-7528 [boo#1000434], CVE-2016-7515 [boo#1000689], CVE-2016-7517 [boo#1000693], CVE-2016-7519 [boo#1000695], CVE-2016-7522 [boo#1000698], CVE-2016-7524 [boo#1000700], CVE-2016-7531 [boo#1000704], CVE-2016-7533 [boo#1000707], CVE-2016-7537 [boo#1000711], CVE-2016-6823 [boo#1001066], and CVE-2016-7101 [boo#1001221]; do not divide by zero in WriteTIFFImage [boo#1002206]; fix buffer overflow [boo#1002209].
graphicsmagick: multiple vulnerabilities
Package(s): graphicsmagick
CVE #(s): CVE-2015-8957 CVE-2015-8958 CVE-2016-7516 CVE-2016-7526 CVE-2016-7527
Created: October 26, 2016; Updated: October 26, 2016
Description: From the openSUSE advisory: CVE-2015-8957: Buffer overflow in sun file handling (bsc#1000690). CVE-2015-8958: Potential DOS in sun file handling due to malformed files (bsc#1000691). CVE-2016-7516: Out of bounds problem in rle, pict, viff and sun files (bsc#1000692). CVE-2016-7526: Out-of-bounds write in ./MagickCore/pixel-accessor.h (bsc#1000702). CVE-2016-7527: Out of bound access in wpg file coder (bsc#1000436).
graphicsmagick: three vulnerabilities
Package(s): graphicsmagick
CVE #(s): CVE-2016-8682 CVE-2016-8683 CVE-2016-8684
Created: October 26, 2016; Updated: October 26, 2016
Description: From the Mageia advisory: Stack-based buffer overflow in ReadSCTImage (CVE-2016-8682). Memory allocation failure in ReadPCXImage (CVE-2016-8683). Memory allocation failure in MagickMalloc (CVE-2016-8684).
kdump: denial of service
Package(s): kdump
CVE #(s): CVE-2016-5759
Created: October 24, 2016; Updated: October 26, 2016
Description: From the openSUSE advisory: CVE-2016-5759: Use full path to dracut as argument to bash. See the bug report for more information.
kernel: local privilege escalation (Dirty COW)
Package(s): kernel
CVE #(s): CVE-2016-5195
Created: October 26, 2016; Updated: November 1, 2016
Description: The so-called "Dirty COW" vulnerability is a race condition in the kernel's memory-management code that is readily exploitable by a local attacker to escalate privileges. The bug is several years old, and numerous exploits exist. Fixes were shipped in the 4.8.3, 4.7.9, and 4.4.26 stable updates and will appear in the mainline in 4.9.
kernel: three vulnerabilities
Package(s): kernel
CVE #(s): CVE-2016-0823 CVE-2016-6327 CVE-2016-7117
Created: October 26, 2016; Updated: February 15, 2017
Description: From the CVE entries: The pagemap_open function in fs/proc/task_mmu.c in the Linux kernel before 3.19.3, as used in Android 6.0.1 before 2016-03-01, allows local users to obtain sensitive physical-address information by reading a pagemap file, aka Android internal bug 25739721. (CVE-2016-0823) drivers/infiniband/ulp/srpt/ib_srpt.c in the Linux kernel before 4.5.1 allows local users to cause a denial of service (NULL pointer dereference and system crash) by using an ABORT_TASK command to abort a device write operation. (CVE-2016-6327) Use-after-free vulnerability in the __sys_recvmmsg function in net/socket.c in the Linux kernel before 4.5.2 allows remote attackers to execute arbitrary code via vectors involving a recvmmsg system call that is mishandled during error processing. (CVE-2016-7117)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2016-8658
Created: October 24, 2016; Updated: October 26, 2016
Description: From the openSUSE advisory: Stack-based buffer overflow in the brcmf_cfg80211_start_ap function in drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c in the Linux kernel allowed local users to cause a denial of service (system crash) or possibly have unspecified other impact via a long SSID Information Element in a command to a Netlink socket (bnc#1004462).
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2015-8956 CVE-2016-7042 CVE-2016-7425
Created: October 20, 2016; Updated: December 1, 2016
Description: From the Debian advisory: CVE-2015-8956: It was discovered that missing input sanitising in RFCOMM Bluetooth socket handling may result in denial of service or information leak. CVE-2016-7042: Ondrej Kozina discovered that incorrect buffer allocation in the proc_keys_show() function may result in local denial of service. CVE-2016-7425: Marco Grassi discovered a buffer overflow in the arcmsr SCSI driver which may result in local denial of service, or potentially, arbitrary code execution.
libX11: denial of service
Package(s): libX11
CVE #(s): CVE-2016-7942 CVE-2016-7943
Created: October 24, 2016; Updated: October 27, 2016
Description: From the openSUSE advisory: Insufficient validation of data from the X server allowed out of boundary memory read (bsc#1002991).
mozilla: two vulnerabilities
Package(s): firefox seamonkey
CVE #(s): CVE-2016-5287 CVE-2016-5288
Created: October 26, 2016; Updated: November 9, 2016
Description: From the openSUSE advisory: CVE-2016-5287: Crash in nsTArray_base (bsc#1006475). CVE-2016-5288: Web content can read cache entries (bsc#1006476).
mysql: multiple unspecified vulnerabilities
Package(s): mysql
CVE #(s): CVE-2016-5584 CVE-2016-7440
Created: October 25, 2016; Updated: November 16, 2016
Description: From the Ubuntu advisory: Multiple security issues were discovered in MySQL and this update includes new upstream MySQL versions to fix these issues. MySQL has been updated to 5.5.53 in Ubuntu 12.04 LTS and Ubuntu 14.04 LTS. Ubuntu 16.04 LTS and Ubuntu 16.10 have been updated to MySQL 5.7.16.
nginx: privilege escalation
Package(s): nginx
CVE #(s): CVE-2016-1247
Created: October 26, 2016; Updated: January 16, 2017
Description: From the Debian advisory: Dawid Golunski reported that the nginx web server packages in Debian suffered from a privilege escalation vulnerability (www-data to root) due to the way log files are handled. This security update changes the ownership of the /var/log/nginx directory to root. In addition, /var/log/nginx has to be made accessible to local users, and local users may be able to read the log files themselves until the next logrotate invocation.
nspr, nss: information disclosure
Package(s): nspr nss
CVE #(s): (none listed)
Created: October 26, 2016; Updated: October 26, 2016
Description: From the Debian LTS advisory: The Network Security Service (NSS) libraries use environment variables to configure lots of things, some of which refer to filesystem locations. Others can degrade the operation of NSS in various ways, forcing compatibility modes and so on. Previously, these environment variables were not ignored for SUID binaries. This version of the NetScape Portable Runtime Library (NSPR) introduces a new API, PR_GetEnvSecure, to address this.
openslp: code execution
Package(s): openslp
CVE #(s): CVE-2016-7567
Created: October 21, 2016; Updated: October 26, 2016
Description: From the Mageia advisory: A memory corruption bug was present in openslp due to lack of bounds checking in SLPFoldWhiteSpace() (CVE-2016-7567).
perl-Image-Info: information disclosure
Package(s): perl-Image-Info
CVE #(s): CVE-2016-9181
Created: October 26, 2016; Updated: November 4, 2016
Description: From the Red Hat bugzilla: The Image::Info package takes no precautions against external entity expansion in SVG files. A crafted file could cause information disclosure or denial of service. See also the CVE assignment email.
php: multiple vulnerabilities
Package(s): php
CVE #(s): CVE-2016-9137
Created: October 24, 2016; Updated: November 21, 2016
Description: PHP 5.6.27 fixes multiple vulnerabilities. See the PHP changelog for details. One of these issues was assigned CVE-2016-9137.
php-pecl-zip: multiple vulnerabilities
Package(s): php-pecl-zip
CVE #(s): (none listed)
Created: October 24, 2016; Updated: October 26, 2016
Description: From the Fedora advisory: **Version 1.13.5** - Fixed bug php#72660 (NULL Pointer dereference in zend_virtual_cwd). (Laruence) - Fixed bug php#68302 (impossible to compile php with zip support). (cmb) - Fixed bug php#70752 (Depacking with wrong password leaves 0 length files). (cmb)
potrace: multiple vulnerabilities
Package(s): potrace
CVE #(s): CVE-2016-8694 CVE-2016-8695 CVE-2016-8696 CVE-2016-8697 CVE-2016-8698 CVE-2016-8699 CVE-2016-8700 CVE-2016-8701 CVE-2016-8702 CVE-2016-8703
Created: October 26, 2016; Updated: October 26, 2016
Description: From the Debian LTS advisory: CVE-2016-8694, CVE-2016-8695, CVE-2016-8696: Multiple NULL pointer dereferences in bm_readbody_bmp. This bug was discovered by Agostino Sarubbo of Gentoo. CVE-2016-8697: Division by zero in bm_new. This bug was discovered by Agostino Sarubbo of Gentoo. CVE-2016-8698, CVE-2016-8699, CVE-2016-8700, CVE-2016-8701, CVE-2016-8702, CVE-2016-8703: Multiple heap-based buffer overflows in bm_readbody_bmp. This bug was discovered by Agostino Sarubbo of Gentoo.
qemu: denial of service
Package(s): qemu
CVE #(s): CVE-2016-7155
Created: October 24, 2016; Updated: October 26, 2016
Description: From the SUSE advisory: In the VMWARE PVSCSI paravirtual SCSI bus, an OOB access and/or infinite loop issue could have allowed a privileged user inside the guest to crash the QEMU process, resulting in a DoS (bsc#997858).
qemu: three vulnerabilities
Package(s): qemu
CVE #(s): CVE-2016-8577 CVE-2016-8578 CVE-2016-8669
Created: October 26, 2016; Updated: October 26, 2016
Description: From the Debian LTS advisory: CVE-2016-8577: Quick Emulator (Qemu) built with the virtio-9p back-end support is vulnerable to a memory leakage issue. It could occur while doing an I/O read operation in the v9fs_read() routine. CVE-2016-8578: Quick Emulator (Qemu) built with the virtio-9p back-end support is vulnerable to a null pointer dereference issue. It could occur while doing an I/O vector unmarshalling operation in the v9fs_iov_vunmarshal() routine. CVE-2016-8669: Quick Emulator (Qemu) built with the 16550A UART emulation support is vulnerable to a divide by zero issue. It could occur while updating serial device parameters in 'serial_update_parameters'.
virtualbox: multiple unspecified vulnerabilities
Package(s): virtualbox
CVE #(s): CVE-2016-5501 CVE-2016-5538 CVE-2016-5605 CVE-2016-5608 CVE-2016-5610 CVE-2016-5611 CVE-2016-5613
Created: October 25, 2016; Updated: January 24, 2017
Description: From the NVD entries: CVE-2016-5501: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect confidentiality, integrity, and availability via vectors related to Core, a different vulnerability than CVE-2016-5538. CVE-2016-5538: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect confidentiality, integrity, and availability via vectors related to Core, a different vulnerability than CVE-2016-5501. CVE-2016-5605: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.1.4 in Oracle Virtualization allows remote attackers to affect confidentiality and integrity via vectors related to VRDE. CVE-2016-5608: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect availability via vectors related to Core, a different vulnerability than CVE-2016-5613. CVE-2016-5610: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect confidentiality, integrity, and availability via vectors related to Core. CVE-2016-5611: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect confidentiality via vectors related to Core. CVE-2016-5613: Unspecified vulnerability in the Oracle VM VirtualBox component before 5.0.28 and 5.1.x before 5.1.8 in Oracle Virtualization allows local users to affect availability via vectors related to Core, a different vulnerability than CVE-2016-5608.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.9-rc2, released on October 23. Linus is asking for people to test one feature in particular: "My favorite new feature that I called out in the rc1 announcement (the virtually mapped stacks) is possibly implicated in some crashes that Dave Jones has been trying to figure out, so if you want to be helpful and try to see if you can give more data, please make sure to enable CONFIG_VMAP_STACK."
The latest 4.9 regression report shows 14 known problems.
Stable updates: 4.8.3, 4.7.9, and 4.4.26, containing the "Dirty COW" fix, were released on October 20. 4.8.4, 4.7.10, and 4.4.27 followed on October 22. Note that 4.7.10 is the end of the 4.7.x series.
The 4.8.5 (140 changes) and 4.4.28 (112 changes) updates are in the review process as of this writing; they can be expected on or after October 28.
Quotes of the week
If we try to make the page cache the "one true IO optimisation source" then we're screwing ourselves because the incoming IO technologies simply don't require it anymore.
The initial bus1 patch posting
The bus1 message-passing mechanism is the successor to the "kdbus" project; it was covered here in August. The patches have now been posted for review. "While bus1 emerged out of the kdbus project, bus1 was started from scratch and the concepts have little in common. In a nutshell, bus1 provides a capability-based IPC system, similar in nature to Android Binder, Cap'n Proto, and seL4."
Kernel development news
Making swapping scalable
The swap subsystem is where anonymous pages (those containing program data not backed by files in the filesystem) go when memory pressure forces them out of RAM. A widely held view says that swapping is almost always bad news; by the time a Linux system gets to the point where it is swapping out anonymous pages, the performance battle has already been lost. So it is not at all uncommon to see Linux systems configured with no swap space at all. Whether the relatively poor performance of swapping is a cause or an effect of that attitude is a matter for debate. What is becoming clearer, though, is that the case for using swapping is getting stronger, so there is value in making swapping faster.
Swapping is becoming more attractive as the performance of storage devices — solid-state storage devices (SSDs) in particular — increases. Not too long ago, moving a page to or from a storage device was an incredibly slow operation, taking several orders of magnitude more time than a direct memory access. The advent of persistent-memory devices has changed that ratio, to the point where storage speeds are approaching main-memory speeds. At the same time, the growth of cloud computing gives providers a stronger incentive to overcommit the main memory on their systems. If swapping can be made fast enough, the performance penalty for overcommitting memory becomes insignificant, leading to better utilization of the system as a whole.
As Tim Chen noted in a recently posted patch set, the kernel currently imposes a significant overhead on page faults that must retrieve a page from swap. The patch set addresses that problem by increasing the scalability of the swap subsystem in a few ways.
In current kernels, a swap device (a dedicated partition or a special file within a filesystem) is represented by a swap_info_struct structure. Among the many fields of that structure is swap_map, a pointer to a byte array, where each byte contains the reference count for a page stored on the swap device. The structure looks vaguely like this:
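(The sketch below is abridged and approximate for kernels of that era; many fields are omitted, and the real definition lives in include/linux/swap.h.)

    /* Abridged, approximate sketch of struct swap_info_struct (circa 4.8);
     * see include/linux/swap.h for the full, authoritative definition. */
    struct swap_info_struct {
        unsigned long flags;                 /* SWP_USED and friends */
        unsigned int max;                    /* extent of the swap_map */
        unsigned char *swap_map;             /* one usage count per page */
        struct swap_cluster_info *cluster_info;         /* SSD clusters */
        struct percpu_cluster __percpu *percpu_cluster; /* per-CPU cluster */
        spinlock_t lock;                     /* protects swap-map scanning state */
        /* ... many other fields ... */
    };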
Some of the swap code is quite old; a fair amount dates back to the beginning of the Git era. In the early days, the kernel would attempt to concentrate swap-file usage toward the beginning of the device — the left end of the swap_map array shown above. When one is swapping to rotating storage, this approach makes sense; keeping data in the swap device together should minimize the amount of seeking required to access it. It works rather less well on solid-state devices, for a couple of reasons: (1) there is no seek delay on such devices, and (2) the wear-leveling requirements of SSDs are better met by spreading the traffic across the device.
In an attempt to perform better on SSDs, the swap code was changed in 2013 for the 3.12 release. When the swap subsystem knows that it is working with an SSD, it divides the device into clusters, as shown below:
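(In place of a diagram, here is a rough sketch of the structures involved; field comments are approximate. The swap_map is conceptually divided into runs of 256 entries, each described by a swap_cluster_info, and each CPU keeps track of the cluster it is currently allocating from.)

    /* Approximate sketch of the SSD clustering structures (circa 4.8);
     * each cluster covers SWAPFILE_CLUSTER (256) swap_map entries. */
    struct swap_cluster_info {
        unsigned int data:24;   /* free count, or index of the next cluster */
        unsigned int flags:8;   /* CLUSTER_FLAG_FREE and friends */
    };

    struct percpu_cluster {
        struct swap_cluster_info index; /* this CPU's current cluster */
        unsigned int next;              /* likely next allocation offset */
    };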
The percpu_cluster pointer points to a different cluster for each CPU on the system. With this arrangement, each CPU can allocate pages from the swap device from within its own cluster, with the result that those allocations are spread across the device. In theory, this approach is also more scalable, except that, in current kernels, much of the scalability potential has not yet been achieved.
The problem, as is so often the case, has to do with locking. CPUs do not have exclusive access to any given cluster (even the one indicated by percpu_cluster), so they must acquire the lock spinlock in the swap_info_struct structure before any changes can be made. There are typically not many swap devices on any given system — there is often only one — so, when swapping is heavy, that spinlock is heavily contended.
Spinlock contention is not the path to high scalability; in this case, that contention is not even necessary. Each cluster is independent and can be allocated from without touching the others, so there is no real need to wait on a single global lock. The first order of business in the patch set is thus to add a new lock to each entry in the cluster_info array; a single-bit lock is used to minimize the added memory consumption. Now, any given CPU can allocate pages from (or free pages into) its cluster without contending with the others.
Even so, there is overhead in taking the lock, and there can be cache-line contention when accessing the lock in other CPUs' clusters (as can often happen when pages are freed, since nothing forces them to be conveniently within the freeing CPU's current cluster). To minimize that cost, the patch set adds new interfaces to allocate and free swap pages in batches. Once a CPU has allocated a batch of swap pages, it can use them without even taking the local cluster lock. Freed swap pages are accumulated in a separate cache and returned in batches. Interestingly, freed pages are not reused by the freeing CPU in the hope that freeing them all will help minimize fragmentation of the swap space.
There is one other contention point that needs to be addressed. Alongside the swap_info_struct structure, the swap subsystem maintains an address_space structure for each swap device. This structure contains the mapping between pages in memory and their corresponding backing store on the swap device. Changes in swap allocation require updating the radix tree in the address_space structure, and that radix tree is protected by another lock. Since, once again, there is typically only one swap device in the system, that is another global lock for all CPUs to contend for.
The solution in this case is a variant on the clustering approach. The address_space structure is replicated into many structures, one for each 64MB of swap space. If the swap area is sized at (say) 10GB, the single address_space will be split 160 ways, each of which has its own lock. That clearly reduces the scope for contention for any individual lock. The patch also takes care to ensure that the initial allocation of swap clusters puts each CPU into a separate address_space, guaranteeing that there will be no contention at the outset (though, once the system has been operating for a while, the swap patterns will become effectively random).
According to Chen, current kernels add about 15µs of overhead to every page fault that is satisfied by a read from a solid-state swap device. That, he says, is comparable to the amount of time it takes to actually read the data from the device. With the patches applied, that overhead drops to 4µs, a significant improvement. There have been no definitive comments on the patch set as of this writing, but it seems like the sort of improvement that the swap subsystem needs to work well with contemporary storage devices.
A report from the documentation maintainer
It is now nearly exactly two years since my ill-advised decision to accept the role of the maintainer of the kernel's documentation collection. After a bit of a slow start, things have started to happen in the documentation area. As part of the preparation exercise for an upcoming Kernel Summit session on documentation, here is a report on where things stand and where they are going.
The biggest overall change, of course, is the transition away from a homebrew DocBook-based toolchain to a formatted documentation setup based on the Sphinx system, as was described in this article last July. The transition made some waves when it hit; in the 4.8-rc1 announcement, Linus noted that a full 20% of the patch set was documentation updates. It is fair to say that kernel developers do not ordinarily put that much effort into documentation. Much of the credit for this work goes to Daniel Vetter and Mauro Carvalho Chehab, who worked hard to transition the GPU and media subsystem documentation, respectively, along with Jani Nikula and Markus Heiser, who made the Sphinx-based plumbing work.
Perhaps unsurprisingly, there have been places where Sphinx has not worked out quite as well as desired. Perhaps the biggest initial disappointment was PDF output. The original plan was to use rst2pdf, a relatively simple tool that offered the possibility of creating PDF files without a heavy toolchain. It does indeed create pretty output for simple input files, but it falls over completely with more complex documents; after a while, it became clear that it was not going to meet the kernel community's needs.
That means falling back to LaTeX in 4.9; LaTeX works, but is not without its drawbacks. LaTeX is not a small system; the basic install on my openSUSE Tumbleweed system was over 1,700 packages. The base Fedora installation is much smaller, but that is not necessarily better. There, getting the documentation built requires executing a seemingly endless loop of "which .sty file is missing now, and which package provides it?" work. Part of the idea behind switching to Sphinx was to make setting up the toolchain easier; that goal has still been met for those who are happy with HTML or EPUB output, but remains elusive for PDF output.
After 4.8
The 4.7 kernel contains 34 "template" files that are processed by the DocBook-based toolchain; that number is down to 30 in the 4.9-rc kernels. The conversion of the remaining template files continues; eventually they will all be done and the DocBook dependency can be removed. The conversion is generally easy to do (there is a script included in the kernel source that helps), but making it all look nice can take a little longer. And updating some of the kernel's ancient documentation to match current reality may take longer yet.
A few dozen template files are one thing, but what about the various plain-text files scattered around the documentation directory? There are over 2,000 of these (not counting the device-tree files), some rather more helpful than others. Very little organizational thought has been applied to this directory. As former documentation maintainer Rob Landley put it in 2007, "Documentation/* is a gigantic mess, currently organized based on where random passers-by put things down last". It has improved little since then.
Now we are trying to improve it by applying some structure to the directory and by bringing the plain-text files into the growing body of Sphinx-based documentation. The latter task is easy — most of the plain-text files are almost in the reStructuredText format used by Sphinx already, so only minor tweaks are required. The organizational task is harder.
The 4.9 kernel will contain a couple of new sub-manuals in the Sphinx-based documentation. One of them, called dev-tools, is a collection of the plain-text documents about tools that can be used in kernel development. The other, driver-api, gathers information of interest to device-driver developers. Both of these books are works in progress; they exist in their current form mostly to show the way forward.
In 4.10, the chances are good that three more major sub-manuals will put in an appearance. One of them, tentatively called core-api, will be a collecting point for documentation about the core-kernel interfaces. That information is currently widely distributed among plain-text files and kerneldoc comments within the source itself; it will be good to have it together in one place — sometime well in the future, when the process of creating this manual has run its course.
Next, the process book will hold our (fairly extensive) documentation on how to work with the kernel development community. It includes the often-cited SubmittingPatches document (now process/submitting-patches.rst), along with information on coding style, email client configuration, and more. This work (done by Mauro) was ready in time for 4.9, but I put the brakes on it out of fear that moving files like SubmittingPatches would leave a lot of dangling links in the brains of the development community. Various discussions over the past month have failed to turn up even a single developer who was unhappy about it, though, so the current plan is for this work to proceed for 4.10.
The last proposed book recognizes that there are multiple audiences for the kernel's documentation; it will (probably) be called admin-guide and will be aimed at system administrators, users, and others who are trying to figure out how to get the kernel to do what they want. Much of our documentation covers module parameters, tuning knobs, and user-space APIs; collecting and organizing it should make it more accessible for our users.
Open issues
As this work proceeds, a number of issues have come up that are still in need of resolution; many of them come down to a tradeoff between simplicity and functionality. On the simplicity side, it is desirable to keep the documentation toolchain as simple and easy to set up as possible so that anybody can build the docs. On the other hand, making use of more functionality (and thus adding to the toolchain's dependencies) enables the creation of more expressive documentation.
One such issue is the use of the Sphinx math extension, which supports the formatting of mathematical expressions using the LaTeX syntax. As of 4.9, the media documentation is using this extension, but there is a cost: it forces the use of LaTeX even to build the HTML documentation. The hope is to find an easy way to fall back gracefully when LaTeX is unavailable in order to soften this dependency.
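For reference, the extension lets a document embed LaTeX markup directly; a hedged sketch (the formula is the standard BT.601 luma expression, chosen only as a plausible example of the sort of thing the media documentation needs):

    .. math::

        Y' = 0.299 R' + 0.587 G' + 0.114 B'

When LaTeX is available, that renders as a typeset equation; when it is not, the HTML build runs into exactly the dependency problem described above.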
A deeper question has to do with the automatic generation of reStructuredText documentation from other files in the kernel tree. That is already done with the in-source kerneldoc comments, of course, but there is interest in pulling in a number of other types of information as well. That extends as far as reformatting the MAINTAINERS file as part of the documentation build process. There are patches circulating to allow, to a varying extent, the running of arbitrary programs during the documentation build to do this generation; these patches run into concerns about security and maintainability. The form of the solution to this problem is not yet entirely clear.
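The kerneldoc case shows roughly what such generation looks like from the document side; a hedged sketch of the directive provided by the kernel's Sphinx extension (the file path and option are illustrative):

    .. kernel-doc:: include/linux/list.h
       :internal:

The extension runs the kernel-doc script over the named file at build time and splices the resulting reStructuredText into the document; the open question is how far beyond kerneldoc comments that kind of build-time generation should be allowed to go.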
Interestingly, there is significant disagreement over the removal of ancient, obsolete documentation. Do we really need, say, documentation from 1996 describing how to manually bisect bugs in the 1.3.x kernel? Resistance to removing such cruft usually comes in the form of "but it might be useful to somebody someday." But we do not retain unused code on that basis; we recognize that there is a cost to carrying such code in the kernel. There is, likewise, a cost to carrying old, obsolete documentation, paid by both the documentation maintainers and the users the documentation is meant to help. In my opinion, some spring cleaning is in order, even if spring is a distant prospect in the northern hemisphere.
One other possibly contentious change has been suggested by a few people now. Documentation/ is a long name, and is the only top-level directory in the kernel starting with a capital letter. One can joke that this distinction highlights the importance of documentation, but it's also a lot for people to type. So I've been asked a few times if it could be renamed to something like "docs". That, I think, is a question for the Kernel Summit.
Finally, it should be said that much of the above consists of a rearrangement of a bunch of kernel documentation that is of varying quality and is not all current. It makes the documentation prettier and, hopefully, easier to find, but does not yet turn it into a coherent body of accessible and useful material. There is a good case for doing the organizational work first, as long as we don't forget that there is a lot more to be done.
Despite the disagreements over how to proceed in some of these areas, and despite the magnitude of the task, there is a broad consensus that the time has come to improve the kernel's documentation. More effort is going into this part of the kernel than has been seen for some years. With any luck, kernel developers, distributors, and users will all be the beneficiaries of this work. For anybody who is looking for a way to help with kernel development, there are plenty of opportunities in the documentation area; we would be happy to hear from you. The linux-doc list at vger.kernel.org is a relatively calm place to work on documentation without subjecting oneself to the linux-kernel firehose. We look forward to your patches.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Debian's "global" package visits the Technical Committee
Debian package maintainers have famously wide latitude in making technical decisions about their packages. There are few checks on that power, though the distribution's Technical Committee (TC) has some rarely used abilities to rein in maintainers who are not acting in the best interests of Debian and its users. A recent request that the TC help resolve a dispute with the maintainer of the package for the GNU GLOBAL source-code tagging system shows just how knotty solving that kind of problem can be.
The Debian "global" package is based on version 5.7.1 of GLOBAL, which was released in 2008 and is seriously obsolete according to the upstream project. Back in 2010, a bug was filed that asked package maintainer Ron Lee to update to the then-current 5.9.2 release. Over the next several years, others piled on, asking that the Debian package be updated, with no response from Lee. But, when GLOBAL developer Shigio Yamaguchi orphaned the package in 2013, Lee did respond; he pointed out that Debian was then frozen for the "Wheezy" (Debian 7) release, but also noted that there is a fundamental problem with part of the newer versions of GLOBAL:
Lee had patched around that problem (with the "htags" feature) in the versions of the package he released from 1999 to 2009; he also suggested a fix to the upstream project that was apparently rejected by Yamaguchi. The htags feature provides a way to browse the source code and tags using CGI scripts that are generated from the source; those scripts must be installed in a system-wide location, which requires root privileges.
In response to Lee, Yamaguchi noted that any bug reports or feature requests for GLOBAL should be posted to the bug-global mailing list; Lee argued, however, that nothing had changed since the issue had been discussed earlier (evidently in private email). Over the next year or so, others asked in the bug about updating the Debian global package, once again without any response.
In March 2014, though, a different tack was tried. Punit Agrawal posted about some progress he had made in creating an updated global package. He had not encountered the problem with the htags feature, since he doesn't use it. That feature is believed to be little-used, though there is no real data on exactly how little-used it is.
What is clear is that Lee is frustrated with how GLOBAL is developed and the choices the project has made in how it gets installed, while some of the users of GLOBAL are frustrated with the way-old version of it that is packaged for Debian. Lee described the problems he sees in response to Agrawal's efforts to package the newer version:
A generated script that the user is required to run as root, or making a privileged system directory 777 writable is not such an interface.
The obvious solution (to some, at least) is to simply patch out the htags piece, but Lee was opposed to that option as well: "Saying 'I don't need this, so I'm just going to remove it for everyone else to rush out the bits that _I_ want' is not an acceptable solution." The security issue that Lee is raising is serious, but tangential to the main use of the tool, at least for those posting in the bug. Yamaguchi's suggestion to create a "global6" package that removed the CGI piece was also shot down by Lee. Instead, he argued, any bugs in the existing global package should be identified and fixed.
That's where things stood until Wei Liu filed another bug in March 2016 asking that the global package be updated to 6.5.2. Several others agreed that an update would be useful, but Lee did not reply. By October, though, Vincent Bernat, who also participated in the first bug report, was seemingly fed up; he suggested that if another maintainer could be found, this problem could be taken to the TC for resolution. Agrawal said that he was willing to take the global package over and would prefer that Lee be given the chance to reconsider, rather than simply routing around him by creating a global6 package.
Enter the TC
To that end, Bernat filed the TC bug on October 19. In it, he asked that the TC either find a way to convince Lee to update the package or overrule his decision not to do so; if being overruled is not acceptable to Lee, he asked that the maintainership be handed off to Agrawal. Wookey agreed that the problem appears to be intractable at this point:
1) upload a new package,
2) allow an upload which removes the offending CGI bit (which users don't really care about anyway)
3) write something to change the local behaviour to be satisfactory.
Upstream has been clear that they are not going to change how it works, but they don't care if debian omits that bit or changes it locally, so it seems to me that a maintainer has to do one of the above three things. 5 years of this is more than long enough to conclude that they are not doing their job adequately, and because of our strong maintainership culture, are preventing other people from doing that job.
As might be guessed, Lee sees things rather differently. In his lengthy reply (one of several long responses to the TC bug), it is clear that he believes the fault lies with the upstream GLOBAL project being unwilling to change or drop the htags feature so that it can be installed sanely on Debian. In fact, he said, the project has changed the interfaces that he and Yamaguchi worked out in 1999 such that his changes to GLOBAL to package it for Debian no longer work. Thus, he is between a rock and a hard place:
He and Bernat went back and forth a bit, but it is clear that no one is changing any minds and progress is not being made. Many of the participants obviously have their heels dug in—Yamaguchi on htags staying in the upstream version, Lee on the insecurity of the htags, Bernat and others on the global version currently packaged. Removing htags from the Debian package would seem an option, but Lee is leery of doing that without getting more information on who is actually using the feature; he is worried that any htags users will be unhappy with a new version without that feature:
TC member Tollef Fog Heen tried to find some middle ground. He agreed with Lee that making uninformed decisions is not sensible, but wondered if this had all been going on too long:
I'm leaning towards dropping htags, since that seems to have problems security-wise (the idea of generated CGIs don't fill me with joy, at least, and hopefully not many others either), and also has a lot less value today than it used to back in the days.
It turns out that Lee may be coming around to that conclusion as well. The changes made to GLOBAL in the interim have made it even less palatable to ship the htags piece, and he had suggested several times that Doxygen could easily replace htags for those who actually do need to browse the source code and tags from the web. He also noted that patching the existing version of the Debian global package to fix the problems that people are running into might be possible as well.
It isn't exactly clear which path Lee will take, but he seems to be backing down, at least a bit, on the requirement to keep the htags piece in the Debian package. But, what is clear is that it took an escalation to the TC to even get Lee to engage about the problem; progress may be made soon, but it seems somewhat counterproductive to force an escalation to make that happen. As Wookey put it:
Certainly, blocking any updates for nearly a decade over a dispute about how a certain feature should work—even because of security concerns—is not helping anyone. If Lee lost confidence in the upstream project, as it appears he has, then allowing someone else to take over the maintainership would seem the right course. Protecting existing users from having their workflow interrupted by removal of an ancillary feature is to be lauded at some level, but keeping up with the "new shiny" is part of the maintainer's responsibility as well. It can be a difficult balance to strike.
No formal resolution of the TC bug has occurred; the members may be hoping to see the problem resolved without actually having to take that step. When pressed, Lee can seemingly muster up strong and voluminous arguments for his course of action, but it is a bit hard for outsiders to see things his way. While he may be skeptical that the "new shiny" adds any real value, many users of the package have effectively routed around him to simply take the upstream version because the ancient Debian version does not work for them. That is an unfortunate outcome—and one to be avoided if possible.
Brief items
Distribution quotes of the week
At LinuxCon Europe, you can still meet people who have never heard of Fedora and get reactions such as, “Oh, so Fedora is a variant of Linux, right?”
FOSDEM 2017 - Distributions Devroom
The Distributions devroom at FOSDEM will take place February 4 in Brussels, Belgium. The call for participation deadline is November 25.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 684 (October 24)
- Lunar Linux weekly news (October 21)
- openSUSE news (October 26)
- openSUSE Tumbleweed – Review of the Week (October 21)
Open Source Operating Systems for IoT (Linux.com)
Linux.com takes a look at several operating systems for the Internet of Things. "Keep in mind that almost all OSes these day are claiming some IoT connection, so the list is somewhat arbitrary. The contenders here fulfill most of the following properties: low memory footprint, high power efficiency, a modular and configurable communication stack, and strong support for specific wireless and sensor technologies. Some projects emphasize IoT security, and many of the non-Linux OSes focus on real-time determinism, which is sometimes a requirement in industrial IoT."
Page editor: Rebecca Sobol
Development
Dealing with automated SSH password-guessing
Just about everyone who runs a Unix server on the internet uses SSH for remote access, and almost everyone who does that will be familiar with the log footprints of automated password-guessing bots. Although decently-secure passwords do much to harden a server against such attacks, the costs of dealing with the continual stream of failed logins can be considerable. There are ways to mitigate these costs.
We've probably all seen thousands of lines like this in our system logs:
Oct 21 22:49:13 wibbly sshd[20474]: Failed password for user from 192.168.1.42 port 52498 ssh2
Generally, these are the signatures of self-propagating password-guessing bots that try to crack passwords by brute force; if they succeed, they copy themselves onto the destination system and start scanning for remote systems to password-guess from there. They also often signal home to some control organization that an account has been cracked. There are a lot of bots out there: for this article, I spun up a completely fresh VPS with one of my providers, and the first such log entries appeared 71 minutes after the machine first booted.
Let's take it for granted that all LWN readers have sensible policies regarding password complexity that are rigorously and automatically enforced on all our users, and so we're unlikely to fall victim to these bots. Nevertheless, both SSH connection setup and teardown and the business of validating a password are computationally expensive. If the internet has free rein to ask you to do this, the CPU costs in particular can be significant. I found that, until I took measures in mitigation, my sshd was constantly using about 20% of a CPU to evaluate and reject login attempts.
Change the port number
It's a really simple fix, but changing the port number away from the default of 22 can be surprisingly effective. My wife and I both run VPSes with the same provider, both of which have been up and running for several years, identical in most respects save that she runs her sshd on a non-standard port number in the mid-hundreds. In the last four complete weeks of logs, my VPS saw a total of slightly over 23,000 rejections; hers saw three.

It's not an infallible technique on its own; recently, someone found her alternate port, and tried several hundred different user/password combinations before getting bored and moving on. It can also work badly with some clients that can't easily be configured to use a non-standard port number.
But changing the port number is probably the lightest-weight mitigation technique in this article; it is a simple matter of changing a line in /etc/ssh/sshd_config (or your distribution's configuration file) from Port 22 to Port 567, for example. You will need to pick a port without a current listener to avoid clashes; netstat can be helpful for this. Moreover, you will need to add permission to your firewall rules for the new port number before you restart the service, then take out the old port 22 permission after the restart and confirmation that everything works. I advise against just changing the existing firewall permission from port 22 to the new port; it's easy to get the sequencing wrong, and lock yourself out of your remote server that way.
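Concretely, on a machine using plain iptables, the dance might look like the following hedged sketch. The port number 567 is the same illustrative value as above, the -D rule must match whatever your existing port-22 rule actually says, and the sshd_config edit (Port 22 becoming Port 567) happens between the first and second commands:

# iptables -I INPUT -p tcp --dport 567 -m state --state NEW -j ACCEPT
# service sshd restart
# iptables -D INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

Test a fresh login on the new port from another machine, and keep your existing session open, before running that final command.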
Fail2ban
Fail2ban is an elegant piece of software that watches log files for evidence of abuse of network services, then adjusts firewall rules in real time to ban access from the offending IP addresses to the service in question. It's pretty easy to find; CentOS users will find it in the EPEL repository, Ubuntu seems to have it in the standard repositories, and it shouldn't be hard to find for most other major distributions.

The project has a good page of HOWTOs, which include an excellent guide to using it on SSH. Although I disagree with banning IP addresses for a whole day — I think it's much too easy to lock oneself out of one's own systems — I found it otherwise easy to follow, and it works. On my CentOS system, I enabled the EPEL repository, did a "yum install fail2ban", and put the following into /etc/fail2ban/jail.local:
[ssh-iptables]
bantime = 600
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
maxretry = 5
logpath = /var/log/secure
After that, doing a "service fail2ban start" was enough to lock out a test client system after five failed SSH attempts.
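To confirm that the jail is doing its job, fail2ban ships a client utility that reports per-jail statistics; a hedged example (the jail name must match the section name used in jail.local):

# fail2ban-client status ssh-iptables

The output includes the number of failures seen and the list of currently banned addresses.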
iptables rate limiting
One of the shortcomings of Fail2ban is a strong IPv4 focus. The huge size of the IPv6 address space makes scanning for SSH servers much more expensive, but my servers are already seeing non-service-specific IPv6 port scans most days, and the problem will only grow. As the Internet of Things comes along, and more houses have routed IPv6, there will be a lot more SSH servers to find. It's one thing having my colocated server being password-guessed by random servers in North Korea; it is quite another having all my kitchen appliances being password-guessed by half a million badly-configured light bulbs. So I do my rate-limiting with iptables (and ip6tables). The following iptables commands log and reject the fourth and subsequent new connections from a given address in a rolling 60-second window:
iptables -A INPUT -p tcp -m state --state NEW --dport 22 -m recent \
--name sshrate --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent \
--name sshrate --rcheck --seconds 60 --hitcount 4 -j LOG \
--log-prefix "SSH REJECT: "
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent \
--name sshrate --rcheck --seconds 60 --hitcount 4 -j REJECT \
--reject-with tcp-reset
These lines need to come near the beginning of your iptables rules; specifically, before any blanket ACCEPT for any ESTABLISHED connections, and before any general permit for port 22 (or your sshd port of choice). You can use identical rules for ip6tables, and if your router is running Linux, you can add them to the FORWARD chain as well, protecting all the devices inside your network.
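The recent match also exposes its tracking table under /proc, which is handy when you are wondering why a particular client is being refused; a hedged example (older kernels use ipt_recent rather than xt_recent in the path):

# cat /proc/net/xt_recent/sshrate

Each line shows a source address along with the timestamps of its recent hits within the window.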
This is an extremely inexpensive way of rejecting over-frequent attempts to authenticate via SSH. Looking at the last complete week of logs on my colocated server, iptables refused over 153,000 connection attempts; only 9,800 attempts got through to sshd, where they were then refused for having invalid passwords. In other words, 94% of the malicious traffic was rejected quite cheaply by iptables.
Although the ban period is only 60 seconds, if the banned address keeps trying to connect during the period of the ban — as bots tend to do — the ban continually renews itself. Only when the connection attempts stop can the 60-second window clear out and allow new connection attempts to succeed. Thus, this approach tends to impact bots more than humans, which is good. On the odd occasions I'm using a tool that needs to make large numbers of frequent connections, I can always make a manual SSH connection in SSH master mode, then tell the tool to use SSH slave mode and feed it the relevant ControlPath.
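The master/slave trick relies on OpenSSH connection multiplexing: "ssh -M -S /path/to/socket host" opens the master, and subsequent clients given the same socket ride over it. A hedged sketch of a client-side ~/.ssh/config stanza that automates the same thing (the host alias and socket path are illustrative, and ControlPersist needs a reasonably recent OpenSSH client):

Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

With that in place, only the first connection to myserver counts as a new connection at the firewall; everything else reuses the established socket.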
Public-key authentication
Another simple but powerful approach is to disallow password-based authentication in favor of public-key-based authentication. Changing the PasswordAuthentication parameter from yes to no in the sshd configuration file will cause sshd to refuse attempts to authenticate via password. Your system still pays the setup and teardown cost of each SSH connection, but it doesn't have the expense of validating a password each time.

I can't quantify the utility of this approach, because my users wouldn't put up with it, but with Amazon EC2, nearly all new instances come out of the box configured to refuse password-based authentication. I suspect this has done a great deal to decrease the number of systems on the internet that are vulnerable to automated password-guessing.
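For anyone who has not yet made the switch, a hedged sketch of the usual sequence (the host name is illustrative; use -t rsa -b 4096 instead if either end is too old for ed25519 keys):

$ ssh-keygen -t ed25519
$ ssh-copy-id user@yourserver
$ ssh user@yourserver

Only once that last, key-based login works should PasswordAuthentication be set to no and sshd restarted; as always, keep an existing session open while you do it.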
Two-factor authentication
As mentioned above, my users needed the ability to authenticate via password from time to time; for example, sometimes they needed to log in from a work site where they were unwilling to leave a trusted key pair installed for a long time.

Two-factor authentication (2FA) is an excellent way to handle this; the recent rise in popularity of Google Authenticator has done much to introduce people to the idea, and to demystify it. Since Google Authenticator is simply a slightly idiosyncratic implementation of the OATH (Open Authentication) standards, much can be done with this infrastructure without coupling oneself permanently to Google. Note that, as described below, this does not affect public-key-based SSH access; that can continue without the need for a 2FA exchange. Only password-based access requires 2FA, thus completely frustrating the bots.
On my CentOS 6 system, I had to do:
# yum install pam_oath
# ln -s /usr/lib64/security/pam_oath.so /lib64/security/
Then I created the /etc/users.oath file with mode 600 and owner root:root, and populated it with lines like:
#type username pin start seed
HOTP yatest - 123a567890123b567890123c567890123d567890
where that start seed is a 40-hex-digit string of great randomness; I generated it with:
# dd if=/dev/random bs=1k count=1|sha1sum
This seed should also be fed to an OATH HOTP software token. Google Authenticator, which is one such, suffers from the idiosyncrasy that it wants its seed in base-32 encoding, rather than hex. You can convert it yourself, or use a friendlier software OATH token generator, such as Android Token. Once this is done, the token will produce a stream of seemingly-random six-digit numbers that will be exactly what your server is expecting.
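If you would rather do the conversion than install another tool, one hedged possibility, assuming the xxd and base32 utilities are available (the seed is the same placeholder used above):

# echo 123a567890123b567890123c567890123d567890 | xxd -r -p | base32

The output is the base-32 form of the same 20-byte secret, which is what Google Authenticator expects to have typed in (or encoded into a QR code).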
Then, in the file /etc/pam.d/sshd, immediately before the first line containing password-auth I added the line:
auth required pam_oath.so usersfile=/etc/users.oath window=10
and in /etc/ssh/sshd_config ensured I had the lines:
UsePAM yes
PasswordAuthentication no
ChallengeResponseAuthentication yes
Once I had restarted sshd, I found that when SSHing in, I was asked first for the OATH six-digit number from the token, secondly for my password, and then granted access.
As an aside, do not succumb to the temptation to produce more than one or two token numbers right away. The function of HOTP OATH is to produce a sequence of numbers that are completely predictable if you know the secret seed (but not predictable if you only know previous numbers in the sequence). The window=10 parameter in the /etc/pam.d/sshd entry means that the server should accept not only the next expected number in the sequence, but any of the nine numbers after that as well. If you keep pressing the button on your software token, and run through (say) the next 20 numbers, you will get further through the sequence than the server expects you to be. Once server and token get out of sync, best practice is to reset both server and token, and to change the shared secret.
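If you do want to see which codes the server will accept next, the oath-toolkit package includes a command-line generator; a hedged example using the placeholder seed from above, printing the codes for counters 0 through 9:

# oathtool --hotp -c 0 -w 9 123a567890123b567890123c567890123d567890

Comparing that list against what your token produces is a quick way to check whether the two have drifted out of sync.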
Note that you should keep a live SSH session, authenticated as root, throughout this process, so you don't accidentally lock yourself out of your own system. Better still, be within six feet of the system console when trying it for the first time.
In conclusion, if you don't like this incessant password-guessing stressing your system, you have options. Work out which one(s) best fit your needs, and give them a go.
Brief items
Development quotes of the week
Flatpak 0.6.13
Flatpak 0.6.13 has been released. Major changes include new command-line arguments for install/update/uninstall, checking and downloading of application runtime dependencies, URI support for remote-add and install --from, the ability for flatpak run to launch a runtime directly, and more.
'tsshbatch' Server Automation Tool Version 1.317 Released
'tsshbatch' 1.317 has been released. "'tsshbatch' is a server automation tool to enable you to issue commands to many servers without having to log into each one separately. When writing scripts, this overcomes the 'ssh' limitation of not being able to specify the password on the command line." This version is a major update with important bug fixes and improvements.
Valgrind-3.12.0 is available
Valgrind 3.12.0 has been released. "3.12.0 is a feature release with many improvements and the usual collection of bug fixes. This release adds support for POWER ISA 3.0, improves instruction set support on ARM32, ARM64 and MIPS, and provides support for the latest common components (kernel, gcc, glibc). There are many smaller refinements and new features. The release notes below give more details." There will be a Valgrind developer room at FOSDEM in Brussels, Belgium, on February 4, 2017. The call for participation is open until December 1.
Google Code-in 2016 now accepting organization applications
Google Code-in, a program for students ages 13-17, is seeking applications from open source projects interested in being a part of Google Code-in. "Working with young students is a special responsibility and each year we hear inspiring stories from mentors who participate. To ensure these new, young contributors have a great support system, we select organizations that have gained experience in mentoring students by previously taking part in Google Summer of Code."
Newsletters and articles
Development newsletters
- Emacs News (October 24)
- What's cooking in git.git (October 20)
- What's cooking in git.git (October 24)
- This Week in GTK (October 24)
- Libre Music Production (October)
- Perl Weekly (October 24)
- PostgreSQL Weekly News (October 23)
- Python Weekly (October 20)
- Ruby Weekly (October 20)
- This Week in Rust (October 25)
- Wikimedia Tech News (October 24)
Ranking the Web With Radical Transparency (Linux.com)
Linux.com interviews Sylvain Zimmer, founder of the Common Search project, which is an effort to create an open web search engine. "Being transparent means that you can actually understand why our top search result came first, and why the second had a lower ranking. This is why people will be able to trust us and be sure we aren't manipulating results. However for this to work, it needs to apply not only to the results themselves but to the whole organization. This is what we mean by 'radical transparency.' Being a nonprofit doesn't automatically clear us of any ulterior motives, we need to go much further. As a community, we will be able to work on the ranking algorithm collaboratively and in the open, because the code is open source and the data is publicly available. We think that this means the trust in the fairness of the results will actually grow with the size of the community."
Page editor: Rebecca Sobol
Announcements
Brief items
The Linux Foundation Technical Advisory Board election
The Linux Foundation's Technical Advisory Board provides the development community (primarily the kernel development community) with a voice in the Foundation's decision-making process. Among other things, the TAB chair holds a seat on the Foundation's board of directors. The next TAB election will be held on November 2 at the Kernel Summit in Santa Fe, NM; five TAB members (½ of the total) will be selected there. The nomination process is open until voting begins; anybody interested in serving on the TAB is encouraged to throw their hat into the ring.
New Books
Scratch Programming Playground--learn to code by making cool games
No Starch Press has released "Scratch Programming Playground" by Al Sweigart.
Calls for Presentations
CfP for FOSDEM 2017 security devroom
There will be a Security devroom at FOSDEM on February 4th or 5th, in Brussels, Belgium. The call for participation ends December 1.
CFP Deadlines: October 27, 2016 to December 26, 2016
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| October 28 | November 25–November 27 | Pycon Argentina 2016 | Bahía Blanca, Argentina |
| October 30 | February 17 | Swiss Python Summit | Rapperswil, Switzerland |
| October 31 | February 4–February 5 | FOSDEM 2017 | Brussels, Belgium |
| November 11 | November 11–November 12 | Linux Piter | St. Petersburg, Russia |
| November 11 | January 27–January 29 | DevConf.cz 2017 | Brno, Czech Republic |
| November 13 | December 10 | Mini Debian Conference Japan 2016 | Tokyo, Japan |
| November 15 | March 2–March 5 | Southern California Linux Expo | Pasadena, CA, USA |
| November 15 | March 28–March 31 | PGConf US 2017 | Jersey City, NJ, USA |
| November 18 | February 18–February 19 | PyCaribbean | Bayamón, Puerto Rico, USA |
| November 20 | December 10–December 11 | SciPy India | Bombay, India |
| November 21 | January 16 | Linux.Conf.Au 2017 Sysadmin Miniconf | Hobart, Tas, Australia |
| November 21 | January 16–January 17 | LCA Kernel Miniconf | Hobart, Australia |
| November 28 | March 25–March 26 | LibrePlanet 2017 | Cambridge, MA, USA |
| December 1 | April 3–April 6 | ‹Programming› 2017 | Brussels, Belgium |
| December 10 | February 21–February 23 | Embedded Linux Conference | Portland, OR, USA |
| December 10 | February 21–February 23 | OpenIoT Summit | Portland, OR, USA |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
LPC: Things to remember about the altitude in Santa Fe
Linux Plumbers Conference starts November 1 in Santa Fe, NM, which is at a higher altitude than many people are used to. The organizers have a few tips for people who are not used to high altitudes.
Events: October 27, 2016 to December 26, 2016
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| October 25–October 28 | OpenStack Summit | Barcelona, Spain |
| October 26–October 27 | All Things Open | Raleigh, NC, USA |
| October 27–October 28 | Rust Belt Rust | Pittsburgh, PA, USA |
| October 28–October 30 | PyCon CZ 2016 | Brno, Czech Republic |
| October 29–October 30 | PyCon.de 2016 | Munich, Germany |
| October 29–October 30 | PyCon HK 2016 | Hong Kong, Hong Kong |
| October 31 | PyCon Finland 2016 | Helsinki, Finland |
| October 31–November 1 | Linux Kernel Summit | Santa Fe, NM, USA |
| October 31–November 2 | O’Reilly Security Conference | New York, NY, USA |
| November 1–November 4 | Linux Plumbers Conference | Santa Fe, NM, USA |
| November 1–November 4 | PostgreSQL Conference Europe 2016 | Tallin, Estonia |
| November 3 | Bristech Conference 2016 | Bristol, UK |
| November 4–November 6 | FUDCon Phnom Penh | Phnom Penh, Cambodia |
| November 5 | Barcelona Perl Workshop | Barcelona, Spain |
| November 5–November 6 | OpenFest 2016 | Sofia, Bulgaria |
| November 7–November 9 | Velocity Amsterdam | Amsterdam, Netherlands |
| November 9–November 11 | O’Reilly Security Conference EU | Amsterdam, Netherlands |
| November 11–November 12 | Seattle GNU/Linux Conference | Seattle, WA, USA |
| November 11–November 12 | Linux Piter | St. Petersburg, Russia |
| November 12–November 13 | PyCon Canada 2016 | Toronto, Canada |
| November 12–November 13 | T-Dose | Eindhoven, Netherlands |
| November 12–November 13 | Mini-DebConf | Cambridge, UK |
| November 13–November 18 | The International Conference for High Performance Computing, Networking, Storage and Analysis | Salt Lake City, UT, USA |
| November 14–November 18 | Tcl/Tk Conference | Houston, TX, USA |
| November 14 | The Third Workshop on the LLVM Compiler Infrastructure in HPC | Salt Lake City, UT, USA |
| November 14–November 16 | PGConfSV 2016 | San Francisco, CA, USA |
| November 16–November 18 | ApacheCon Europe | Seville, Spain |
| November 16–November 17 | Paris Open Source Summit | Paris, France |
| November 17 | NLUUG (Fall conference) | Bunnik, The Netherlands |
| November 18–November 20 | GNU Health Conference 2016 | Las Palmas, Spain |
| November 18–November 20 | UbuCon Europe 2016 | Essen, Germany |
| November 19 | eloop 2016 | Stuttgart, Germany |
| November 21–November 22 | Velocity Beijing | Beijing, China |
| November 24 | OWASP Gothenburg Day | Gothenburg, Sweden |
| November 25–November 27 | Pycon Argentina 2016 | Bahía Blanca, Argentina |
| November 29–November 30 | 5th RISC-V Workshop | Mountain View, CA, USA |
| November 29–December 2 | Open Source Monitoring Conference | Nürnberg, Germany |
| December 3 | NoSlidesConf | Bologna, Italy |
| December 3 | London Perl Workshop | London, England |
| December 6 | CHAR(16) | New York, NY, USA |
| December 10 | Mini Debian Conference Japan 2016 | Tokyo, Japan |
| December 10–December 11 | SciPy India | Bombay, India |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
