LWN.net Weekly Edition for October 22, 2015
Finding inactive openSUSE members
Projects organize their governance in different ways; often that governance depends on a definition, or formal recognition, of a "member" of the project. Members can generally vote on the membership of boards and committees, sometimes on technical or policy questions, and on changes to the governance itself. Typically, membership is granted to those who are active in the project, but members can become less active (or completely inactive) over time. What to do about inactive members is a question that the openSUSE project has been struggling with recently.
The openSUSE Members wiki page provides information on how to become a member of the project and what the benefits are. Continued and substantial contributions to the project are the criteria for membership; the benefits include an "@opensuse.org" email address, an IRC cloak, some blogging privileges, eligibility for the board, and voting rights. There is a proposed entry on that page about how to maintain membership, but the only way listed to lose membership status is to resign or be kicked out by the board for repeated violations of the guiding principles.
Some would like to establish a way to remove inactive members from the project. It has been discussed on the opensuse-project mailing list for some time—starting back in June in this iteration—but there have been earlier discussions as well. As a result of those mid-year discussions, Jean-Daniel Dodin (also known as "jdd") proposed a rather complicated manual method to scan for project activity to try to narrow down which openSUSE members are active and which might be contacted to try to better determine their status. In response, there were suggestions of ways to better automate measurement of the activity level of members, but there were also complaints about the whole idea.
Cornelius Schumacher took exception to expending any real effort in trying to find active versus inactive members. He called it "very creepy" to scan for members' activity, which "has more potential to destroy community than to build community". One of the attributes of a volunteer community is that people can drift in and out of active participation without being removed from the community, he said.
Several followed up with support for Schumacher's statement, but others are concerned that having a large pool of voters that do not vote makes it appear that there is less support for proposals that pass than there really is. Richard Brown, who is the chair of the openSUSE Board, noted that any changes to governance or membership rules would require a vote of the members:
We don't want a situation, as we've had before, where the results are cast doubt upon due to low turnout.
But Schumacher remained unconvinced. Inactive people don't vote and, in general, aren't affected by the outcome of the votes, he said, so the number of inactive members doesn't really matter. Board member Robert Schweikert called that "a bit too simplistic"; since the inactive members could vote, they might have an undue influence given their status. In addition, without knowing how many inactive members there are, there is no way to distinguish between active members who choose not to vote and those who are inactive and didn't vote.
Schumacher thought the idea of inactive members voting was purely a theoretical concern. He reiterated his belief that it is much better to spend time and energy on the active people and delivering openSUSE to its users. But Dodin pointed out that it would be useful to know why members become inactive in case their inactivity points to problems that the project could try to address. Schumacher agreed with that point.
The project board exists to "provide guidance and support existing governance structures, but shouldn't direct or control development", Schumacher said, quoting the guiding principles. So the board does not really "influence the direction of the project", thus voting for the board is not as consequential as some would have it. Both Brown and Schweikert, who are on the board, disagreed with that, however.
Brown stated: "The Board today is involved in far more decisions and influence of the Project than the Boards when those Principles were laid out". He also noted that the boards for KDE e.V. and the GNOME Foundation are both elected from the members of those projects, that a quorum of member votes is required, and that members must meet requirements to maintain their status. Those are all things that might make sense for openSUSE as well, he said, but for now the focus should be on getting a level set on where things stand.
Schweikert also raised another concern: there is a quorum of sorts required for calling early elections for the board.
He corrected the figure to 20% in another post, but the point is still valid. At some point, the number of inactive members may reach a level where it is impossible to change the board in an early election, which certainly seems suboptimal.
The thrust of Schumacher's argument is that the project should not spend time and energy on more formal governance structures and the like, but should instead focus on delivering openSUSE Leap and Tumbleweed. Others have a somewhat different, but not entirely incompatible, view. Overall, the project has gone through some changes lately, so it is not really surprising that there might be some differences of opinion on some of the steps moving forward. The good news is that those differences have been discussed openly and without rancor—which bodes well for everything resolving amicably. So far, at least, what that resolution might be is up in the air.
Developing an inexpensive, visible-light network
Most of us are accustomed to networks that run over radio links or via electrical signals on a physical wire, but those are not the only options. At Embedded Linux Conference Europe 2015 in Dublin, Ireland, Stefan Schmid presented a new networking technology, developed at Disney Research, that operates in the visible light spectrum and uses commodity hardware. The effort is still considered a research project, but once it matures, it may open the door to a variety of new short-range networking uses that will be well-served by staying out of the increasingly crowded unlicensed radio spectrum bands.
For background, Schmid reminded the audience that there are already other optical network technologies, including fiber optics and the direct laser signaling used between some satellites. There are even IP-based network stacks using visible or infrared light, such as Li-Fi. All have advantages—like requiring no spectrum licensing and the ability for users to see when an attacker attempts to intercept or man-in-the-middle a transmission. Disney's new technique is more modest than the others in what it attempts; it uses a bit rate that is slow by wired or radio-based signaling standards and focuses on point-to-point links. On the up side, it has the advantage of using inexpensive components and it can be deployed anywhere there is a standard light socket.
Disney first became interested in visible-light networking for use in toys, where it can provide advantages over Bluetooth or WiFi for communications. LEDs are inexpensive compared to radio modules, and many manufacturers (as well as retailers) remain concerned about the health effects that radio exposure has on young children.
Called visible-light communication (which Disney abbreviates to VLC; no relation to the media player), the technique encodes information in light pulses that are too rapid for the human eye to detect—indeed, the pulses are shorter than the pulse-width modulation (PWM) cycles typically used to set the perceived brightness level of an LED, so the information can be inserted into the PWM pattern without affecting brightness. So far, two hardware options have been explored in the project. The first uses the same directly attached LED to both send and (with some clever trickery) receive signals. The second uses separate LEDs and photosensors. Initially, the team used Arduino boards to control the LEDs, and accessed the Arduino from a Linux box over USB. Later, it began working with ARM-based single-board computers (SBCs) running Linux instead to simplify deployment.
The single-LED method requires the LED to be connected to an analog I/O pin. For transmitting data, the LED is simply pulsed on or off. But most people do not realize that LEDs can be used to detect light as well, Schmid said. They have the same properties as a photosensing diode, but with lower sensitivity. To read from an LED, he explained, one charges the LED and then measures the subsequent voltage drop-off. If a photon strikes the LED, the discharge rate is accelerated. By polling regularly during the LED's "off" cycle, one can decide if a bit has been received.
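That charge-and-discharge cycle is easy to express in code. Below is a minimal, illustrative Arduino-style sketch of the receive side; the pin assignments and the polling limit are invented for the example, not taken from Disney's implementation.

    /* Hypothetical pins; a real board uses whatever the LED is wired to. */
    #define LED_ANODE    2
    #define LED_CATHODE  3

    /* Returns a rough light level: the time it takes the reverse-biased
     * LED's junction charge to bleed away. Incident photons accelerate
     * the discharge, so a small count means bright, a large count dark. */
    static int sense_light(void)
    {
        int t;

        /* Reverse-bias the LED to charge its junction capacitance. */
        pinMode(LED_ANODE, OUTPUT);
        digitalWrite(LED_ANODE, LOW);
        pinMode(LED_CATHODE, OUTPUT);
        digitalWrite(LED_CATHODE, HIGH);

        /* Float the cathode and poll until the charge has drained. */
        pinMode(LED_CATHODE, INPUT);
        for (t = 0; t < 10000 && digitalRead(LED_CATHODE) == HIGH; t++)
            ;
        return t;
    }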
Disney's VLC system is based on a physical network layer that can transmit about 1 Kbps, using differential encoding to send each bit in a separate frame. These frames are transmitted during the PWM "off" cycle, during which the LED would normally be inactive. Each VLC frame is split into seven intervals:
- A sync interval, S1. The sync intervals are used to keep multiple devices in clock synchronization. Both devices are off and measuring light during the sync intervals. If both intervals are equal to the ambient-light measurement, then the devices are in sync. If either interval is too high, then too much light is hitting during one end of the frame, so the two devices have drifted out of synchronization. Re-synchronizing the devices is handled higher up in the network stack, at the MAC layer.
- A guard interval, G, during which the LED is off, in order to avoid light leakage between slots.
- The first half of the differential bit pair, D1.
- A second guard interval.
- The second half of the bit pair, D2.
- Another guard interval.
- The final sync interval, S2.
To send a zero, a transmitter turns on the LED during D1 and turns it off during D2. To send a one, it turns the LED on in D2 and off in D1. This differential encoding allows the receiver to detect the bit regardless of what other ambient light is also present in the room; if D1 is greater than D2, the bit is 0. If the reverse is true, the bit is 1.
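The receiver's decision logic is correspondingly simple. Here is a sketch, assuming a hypothetical measure_slot() helper that returns the light level sensed during a given interval of the current frame:

    /* Hypothetical slot identifiers for the two data intervals. */
    enum slot { SLOT_D1, SLOT_D2 };

    static int decode_bit(void)
    {
        int d1 = measure_slot(SLOT_D1);   /* first data interval  */
        int d2 = measure_slot(SLOT_D2);   /* second data interval */

        /* Ambient light adds equally to both slots, so only the
         * difference matters: lit in D1 means 0, lit in D2 means 1. */
        return d2 > d1;
    }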
The last piece of the puzzle is how to account for the extra illumination produced by the VLC signaling when the LED is supposed to be in its PWM "off" cycle. The solution is to compensate by dimming the LED by the equivalent amount during the subsequent PWM "on" cycle. With the total amount of light produced thus equalized, the LED should appear just as bright, to the unaided eye, as a normal LED.
The Disney team has designed this protocol with a nominal frame size of 500 microseconds. The range is on the order of a few meters, although it depends quite a bit on the equipment used. Both the physical-layer VLC protocol and the MAC layer were implemented entirely in software, though commercial products would presumably require hardware NICs many times cheaper than Arduinos and Linux SBCs. The MAC layer reuses many of the same techniques used in WiFi (such as Request to Send / Clear to Send) for basic features like collision avoidance. Although VLC-enabled toys do not require a full IP stack, the team used the standard Linux networking stack to develop the protocols and to explore other use cases.
Schmid played video demonstrations of several toys using the single-LED-and-microcontroller implementation, such as a toy car that receives commands from a light bulb and an LED-equipped magic wand that can activate other toys with VLC signals.
But Disney believes the fundamental idea has applications beyond entertainment, Schmid said. The team's more recent work involved modifying standard socket-sized LED light bulbs to embed a Linux-based control board, so that the light bulbs can communicate directly to one another. Such light bulbs could obviously be used in Internet-of-Things "smart home" systems, Schmid said, but they could have other uses, too. For example, they could permit geolocation indoors where GPS is unusable. Each bulb can transmit an ID that can be mapped to a location in the building, with finer granularity than WiFi-based location.
The light bulb implementation necessitates using separate LEDs and photosensors, for two reasons. First, "white" LED bulbs actually use blue LEDs that excite a phosphorescent coating, and that coating interferes with the ability to measure the LED's discharge voltage. Second, the bulbs are usually coated with a diffusion material that shortens the effective range of the VLC signal significantly. For the bulb project, the team initially implemented VLC as an Ethernet device, but Schmid said they were now exploring 6LoWPAN and other ideas. He showed a demonstration video of a series of light bulbs relaying a "turn off" message as one might use in a smart-home system.
In response to the first audience question, Schmid said that he did not expect Disney to release its code implementation as open-source software, since it is an internal research project. That elicited quite a bit of grumbling from the attendees in the rather packed room. But Schmid added that Disney is interested in turning the VLC protocol into an ISO or IEEE standard if there is sufficient interest, and pointed audience members to the VLC project site, which includes all of the group's published papers and extensive documentation.
Hearing that no code was available put a distinctly down note on the end of a presentation that, up until that point, had engaged the ELCE crowd. On the other hand, if the scheme—or something like it—becomes an open standard, there may be plenty of Linux and free-software developers eager to work with it.
[The author would like to thank the Linux Foundation for travel assistance to attend Embedded Linux Conference Europe.]
3D video and device mediation with GStreamer
When GStreamer 1.6 was released in September, the list of new features was lengthy enough that it could be a bit overwhelming at first. Such is an unfortunate side effect of a busy project coupled with a lengthy development cycle. Fortunately, the 2015 GStreamer Conference provided an opportunity to hear about several of the new additions in detail. Among the key features highlighted at the event are 3D video support and a GStreamer-based service to mediate access to video hardware on desktop Linux systems.
Entering the third dimension
Jan Schmidt of Centricular presented a session about the new stereoscopic 3D support in GStreamer 1.6. The term "stereoscopic," he said, encompasses any 3D encoding that sends separate signals to each eye and relies on the user's brain to interpret the depth information. That leaves out exotic techniques like volumetric displays, but it still includes a wide array of ways that the two video signals can be arranged in the container file.
There could be a single video signal that is simply divided in half, so that left and right images are in every frame; this is called "frame-packed" video. Or the stream could alternate left and right images with every frame, which is called "frame-by-frame" video. There could also be two separate video streams—which may not be as simple as it sounds. Schmidt noted that 3D TV broadcasts often use an MPEG-2 stream for one eye and an H.264 stream for the other. Finally, so-called "multi-view" video also needs to be supported. This is a scheme that, like 3D, sends two video signals together—but multi-view streams are not meant to be combined; they contain distinct streams such as alternate camera angles.
GStreamer 1.6 supports all of the 3D and multi-view video modes in a single API, which handles 3D input, output, and format conversion. That means it can separate streams for playback on 3D-capable display hardware, combine two video streams into a 3D format, and convert content from one format to another. Schmidt demonstrated this by converting 3D video found on YouTube between a variety of formats, and by converting a short homemade video captured with two webcams into a stereoscopic 3D stream.
GStreamer does its 3D processing using OpenGL, so it is fast on modern hardware. There are three new elements provided: gstglviewconvert rewrites content between the formats, gstglstereoview splits the two signals into separate streams, and gstglstereomix combines two input streams into a single 3D stream. For display purposes, 3D support was also added to the existing gstglimagesink element. In response to an audience question, Schmidt said the overhead of doing 3D conversion was negligible: one extra copy is performed at the OpenGL level, which is not noticeable.
Most of the video processing involved is backward-compatible with existing GStreamer video pipelines (although a filter not intended for 3D streams may not have the desired effect). The metadata needed to handle the 3D stream—such as which arrangement (left/right, top/bottom, interleaved, etc.) is used in a frame-packed video—is provided in capabilities, Schmidt said. GStreamer's most-used encoder, decoder, and multiplexing elements are already 3D-aware; most other elements just need to pass the capabilities through unaltered for a pipeline to work correctly. And one of the supported output formats is red-green anaglyph format, which may be the easiest for users to test since the equipment needed (i.e., plastic 3D glasses) is cheap.
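As a rough illustration of how little application code a conversion can take, here is a sketch that requests a different multiview layout via downstream caps. The element and caps strings follow GStreamer 1.6's multiview additions, but the file name is invented and the exact strings should be treated as assumptions rather than verified API:

    /* Illustrative sketch: decode a 3D file and display it through a GL
     * sink, asking the view-conversion element for a top-bottom layout. */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *pipeline;

        gst_init(&argc, &argv);
        pipeline = gst_parse_launch(
            "filesrc location=clip-3d.mkv ! decodebin ! glupload "
            "! glviewconvert ! video/x-raw(memory:GLMemory),"
            "multiview-mode=top-bottom ! glimagesink", NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
    }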
Multi-view support is not as well-developed as 3D support, he said; it works fine for two-stream multi-view, but there are few test cases to work with. The technique has some interesting possibilities, he added, such as the potential for encoded multi-view streams to share inter-frame prediction data, but so far there is not much work in that area.
Don't call it PulseVideo
GStreamer founder Wim Taymans, now working at Red Hat, introduced his work on Pinos, a new Linux system service designed to mediate and multiplex access to Video4Linux2 (V4L2) hardware. The concept is akin to what PulseAudio does for sound cards, he said, although the developers chose to avoid the name "PulseVideo" for the new project since it might incorrectly lead to users assuming there was a connection between the projects.
The initial planning for Pinos began in 2014, when developer William Manley needed a way to share access to V4L2 hardware between a GStreamer testing framework and the application being tested. Around the same time, the GNOME app-sandboxing project was exploring a way to mediate access to V4L2 devices (specifically, webcams) from sandboxed apps. The ideas were combined, and the first implementation was written by Taymans and several other GStreamer and GNOME developers in April 2015.
Pinos runs as a daemon and uses D-Bus to communicate with client applications. Using D-Bus, the clients can request access to camera hardware, negotiate the video format they need, and start or stop streams. GStreamer provides the media transport. Initially, Taymans said, they tried using sockets to transfer the video frames themselves, but that proved too slow to be useful. Ultimately, they settled on exchanging the media with file descriptors, since GStreamer, V4L2 hardware, and OpenGL could all already use file descriptors. A socket is still used to send each client its file descriptor, as well as timestamps and other metadata.
The implementation uses several new elements. The most important are pinossrc and pinossink, which capture and send Pinos video, respectively; gstmultisocketsink, which is used by the daemon to pass data to clients; and gstpinospay, which converts a video stream into the Pinos format. Taymans said he tried to make the client-side API as simple as possible, then rewrote it to be even simpler. A client only needs to send a connection request to the Pinos daemon, wait to receive a file descriptor in return, then open a file-descriptor source element with the descriptor it was handed. At that point, the client can send the start command and begin reading video frames from the file descriptor, and send the pause or stop commands as needed. Frame rates, supported formats, and other details can be negotiated with the V4L2 device through the daemon.
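Since pinossrc behaves like any other GStreamer source, a minimal consumer can be very short. The pipeline below is an illustrative guess at typical usage, not code from the Pinos project:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *pipeline;

        gst_init(&argc, &argv);

        /* pinossrc hides the D-Bus handshake: it requests a stream from
         * the daemon, receives the file descriptor, and emits buffers. */
        pipeline = gst_parse_launch(
            "pinossrc ! videoconvert ! autovideosink", NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
    }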
Basic camera access is already working; as a test case, the desktop webcam application Cheese was rewritten to use Pinos, and the new elements worked "out of the box." The Pinos branch is expected to be the default in the next Cheese release. At that point, Pinos will need to be packaged and shipped by distributions. The sandboxed-application use case, however, still requires more work, since the security policy needed by the sandbox has not been defined yet. It has also not yet been decided how best to handle microphone access—which may fall under Pinos's purview because many webcams have built-in microphones. And there are other ideas still worth exploring, Taymans said, such as allowing Pinos clients to send video as well as receive it. He speculated that the service could also be used to take screenshots of a Wayland desktop, which is a feature that has been tricky to handle in the Wayland security model.
Looking even further out, Taymans noted that because GStreamer handles audio and video, Pinos could even replace PulseAudio for many application use cases. It may make sense, after all, to only worry about managing one connection for both the audio and video. He quickly added, however, that this concept did not mean that Pinos was going to replace PulseAudio as a standard system component.
Pinos support and stereoscopic 3D support are both available in GStreamer 1.6. In both cases, it may still be some time before the new features are accessible to end users. Taymans noted at the end of his talk that packaging Pinos for Fedora was on the short list of to-do items. Experimenting with 3D video requires 3D content and hardware, which can be pricey and hard to locate. But, as Schmidt demonstrated, GStreamer's ability to combine two camera feeds into a single 3D video is easy to use—perhaps easy enough that some users will begin working with it as soon as they install GStreamer 1.6.
[The author would like to thank the Linux Foundation for travel assistance to attend the GStreamer Conference.]
Security
Looking at a few recent kernel security holes
The Linux kernel is the source of far more CVE numbers than any other component in the system; even wireshark doesn't really come close. To an extent, that is one of the hazards of kernel programming: errors that would simply be bugs in user space become vulnerabilities in the kernel realm. Still, there is always room to wonder if the kernel community could be doing better than it is in this regard. One way to try to answer such a question is to look at what types of vulnerabilities are being discovered to see what patterns emerge. Thus, this brief survey, which looks at a few recent issues.
Buffer overflows and more
CVE-2015-5156 is, at its core, a buffer overflow in the virtio network driver subsystem. This driver sets the NETIF_F_FRAGLIST flag on its devices, indicating that it can handle packets that have been split into multiple fragments. When it gets an actual packet, it calls skb_to_sgvec() to turn that list of fragments into a scatter/gather I/O list. Unfortunately, the size of the scatterlist array it allocates for the fragment list is insufficient; in some circumstances, there can be more fragments than can fit into the scatter/gather list. The result is that skb_to_sgvec() writes beyond the end of the list, corrupting a random range of memory.
The problem was "fixed" by removing the NETIF_F_FRAGLIST flag. As a minimal fix for stable kernels, this change probably makes sense. But one could argue that fixing it properly would involve either (1) sizing the scatterlist array properly in virtio, or, better, (2) passing the length of the list to skb_to_sgvec() so it cannot be overrun. Without that latter fix, skb_to_sgvec() behaves much like strcpy(), and this type of overrun could easily happen again.
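A sketch of what the second option might look like: a helper that is told the capacity of the destination array and fails rather than overruns it. The function and its parameters are invented for illustration; this is not the actual kernel patch:

    #include <linux/scatterlist.h>
    #include <linux/errno.h>

    /* Hypothetical bounds-aware variant: the caller passes max_ents,
     * the size of the sg[] array it allocated. */
    static int fill_sgvec_bounded(struct scatterlist *sg, int max_ents,
                                  void **frags, unsigned int *lens,
                                  int nfrags)
    {
        int i;

        if (nfrags > max_ents)
            return -EMSGSIZE;   /* refuse, instead of corrupting memory */

        for (i = 0; i < nfrags; i++)
            sg_set_buf(&sg[i], frags[i], lens[i]);
        return nfrags;
    }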
CVE-2015-2925 is a vulnerability that allows a process to escape from a mount namespace if it can create a bind mount within that namespace. In practice, it means that processes can get out of a container and access the entire host-system filesystem. It was reported in April and was the subject of a long series of discussions. The proposed fixes were complex (to say the least) and ran into some opposition. In the end, a simpler fix was merged for 4.3.
This bug came about because nobody thought about the effects that a rename() call outside of a bind mount might have on processes whose current working directory lies within that mount. In short, a process following ".." out of a directory is normally stopped at the root of the filesystem it is in, but, if a directory can be moved out of a bind mount, a process within that directory can move up without ever encountering that root; it will thus never be stopped. Intersections of security domains will often be fraught with this kind of problem. The issue is fixed, but it is hard to believe that there won't be others like it.
CVE-2015-5257 is a null pointer dereference in the USB WhiteHEAT driver. These bugs can be used to cause a kernel oops; in some cases they can be exploited for privilege escalation, though most distributions should be configured to defeat such exploits. The source of the problem here is clear: the driver trusted the hardware to behave as expected. If somebody shows up with a purpose-built USB device, they can trigger the bug.
This particular vulnerability has few people worried. But the vulnerability of the kernel to malicious hardware in general is worrisome indeed. Such hardware is increasingly easy to make, and it can often create conditions that developers have never thought about or tested for. We will almost certainly see more vulnerabilities of this nature.
Initialization failures of various types
CVE-2015-7613 is a failure to properly initialize memory. In particular, the user and group IDs associated with a System V shared-memory object are not set before the object is exposed to the wider world, meaning that authentication checks can be done against random data. At a minimum, this bug can be exploited to gain access to shared-memory segments that should be inaccessible. But, as the Red Hat bugzilla entry notes: "It is almost certain that this vulnerability can be used to gain arbitrary code execution in the kernel."
The good news here is that, in KASan, we have a tool that can detect use of uninitialized memory in the kernel. Indeed, it was KASan that flagged this particular problem. The not-so-good news is that, as Linus Torvalds noted in the changelog to the fix, this problem had already been found and fixed in the System V semaphore code (for 3.18). It would have been good to fix all three types of System V IPC (message queues are vulnerable too), but, as Linus notes, "we clearly forgot about msg and shm". The lessons seem clear: tools are invaluable, but, as Al Viro once said: "Bugs are like mushrooms - found one, look around for more."
Initialization-related race conditions are fairly common; another example can be seen in CVE-2015-5283. In a modular system, the module for the SCTP network protocol will not be loaded until a user requests an SCTP socket. The initialization code in the SCTP module registers its installed protocols before it is fully initialized; that opens a window within which another process can attempt to open sockets while the module is in a half-baked state. Good things rarely come from such situations.
Almost any kernel module, be it a driver, a network protocol, or something else, must generally initialize a long list of resources and make them available to the rest of the system. It is easy to create a situation where some resources become visible before the module is fully prepared to manage them. An interrupt handler may be registered before the data structures the handler needs are ready. A sysfs file could show up before the driver is ready. Or an SCTP protocol can appear before the module is ready to handle it. These problems manifest themselves as difficult-to-find race conditions; they are hard to test for. So they will probably continue to pop up.
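The defensive pattern is simple to state, even if it is easy to forget: do all private setup first, and make registration with the outside world the last, failure-checked step. A sketch, with invented helper names and a deliberately skeletal protocol structure:

    #include <linux/module.h>
    #include <net/sock.h>

    static struct proto my_proto;   /* hypothetical protocol */

    static int __init my_proto_module_init(void)
    {
        int err;

        /* 1: finish all private setup first (hypothetical helper). */
        err = setup_internal_state();
        if (err)
            return err;

        /* 2: registration comes last; only now can anyone reach us. */
        err = proto_register(&my_proto, 1);
        if (err)
            teardown_internal_state();   /* hypothetical cleanup */
        return err;
    }
    module_init(my_proto_module_init);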
CVE-2015-5697 is an information-leak vulnerability. The MD (RAID) system implements an ioctl() operation called GET_BITMAP_FILE, which returns the name of the external bitmap file associated with a specific device. Should that device not actually have an external bitmap file, though, the ioctl() will copy 4096 bytes of uninitialized kernel memory to user space after having set just the first byte to zero. The remaining 4095 bytes could contain pretty much anything. An attacker could scan this data for specific patterns and possibly obtain kernel addresses or private data.
The fix is straightforward enough: allocate the space for the file name with kzalloc() instead of kmalloc(). But, once again, this is an easy sort of error to make; it is hard to ensure that all data copied to user space is initialized in all paths through the code. There has been a push over the years to use functions like kzalloc() everywhere, but there is resistance to doing so, especially in hot-path code where the developer is certain that the memory will be properly initialized. In any case, the GET_BITMAP_FILE ioctl() is not one of those hot paths, so there is no reason not to be sure in this case.
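The shape of the fix, shown here as a simplified sketch rather than the actual md code:

    /* With kzalloc(), any bytes the code never writes are zeros, so
     * nothing stale can be copied out to user space. */
    static int get_name(char __user *arg)
    {
        int err = 0;
        char *name = kzalloc(4096, GFP_KERNEL);   /* was kmalloc() */

        if (!name)
            return -ENOMEM;
        /* ...fill in the file name here, when a bitmap file exists... */
        if (copy_to_user(arg, name, 4096))
            err = -EFAULT;
        kfree(name);
        return err;
    }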
These examples were all taken from vulnerabilities that were fixed in distributor updates over the last month or so. Needless to say, it is not an exhaustive list. But it does show a few of the numerous ways in which security-related bugs can be introduced into the kernel. Kernel programming requires great care, an extreme distrust of the environment in which the code is running, and, whenever possible, good testing tools. The kernel community has gotten better with all of these over the years, but there is clearly a lot of ground to be covered still.
Brief items
Security quotes of the week
Would this be worth it for an intelligence agency? Since a handful of primes are so widely reused, the payoff, in terms of connections they could decrypt, would be enormous. Breaking a single, common 1024-bit prime would allow NSA to passively decrypt connections to two-thirds of VPNs and a quarter of all SSH servers globally. Breaking a second 1024-bit prime would allow passive eavesdropping on connections to nearly 20% of the top million HTTPS websites. In other words, a one-time investment in massive computation would make it possible to eavesdrop on trillions of encrypted connections.
How a few legitimate app developers threaten the entire Android userbase (Ars Technica)
Ars Technica reports that a handful of app distributors are putting many Android users at risk by bundling root exploits with their wares. "It took just one month of part-time work for the computer scientists to reverse engineer 167 exploits from a single provider so they could be reused by any app of their choosing. Ultimately, the researchers concluded that the providers, by providing a wide array of highly customized exploits that are easy to reverse engineer and hard to detect, are putting the entire Android user base at increased risk."
New vulnerabilities
click: privilege escalation
Package(s): click
CVE #(s): (none listed)
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Ubuntu advisory:

It was discovered that click did not properly perform input sanitization during click package installation. If a user were tricked into installing a crafted click package, a remote attacker could exploit this to escalate privileges by tricking click into installing lenient security policy for the installed application.
docker-engine: two vulnerabilities
Package(s): docker-engine
CVE #(s): CVE-2014-8178 CVE-2014-8179
Created: October 15, 2015
Updated: February 8, 2016

Description: From the Oracle advisory:

[1.8.3]
kernel: code execution
Package(s): kernel
CVE #(s): CVE-2015-7312
Created: October 20, 2015
Updated: October 21, 2015

Description: From the Ubuntu advisory:

Ben Hutchings discovered that the Advanced Union Filesystem (aufs) for the Linux kernel did not correctly handle references of memory mapped files from an aufs mount. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code with administrative privileges.
lxdm: two vulnerabilities
Package(s): lxdm
CVE #(s): (none listed)
Created: October 19, 2015
Updated: October 26, 2015

Description: From the Red Hat bugzilla:

1268900: X server in F22 allows X clients to connect even when they have no valid MIT-MAGIC authentication cookie. Connections are accepted from different users (i.e. are not related to 'xhost +si:localuser:`id -un`'). I could reproduce this with both X session started from *dm (lxdm in my case) as well as X server started manually from the text console. Besides Xorg, I quickly tested with Xephyr and Xnest - they also seem affected in the same way.

846086: lxdm leaks open file descriptors to user sessions. Looking at the processes started from the xfce4 session menus, lot of them have /var/log/lxdm.log opened as fd 1, allowing user to write to the file that is root:root 640.
mbedtls: code execution
Package(s): mbedtls
CVE #(s): CVE-2015-5291
Created: October 15, 2015
Updated: February 8, 2016

Description: From the Arch Linux advisory:

When the client creates its ClientHello message, due to insufficient bounds checking it can overflow the heap-based buffer containing the message while writing some extensions. Two extensions in particular could be used by a remote attacker to trigger the overflow: the session ticket extension and the server name indication (SNI) extension.

Starting with PolarSSL 1.3.0, which added support for session tickets, any server the client connects to can send an overlong session ticket which will cause a buffer overflow if and when the client attempts to resume the connection with the server. Clients that disabled session tickets or never attempt to reconnect to a server using a saved session are not vulnerable to this attack vector.

Starting with PolarSSL 1.0.0, this overflow could also be triggered by an attacker convincing a client to use an overlong hostname for the SNI extension. The hostname needs to be almost as long as SSL_MAX_CONTENT_LEN, which is 16KB by default, but could be smaller if a custom configuration is used. Clients that do not accept hostnames from untrusted parties are not vulnerable to this attack vector.

A malicious server could cause a denial of service or execute arbitrary code on a vulnerable client by sending an overlong session ticket. An attacker could cause a denial of service or execute arbitrary code on a vulnerable client by convincing it to connect to an overlong hostname.
miniupnpc: code execution
Package(s): miniupnpc
CVE #(s): CVE-2015-6031
Created: October 19, 2015
Updated: November 23, 2015

Description: From the Arch Linux advisory:

An exploitable buffer overflow vulnerability exists in the XML parser functionality of the MiniUPnP library. A specially crafted XML response can lead to a buffer overflow on the stack resulting in remote code execution. An attacker can set up a server on the local network to trigger this vulnerability. A remote attacker is able to create a specially crafted XML response on a server set up on the local network to execute arbitrary code on the client.
mozilla: information disclosure
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2015-7184
Created: October 16, 2015
Updated: October 26, 2015

Description: From the advisory:

Security researcher Abdulrahman Alqabandi reported that the fetch() API did not correctly implement the Cross-Origin Resource Sharing (CORS) specification, allowing a malicious page to access private data from other origins. Mozilla developer Ben Kelly independently reported the same issue. A remote attacker can bypass the cross-origin resource sharing policy to access sensitive information.
openstack-glance: two vulnerabilities
Package(s): openstack-glance
CVE #(s): CVE-2015-5251 CVE-2015-5286
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Red Hat advisory:

A flaw was discovered in the OpenStack Image service where a tenant could manipulate the status of their images by submitting an HTTP PUT request together with an 'x-image-meta-status' header. A malicious tenant could exploit this flaw to reactivate disabled images, bypass storage quotas, and in some cases replace image contents (where they have owner access). Setups using the Image service's v1 API could allow the illegal modification of image status. Additionally, setups which also use the v2 API could allow a subsequent re-upload of image contents. (CVE-2015-5251)

A race-condition flaw was discovered in the OpenStack Image service. When images in the upload state were deleted using a token close to expiration, untracked image data could accumulate in the back end. Because untracked data does not count towards the storage quota, an attacker could use this flaw to cause a denial of service through resource exhaustion. (CVE-2015-5286)
openstack-neutron: ACL bypass
Package(s): openstack-neutron
CVE #(s): CVE-2015-5240
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Red Hat advisory:

A race-condition flaw leading to ACL bypass was discovered in OpenStack Networking. An authenticated user could change the owner of a port after it was created but before firewall rules were applied, thus preventing firewall control checks from occurring. All OpenStack Networking deployments that used either the ML2 plug-in or a plug-in that relied on the security groups AMQP API were affected.
openstack-nova: denial of service
Package(s): openstack-nova
CVE #(s): CVE-2015-3280
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Red Hat advisory:

A flaw was found in the way OpenStack Compute handled the resize state. If an authenticated user deleted an instance while it was in the resize state, it could cause the original instance to not be deleted from the compute node it was running on, allowing the user to cause a denial of service.
openstack-swift: information disclosure
Package(s): openstack-swift
CVE #(s): CVE-2015-5223
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Red Hat advisory:

A flaw was found in the OpenStack Object storage service (swift) tempurls. An attacker in possession of a tempurl key with PUT permissions may be able to gain read access to other objects in the same project.
owncloud: multiple vulnerabilities
Package(s): owncloud
CVE #(s): CVE-2015-4716 CVE-2015-5953 CVE-2015-5954 CVE-2015-7699
Created: October 19, 2015
Updated: October 21, 2015

Description: From the Debian advisory:

Multiple vulnerabilities were discovered in ownCloud, a cloud storage web service for files, music, contacts, calendars and many more. These flaws may lead to the execution of arbitrary code, authorization bypass, information disclosure, cross-site scripting or denial of service.
oxide-qt: code execution
Package(s): oxide-qt v8
CVE #(s): CVE-2015-7834
Created: October 21, 2015
Updated: October 21, 2015

Description: From the Ubuntu advisory:

Multiple security issues were discovered in V8. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to read uninitialized memory, cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process.
postgresql: two vulnerabilities
Package(s): postgresql-9.1, postgresql-9.3, postgresql-9.4
CVE #(s): CVE-2015-5288 CVE-2015-5289
Created: October 16, 2015
Updated: November 23, 2015

Description: From the Ubuntu advisory:

Josh Kupershmidt discovered the pgCrypto extension could expose several bytes of server memory if the crypt() function was provided a too-short salt. An attacker could use this flaw to read private data. (CVE-2015-5288)

Oskari Saarenmaa discovered that the json and jsonb handlers could exhaust available stack space. An attacker could use this flaw to perform a denial of service attack. This issue only affected Ubuntu 14.04 LTS and Ubuntu 15.04. (CVE-2015-5289)
sssd: memory leak
Package(s): sssd
CVE #(s): CVE-2015-5292
Created: October 20, 2015
Updated: December 22, 2015

Description: From the Red Hat CVE entry:

It was found that SSSD's Privilege Attribute Certificate (PAC) responder plug-in would leak a small amount of memory on each authentication request. A remote attacker could potentially use this flaw to exhaust all available memory on the system by making repeated requests to a Kerberized daemon application configured to authenticate using the PAC responder plug-in.
wireshark: denial of service
Package(s): wireshark
CVE #(s): CVE-2015-7830
Created: October 16, 2015
Updated: October 21, 2015

Description: From the Mageia advisory:

In Wireshark before 1.12.8, the pcapng file parser could crash while copying an interface filter. It may be possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.3-rc6, released on October 18. "Things continue to be calm, and in fact have gotten progressively calmer. All of which makes me really happy, although my suspicious nature looks for things to blame. Are people just on their best behavior because the Kernel Summit is imminent, and everybody is putting their best foot forward?"
Stable updates: none have been released since October 3. The large 3.10.91, 3.14.55, 4.1.11, and 4.2.4 updates are in the review process as of this writing; they can be expected at any time.
Quotes of the week
Linux-next takes a break
Linux-next maintainer Stephen Rothwell has announced that there will be no new linux-next releases until November 2. As a result, code added to subsystem maintainer trees after October 22 will not show up in linux-next before the (probable) opening of the 4.4 merge window. There are, he said, about 8500 commits in linux-next now, so he expects there is a fair amount of 4.4 work that hasn't shown up there yet.
Kernel development news
The return of simple wait queues
A "wait queue" in the kernel is a data structure that allows one or more processes to wait (sleep) until something of interest happens. They are used throughout the kernel to wait for available memory, I/O completion, message arrival, and many other things. In the early days of Linux, a wait queue was a simple list of waiting processes, but various scalability problems (including the thundering herd problem highlighted by the infamous Mindcraft report in 1999) have led to the addition of a fair amount of complexity since then. The simple wait queue patch set is an attempt to push the pendulum back in the other direction.Simple wait queues are not new; we looked at them in 2013. The API has not really changed since then, so that discussion will not be repeated here. For those who don't want to go back to the previous article, the executive summary is that simple wait queues provide an interface quite similar to that of regular wait queues, but with a lot of the bells and whistles removed. Said bells (or perhaps they are whistles) include exclusive wakeups (an important feature motivated by the aforementioned Mindcraft report), "killable" waits, high-resolution timeouts, and more.
There is value in simplicity, of course, and the memory saved by switching to a simple wait queue is welcome, even if it's small. But that, alone, would not be justification for the addition of another wait-queue mechanism to the kernel. Adding another low-level scheduling primitive like this increases the complexity of the kernel as a whole and makes ongoing maintenance of the scheduler harder. It is unlikely to happen without a strong and convincing argument in its favor.
In this case, the drive for simple wait queues is (as is the code itself) coming from the realtime project. The realtime developers seek determinism at all times, and, as it turns out, current mainline wait queues get in the way.
The most problematic aspect of ordinary wait queues appears to be the ability to add custom wakeup callbacks. By default, if one of the various wake_up() functions is called to wake processes sleeping on a wait queue, the kernel will call default_wake_function(), which simply wakes these waiting processes. But there is a mechanism provided to allow specialized users to change the wake-up behavior of wait queues:
    typedef int (*wait_queue_func_t)(wait_queue_t *wait, unsigned mode,
                                     int flags, void *key);

    void init_waitqueue_func_entry(wait_queue_t *q, wait_queue_func_t func);
This feature is only used in a handful of places in the kernel, but they are important uses. The I/O multiplexing system calls (poll(), select(), and epoll_wait()) use it to turn specific device events into poll events for waiting processes. The userfaultfd() code (added for the 4.3 release) has a wake function that only does a wakeup for events in the address range of interest. The exit() code similarly uses a custom wake function to only wake processes that have an interest in the exiting process. And so on. It is a feature that cannot be removed.
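As an illustration of the mechanism, here is a sketch of a custom wake function modeled loosely on the userfaultfd case; the range structure and its fields are invented for the example:

    /* A waiter that only wants wakeups for events in [start, end). */
    struct range_wait {
        wait_queue_t wq;
        unsigned long start, end;
    };

    static int range_wake_function(wait_queue_t *wait, unsigned mode,
                                   int flags, void *key)
    {
        struct range_wait *rw = container_of(wait, struct range_wait, wq);
        unsigned long addr = (unsigned long)key;

        if (addr < rw->start || addr >= rw->end)
            return 0;    /* not our event; leave this process asleep */
        return autoremove_wake_function(wait, mode, flags, key);
    }

    /* Installed with init_waitqueue_func_entry(&rw->wq, range_wake_function); */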
The problem with this feature, from the realtime developers' point of view, is that they have no control over how long the custom wake function will take to run. This feature thus makes it harder for them to provide response-time guarantees. Beyond that, these callbacks require that the wait-queue structure be protected by an ordinary spinlock, which is a sleeping lock in the realtime tree. That, too, gets in the way in the realtime world; it prevents, for example, the use of wake_up() in hard (as opposed to threaded) interrupt handlers.
Simple wait queues dispense with custom callbacks and many other wait-queue features, allowing the entire data structure to be reduced to:
    struct swait_queue_head {
        raw_spinlock_t lock;
        struct list_head task_list;
    };

    struct swait_queue {
        struct task_struct *task;
        struct list_head task_list;
    };
The swait_queue_head structure represents the wait queue as a whole, while struct swait_queue represents a process waiting in the queue. Waiting is just a matter of adding a new swait_queue entry to the list, and wakeups are a simple traversal of that list. Regular wait queues, instead, may have to search the list for specific processes to wake. The lack of custom wakeup callbacks means that the time required to wake any individual process on the list is known (and short), so a raw spinlock can be used to protect the whole thing.
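Usage looks much like ordinary wait queues. A sketch, using the names from the patch set (which could still change before merging):

    static DECLARE_SWAIT_QUEUE_HEAD(my_queue);
    static bool data_ready;

    /* Sleeping side: wait until data_ready becomes true. */
    static void consumer(void)
    {
        swait_event(my_queue, data_ready);
    }

    /* Waking side: because the queue uses a raw spinlock and wakeup
     * times are bounded, this is safe even from hard interrupt context
     * in the realtime tree. */
    static void producer(void)
    {
        data_ready = true;
        swake_up(&my_queue);
    }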
This patch set has been posted by Daniel Wagner, who has taken on the challenge of getting it into the mainline, but the core wait-queue work was done by Peter Zijlstra. It has seen a few revisions in the last few months, but comments appear to be slowing down. One never knows with such things (the patches looked mostly ready in 2013 as well), but it seems like there is not much keeping this work from going into the 4.4 kernel.
Other approaches to random number scalability
Back in late September, we looked at a patch to improve the scalability of random number generation on Linux systems—large NUMA systems, in particular. While the proposed change solved the immediate scalability problem, there were some downsides to that approach, in terms of both complexity and security. Some more recent discussion has come up with other possibilities for solving the problem.
The original idea came from Andi Kleen; it changed the kernel's non-blocking random number pool into a set of pools, one per NUMA node. That would prevent a spinlock on a single pool from becoming a bottleneck. But it also made the kernel's random number subsystem more complex. In addition, it spread the available entropy over all of the pools, effectively dividing the amount available to users on any given node by the number of pools.
But, as George Spelvin noted, the entropy in a pool is "not located in any particular bit position", but is distributed throughout the pool—entropy is a "holographic property of the pool", as he put it. That means that multiple readers do not need to be serialized by a spinlock as long as each gets a unique salt value that ensures that the random numbers produced are different. Spelvin suggested using the CPU ID for the salt; each reader hashes the salt in with the pool to provide a unique random number even if the pool is in the same state for each read.
Spelvin provided a patch using that approach along with his comments. Random number subsystem maintainer Ted Ts'o agreed with Spelvin about how the entropy is distributed, but had some different ideas on how to handle mixing the random numbers generated back into the pool. He also provided a patch and asked Kleen to benchmark his approach. "I really hope it will be good enough, since besides using less memory, there are security advantages in not spreading the entropy across N pools."
Either approach would eliminate the lock contention (and cache-line bouncing of the lock), but there still may be performance penalties for sharing the pool among multiple cores due to cache coherency. The non-blocking pool changes frequently, either as data gets mixed in from the input pool (which is shared with the blocking pool) or as data that is read from the pool gets mixed back in to make it harder to predict its state. The cache lines of the pool will be bounced around between the cores, which may well be less than desirable.
As it turned out, when Kleen ran his micro-benchmark, both patch sets performed poorly in comparison to the multi-pool approach. In fact, for reasons unknown, Spelvin's was worse than the existing implementation.
Meanwhile, while the benchmarking was taking place, Ts'o pointed out that it may just make sense to recognize when a process is "abusing" getrandom() or /dev/urandom and to switch it to using its own cryptographic-strength random number generator (CSRNG or CRNG) seeded from the non-blocking pool. That way, uncommon—or, more likely, extremely rare—workloads won't force changes to the core of the Linux random number generator. Ts'o is hoping to not add any more complexity into the random subsystem:
The CRNG would be initialized from the non-blocking pool, and is reseeded after, say, 2**24 cranks or five minutes. It's essentially an OpenBSD-style arc4random in the kernel.
Spelvin was concerned that the CSRNG solution would make long-running servers susceptible to backtracking: using the current state of the generator to determine random numbers that have been produced earlier. If backtracking protection can be discarded, there can be even simpler solutions, he said, including: "just have *one* key for the kernel, reseeded more often, and a per-thread nonce and stream position." But Ts'o said that anti-backtracking was not being completely abandoned, just relaxed: "We are discarding backtracking protection between successive reads from a single process, and even there we would be reseeding every five minutes (and this could be tuned), so there is *some* anti-backtracking protection."
Furthermore, he suggested that perhaps real abusers could get their own CSRNG output, while non-abusers would still get output from the non-blocking pool.
Spelvin had suggested adding another random "device" (perhaps /dev/frandom) to provide the output of a CSRNG directly to user space, because he was concerned about changing the semantics of /dev/urandom and getrandom() by introducing the possibility of backtracking. But he agreed that changing the behavior for frequent heavy readers/callers would not change the semantics since the random(4) man page explicitly warns against that kind of usage.
Spelvin posted another patch set that pursues his ideas on improving the scalability of generating random numbers. It focuses on reducing the lock contention when the output of the pool is mixed back into the pool to thwart backtracking (known as a mixback operation). If there are multiple concurrent readers for the non-blocking pool, Spelvin's patch set ensures that one of them causes a mixback operation; others that come along while a mixback lock is held simply write their data into a global mixback buffer, which then gets incorporated into the mixback operation that is done by the lock holder when releasing the lock.
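In rough outline, the pattern looks like this sketch (the helper names are invented, and the real patches differ in detail, such as folding the buffer in at lock release):

    static DEFINE_SPINLOCK(mixback_lock);
    static u8 mixback_buf[64];

    /* Called by each reader with the random bytes it just produced. */
    static void mixback(const u8 *out, size_t len)
    {
        if (spin_trylock(&mixback_lock)) {
            /* This reader does the real work, folding in both its own
             * data and whatever other readers deposited meanwhile. */
            mix_into_pool(out, len);                  /* hypothetical */
            mix_into_pool(mixback_buf, sizeof(mixback_buf));
            spin_unlock(&mixback_lock);
        } else {
            /* Lock is busy: deposit the data and return immediately;
             * the lock holder will fold it in. */
            xor_into_buffer(mixback_buf, out, len);   /* hypothetical */
        }
    }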
There has been no comment on those patches so far, but one gets the sense that Ts'o (or someone) will try to route around the whole scalability problem with a separate CSRNG for abusers. That would leave the current approach intact, while still providing a scalable solution for those who are, effectively, inappropriately using the non-blocking pool. Ts'o seemed strongly in favor of that approach, so it seems likely to prevail.

Kleen has asked that his multi-pool approach be merged, since "it works and is actually scalable and does not require any new 'cryptographic research' or other risks". But it is not clear that the complexity and (slightly) reduced security of that approach will pass muster.
Rich access control lists
Linux has had support for POSIX access control lists (ACLs) since the 2.5.46 development kernel was released in 2002 — despite the fact that POSIX has never formally adopted the ACL specification. Over time, POSIX ACLs have been superseded by other ACL mechanisms, notably the ACL scheme adopted with the NFSv4 protocol. Linux support for NFSv4 ACLs is minimal, though; there is no support for them at all in the virtual filesystem layer. The Linux NFS server supports NFSv4 ACLs by mapping them, as closely as possible, to POSIX ACLs. Chances are, that will end with the merging of Andreas Gruenbacher's RichACLs patch set for the 4.4 kernel.

The mode ("permissions") bits attached to every file and directory on a Linux system describe how that object may be accessed by its owner, by members of the file's group, and by the world as a whole. Each class of user has three bits regulating write, read, and execute access. For many uses, that is all the control a system administrator needs, but there are times where finer-grained access control is useful. That is where ACLs come in; they allow the specification of access-control policies that don't fit into the nine mode bits. There are different types of ACLs, reflecting their evolution over time.
POSIX ACLs
POSIX ACLs are clearly designed to fit in with traditional Unix-style permissions. They start by implementing the mode bits as a set of implicit ACL entries (ACEs), so a file with permissions like:
$ ls -l foo
-rw-rw-r-- 1 linus penguins 804 Oct 18 09:40 foo
has a set of implicit ACEs that looks like:
$ getfacl foo
user::rw-
group::rw-
other::r--
The user and group ACEs that contain empty name fields ("::") apply to the owner and group of the file itself. The administrator can add other user or group ACEs to give additional permissions to named users and groups. The actual access control is implemented in a way similar to how the mode bits are handled. If one of the user entries matches, the associated permissions are applied. Otherwise, if one of the group entries matches, that entry is used; failing that, the other permissions are applied.
There is one little twist: the traditional mode bits still apply as well. When ACLs are in use, the mode bits define the maximum permissions that may be allowed. In other words, ACLs cannot grant permissions that would not be allowed by the mode bits. The reason for this behavior is to avoid unpleasant surprises for applications (and users) that do not understand ACLs. So a file with mode 0640 (rw-r-----) would not allow group-write access, even if it had an ACE like:
group::rw-
If a particular process matches a named ACE (by either user or group name), that process is in the group class and is regulated by the group mode bits on the file. The owning group itself can be given fewer permissions than the mode bits would otherwise allow. See this article for a detailed description of how it all works.
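As a quick illustration of those rules, here is a minimal sketch using the standard setfacl and getfacl tools (the user name alice is hypothetical):
$ setfacl -m u:alice:rw- foo
$ getfacl foo
user::rw-
user:alice:rw-
group::rw-
mask::rw-
other::r--
$ chmod g-w foo
$ getfacl foo | grep mask
mask::r--
Adding a named-user entry causes setfacl to create a mask entry, which is what the group mode bits map to once an ACL is present. After the chmod, alice's stored "rw-" entry is still there, but the mask caps her effective access at read-only.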
NFSv4 ACLs
When the NFS community took on the task of defining an ACL mechanism for the NFS protocol, they chose not to start with POSIX ACLs; instead, they started with something that looks a lot more like Windows ACLs. The result is a rather more expressive and flexible ACL mechanism. With one obscure exception, all POSIX ACLs can be mapped onto NFSv4 ACLs, but the reverse is not true.
NFSv4 ACLs do away with the hardwired evaluation order used by POSIX ACLs. Instead, ACEs are evaluated in the order they are defined. Thus, for example, a group ACE can override an owner ACE if the group ACE appears first in the list. NFSv4 ACEs can explicitly deny access to a class of users. Permissions bits are also additive in NFSv4 ACLs. As an example of this, consider a file with these ACLs:
group1:READ_DATA:ALLOW
group2:WRITE_DATA:ALLOW
These ACEs allow read access to members of group1 and write access to members of group2. If a process that is a member of both groups attempts to open this file for read-write access, the operation will succeed. When POSIX ACLs are in use, instead, the requested permissions must all be allowed by a single ACE.
NFSv4 ACLs have a lot more permissions that can be granted and denied. Along with "read data," "write data," and "execute," there are independent permissions bits allowing append-only access, deleting the file (regardless of permissions in the containing directory), deleting any file contained within a directory, reading a file's metadata, writing the metadata (changing the timestamps, essentially), taking ownership of the file, and reading and writing a file's ACLs.
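On an NFSv4 mount, those permission bits can be inspected with the nfs4-acl-tools utilities, where each bit appears as a single letter in nfs4_getfacl output. A sketch of what that might look like for a typically readable file (exact output varies by server and file):
$ nfs4_getfacl foo
A::OWNER@:rwatTnNcCy
A:g:GROUP@:rtncy
A::EVERYONE@:rtncy
Here "A" marks an ALLOW ACE, the "g" flag marks a group principal, and letters such as "r" (read data), "w" (write data), "T" (write attributes), and "C" (write the ACL) correspond to the permissions described above.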
There is a set of bits controlling how ACLs are inherited from their containing directories. ACEs on directories can be marked as being inheritable by files within those directories; there is also a bit to mark an ACE that should only propagate a single level down the hierarchy. When a file is created within the directory, it will be given the ACLs that are marked as being inheritable in its parent directory. This behavior conflicts with POSIX, which requires that any "supplemental security mechanisms" be disabled for new files.
ACLs can have an "automatic inheritance" flag set. When an ACL change is made to a directory, that change will be propagated to any files or directories underneath that have automatic inheritance enabled — unless the "protected" flag is also set. The "protected" flag is set whenever the ACL or mode of the file has been set explicitly; that keeps inheritance from overriding permissions that have been intentionally set to something else. The interesting twist here is that there is no way in Linux for user space to create a file without explicitly setting its mode, so the "protected" bit will always be set on new files and automatic inheritance simply won't work. NFS does have a way to create files without specifying the permissions to use, though, so automatic inheritance will work in that case.
NFSv4 ACLs also differ in how permissions are applied to the world as a whole. The "other" class is called "EVERYONE@", and it means truly everyone. In normal POSIX semantics, if a process is in the "user" or "group" class, the "other" permissions will not even be consulted; that allows, for example, a specific group to be blocked from a file that is otherwise world accessible. If a file is made available to everyone in an NFSv4 ACL, though, it is truly available to everyone unless a specific "deny" ACE earlier in the list says otherwise.
RichACLs
The RichACLs work tries to square NFSv4 ACLs with normal POSIX expectations. To do so, it applies the mode bits in the same way that POSIX ACLs do — the mode specifies the maximum access that is allowed. Since there are far more access types in NFSv4 ACLs than there are mode bits, a certain amount of mapping must be done. So, for example, if the mode denies write access, that will be translated to a denial of related capabilities like "create file," "append data," "delete child," and more.
The actual relationship between the ACEs and the mode is handled via a set of three masks, corresponding to owner, group, and other access. If a file's mode is set to deny group-write access, for example, the corresponding bits will be cleared from the group mask in the ACL. Thereafter, no ACE will be able to grant write access to a group member. The original ACEs are preserved when the mode is changed, though; that means that any additional access rights will be returned if the mode is made more permissive again. The masks can be manipulated directly, giving more fine-grained control over the maximum access allowed to each class; tweaking the masks can cause the file's mode to be adjusted to match.
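The mode/mask interplay can be seen with the getrichacl and setrichacl utilities that ship with the richacl user-space tools; this is a sketch assuming those tools are installed and RichACLs are enabled on the filesystem:
$ setrichacl --set 'owner@:rw::allow everyone@:rw::allow' foo
$ chmod 0640 foo
# The write bits drop out of the group and other masks, so
# everyone@ can no longer write even though its ACE still says rw.
$ chmod 0666 foo
# The stored ACEs were preserved, so write access for everyone@
# returns without re-adding anything.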
There are some interesting complications in the relationship between the ACEs, the masks, and the actual file mode. Consider an example (from this document) where a file has this ACL:
OWNER@:READ_DATA::ALLOW
EVERYONE@:READ_DATA/WRITE_DATA::ALLOW
This ACL gives both read and write access to the owner. If, however, the file's mode is set to 0640, the mask for EVERYONE@ will be cleared, denying owner-write access even though there is nothing in the permissions that requires that. Fixing this issue requires a special pass through the ACL to grant the EVERYONE@ flags to other classes where the mode allows it.
A similar problem comes up when an EVERYONE@ ACE grants access that is denied by the owner or group mode bits. Handling this case requires inserting explicit DENY ACEs for OWNER@ (or GROUP@) ahead of the EVERYONE@ ACE.
The RichACLs patch set implements all of this and more. See this page and the richacl.7 man page for more details. As of this writing, though, there is still an open question: how to handle RichACL support in Linux filesystems.
At the implementation level that question is easily answered; RichACLs are stored as extended attributes, just like POSIX ACLs or SELinux labels. The problem is one of backward compatibility: what happens when a filesystem containing RichACLs is mounted by a kernel that does not implement them? Older kernels will not corrupt the filesystem (or the ACLs) in this case, but neither will they honor the ACLs. That can result in access being granted that would have been denied by an ACL; it also means that ACL inheritance will not be applied to new files.
To prevent such problems, Andreas requested that a feature flag be added to the ext4 filesystem; that flag would prevent the filesystem from being mounted by kernels that do not implement RichACLs. There was some discussion about whether this made sense; ext4 maintainer Ted Ts'o felt that the feature flags were there to mark metadata changes that the e2fsck utility needed to know about to avoid corrupting the filesystem. RichACLs do not apply, since filesystems don't pay attention to the contents of extended attributes.
Over the course of the conversation, though, a consensus seemed to form around the idea that the use of RichACLs is a fundamental filesystem feature. So it appears that once they are enabled for an ext4 filesystem (either at creation time, or via tune2fs), that filesystem will be marked as being incompatible with kernels that don't implement RichACLs. Something similar will likely be done for XFS.
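If that is how things land, enabling the feature would presumably look something like this (a sketch; the richacl feature name comes from the patch discussion and the final spelling could differ, and the device name is illustrative):
$ mkfs.ext4 -O richacl /dev/sdb1
$ tune2fs -O richacl /dev/sdb1
Once the incompatible-feature flag is set, older kernels would refuse to mount the filesystem rather than silently ignore the ACLs.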
If things go as planned, this work will be mainlined during the 4.4 merge window. At that point, NFS servers should be able to implement the full semantics of NFSv4 ACLs; the feature should also be of use to people running Samba servers. This patch set, the culmination of several years' work, should provide a useful capability to server administrators who need fully supported access control lists on Linux.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Ansible, Python 3, and distributions
The switch to Python 3 has been long—and is certainly far from over—but progress is being made in the form of Linux distributions that are switching their default Python to version 3. There are a number of high-profile projects with large, existing code bases (Mercurial and PyPy, for example) that have no imminent plans to switch or have not yet gotten around to it. Ansible is one of those projects, which makes it harder to support in distributions that have moved to Python 3 by default.
Fedora 23 is one such distribution. Problems with its Ansible support were raised by Dusty Mabe on the fedora-devel mailing list. He noted that a simple Ansible playbook required him to install four separate packages (python for Python 2 and three others for the modules used by the playbook) in order to get it to run on a Fedora client (i.e. managed system). As he pointed out, the list of extra packages will vary for each user depending on the Ansible modules used in their playbooks, so there is no one-size-fits-all solution for the problem.
Further down the thread, Bill Nottingham gave a nice overview of how Ansible works and why it needs to lag in supporting Python 3.
Since Ansible needs to work with a lot of systems that only have Python 2 available (including RHEL 5 and derivatives that use Python 2.4), it needs to support the older Python versions. The bulk of the systems managed by Ansible are not on the more cutting edge community distributions, as Nottingham pointed out: "The percentage of people using Ansible to manage Fedora (and other python3-using-distros) doesn't justify moving Ansible to python3 at this time."
As far as the "pain" that Mabe reported, Adam Williamson suggested that Ansible playbooks could ensure the required dependencies were present on the remote system. It would be difficult or impossible for some kind of Ansible runtime meta-package to depend on all of the different Python 2 packages that might be needed, as was suggested by Orion Poplawski, since the list of possibilities is quite long—there is no list of "standard" modules, either.
For example, any Ansible client installation will need the python-dnf package in order to have Python 2 bindings for the DNF package manager to install packages. The rest of the system uses the python3-dnf package to do the same job, but for Python 3, of course. Other dependencies are essentially similar, though playbooks don't typically specify their Python dependencies.
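One common way to follow Williamson's suggestion is to bootstrap the needed packages with Ansible's raw module, which runs a plain command over SSH and thus works even before the required Python libraries are present on the client. A minimal sketch (the host group name is hypothetical):
$ ansible fedora23-hosts -u root -m raw -a "dnf install -y python python-dnf"
After that one-time step, ordinary Python-based modules can run against the client as usual.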
While the PyPy and Mercurial packages (and others based on Python 2) can simply list the dependencies they need (including the python package for Python 2), Ansible has another dimension to deal with. It has an entire ecosystem of Ansible modules, which adds a layer of complexity.
Predictably, there were calls for Ansible to get "out of the stone age", as Kevin Kofler put it. He suggested a fork to make that happen, but that suggestion seems to rest on a fundamental misunderstanding of where Ansible is targeted—and used. As Florian Weimer pointed out, there are really two Ansible installations to consider: the one on the controlling host and the ones on the managed clients.
The Ansible bug report for Python 3 support has a comment in the locked thread that mentions progress in the support for running the controlling (or managing) host using Python 3. In the comment, Toshio Kuratomi said that the Ansible core is moving toward a single code base that can support Python 2.6, 2.7, and 3.4. "We don't anticipate this to allow running ansible on python3 in the near future but we are not hostile to changes that help get us to this goal."
But support for RHEL 5 and its Python 2.4 interpreter is still a problem area. There are techniques to maintain code that needs to run with multiple Python versions, including 2.x and 3.x, but they do not support Python versions earlier than 2.6.
There are other distributions defaulting to Python 3 at this point, notably Arch Linux. The instructions for Ansible on Arch note the problem (and point to an Ansible FAQ entry), but do not directly address the module issue. Arch users have to be fairly savvy, though, so this may be less of a problem than it might be for some Fedorans—newbies in particular.
For Fedora 23, there isn't much the distribution can do. As Williamson said: "it's just...how ansible works?". Some information will be added to the release notes, pointing out the need for the python-dnf package for clients in order to be able to install anything using Ansible.
These kinds of hiccups are to be expected as distributions shift to Python 3. When the decision was made—many moons ago—to break backward compatibility for Python 3, it set all of this in motion. It has taken most of a decade to encounter the problems (Python 3.0 was released in December 2008), but they haven't actually been too daunting, at least so far. One suspects that other distributions will have an even easier ride as Arch and Fedora find (and fix) these rough spots.
Brief items
Distribution quotes of the week
de Raadt: It was twenty years ago you see...
Theo de Raadt is celebrating the twentieth anniversary of the creation of the OpenBSD source tree. "Chuck [Cranor] and I also worked on setting up the first 'anoncvs' to make sure noone was ever cut out from 'the language of diffs' again. I guess that was the precursor for the github concept these days :-)"
OpenBSD 5.8 released
OpenBSD 5.8 has been released. This version features significant improvements throughout the system, including new features.
Distribution News
Ubuntu family
Ubuntu Online Summit: Get your sessions in
The next Ubuntu Online Summit will take place November 3-5 on Google Hangouts. Now is the best time to get your sessions in.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 632 (October 19)
- 5 things in Fedora this week (October 15)
- openSUSE weekly review (October 16)
- Tumbleweed - Review of the Week (October 16)
Shuttleworth: X marks the spot
Mark Shuttleworth introduces the next Ubuntu release, 16.04 LTS.
What fortunate timing that our next LTS should be X, because “xenial” means “friendly relations between hosts and guests”, and given all the amazing work going into LXD and KVM for Ubuntu OpenStack, and beyond that the interoperability of Ubuntu OpenStack with hypervisors of all sorts, it seems like a perfect fit.
And Xerus, the African ground squirrels, are among the most social animals in my home country. They thrive in the desert, they live in small, agile, social groups that get along unusually well with their neighbours (for most mammals, neighbours are a source of bloody competition, for Xerus, hey, collaboration is cool). They are fast, feisty, friendly and known for their enormous… courage. That sounds just about right. With great… courage… comes great opportunity!
openSUSE Leap: Middle ground between cutting edge and conservative (The Register)
The Register takes a look at openSUSE Leap 42.1 beta. A release candidate was made available October 15 and the final release is due November 4. "It's the combination of these things (the powerful tools in YaST, the stability of SUSE Linux Enterprise, the latest packages from Tumbleweed) that make Leap compelling. It will likely be especially compelling to enterprise deployments. The only thing that feels missing from openSUSE at this point is the widespread adoption from users. Which is to say, that the community just doesn't feel as big as what you'll find in the Debian/Ubuntu or even Arch worlds. That may well change with Leap, though. If you've ever been on the fence about openSUSE, I strongly suggest giving Leap a try."
Page editor: Rebecca Sobol
Development
Upcoming work in and around GStreamer
The 2015 GStreamer Conference included several talks that introduced new in-development features or ideas that are experimental in nature. They include a complete redesign of the automatic-decoding element, support for distributing broadcast television, and support for the WebRTC streaming protocol. Some of the features are being developed by members of the core GStreamer development team, while others originate in outside projects. As it frequently does, though, the conference allowed for these distinct teams to put their heads together.
decodebin3
Edward Hervey of Centricular presented his plans for a significant rewrite of the decodebin element, to get feedback from the GStreamer community before proceeding further. In GStreamer, the -bin suffix is used to designate an "auto-plugging" element that tries to recursively construct a processing pipeline to do the right thing for a common task. The decodebin element takes a media stream as input, figures out the stream type, and sets up the correct demultiplexing, parsing, and decoding elements for the content within the stream.
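For context, decodebin is what lets a trivial gst-launch pipeline play almost anything without naming a single demuxer or decoder. A sketch (the file name is hypothetical):
$ gst-launch-1.0 filesrc location=song.ogg ! decodebin ! audioconvert ! audioresample ! autoaudiosink
Here decodebin inspects the bytes arriving from filesrc, then internally builds and links the demuxing, parsing, and decoding chain needed to hand raw audio to the rest of the pipeline.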
The trouble is that the current implementation, decodebin2, was written in 2006—when, Hervey said, "I didn't know the MPEG spec backward." So there are inefficiencies, such as the fact that decodebin2 decodes every media stream in the container file, even if only a subset of the streams are actually being played (say, one out of four available audio tracks). Switching between streams is also convoluted. To extend the multiple-audio-track example from above, switching from the English to the French track requires setting up a new set of connections for the French audio and the video, emitting a signal to stop the current video and English audio streams, then restarting playback with the new setup. Worse still, if the user tries to seek while this swap-over is being performed, they will jump to a position with no media available at all.
But these problems are magnified by more recent streaming protocols like HLS/DASH. A server might offer a selection of dozens of audio streams at a single URL; decodebin2 provides no way to inspect what each stream contains (even simple attributes like language tags), and decoding every stream would likely mean excessive overhead.
Hervey's plan is to write a new decodebin centered around a new, generic "stream" API. First, there will be a high-level GstStream object that makes important metadata (like video resolution or language tags) queryable. Next, multiple streams will be collected in an immutable GstStreamCollection object that elements can inspect to access complete information about every stream contained within.
The new decodebin3 element (or perhaps "decodebin_enterprise" or "decodebin_ultimate," he mused) can then incorporate a GstStreamCollection-processing stage before any decoding takes place and before any playback connections are constructed. That way, the application can present a useful choice of streams to the user, but the playback pipeline can also be smart about switching between streams. It can decode only the requested stream, and when a stream-switching event occurs, it can then set up a new playback connection, unlink the old one, and emit a "reconfigure" event to start decoding the newly activated stream.
This approach will also allow GStreamer to handle situations where a new stream appears suddenly during playback—as can happen with supplementary audio tracks in a broadcast television MPEG stream. And it will allow decodebin to detect and handle stream types that decodebin2 is unequipped for, such as subtitle tracks. Hervey speculated that the design could also be used to add features like gapless playback of a sequence of audio files—although that will require a bit more work. He has done some of the work on this decodebin replacement already and thinks it is viable, but asked the audience to provide feedback. In general, it seems like the GStreamer community agrees that the approach is right, so decodebin3 may be coming to GStreamer sooner rather than later.
DVB and WebRTC streams
Two presentations during the conference described work being done outside the GStreamer core project to implement support for important new media types. Romain Picard introduced the HeliosTv project (developed at the company of the same name) that handles Digital Video Broadcasting (DVB) streams, and Miguel Paris introduced his company's implementation of the WebRTC real-time communication protocol.
Picard explained that the desire for DVB support in GStreamer was driven by the need to integrate broadcast television with the IP-based streaming used to deliver most other media content. DVB signals arrive over the airwaves, from a cable provider, or through a satellite dish, but users care little about that distinction. HeliosTv makes software for set-top boxes and building-wide signal distribution (like one might find in a hotel or office building, although the same design is increasingly common in homes) that handles DVB alongside other streaming sources.
There was a pre-existing DVB element in GStreamer, he said, but the company's developers found it unworkable for the use case in question. So they implemented their own, which is available on GitHub. The HeliosTv DVB element can make use of hardware demultiplexers, which are found in some high-end satellite and terrestrial DVB receivers but are not common in consumer devices. It also uses a different processing pipeline designed to run on a central media server that will be accessed by (potentially physically separate) client devices.
First, the metadata from the DVB signal (including stream attributes and electronic program guide data) is extracted, so that application code can use it without needing to wrangle with the full media stream. The Helios server listens for connections from client applications, which can include live-playback endpoints or recording tools. Once a client connects, two TCP connections are established: one to stream the media, and one for control. At present, the DVB element works well for basic program streaming; Picard said they are working on defining a fuller API and on creating bindings for a range of programming languages.
Paris began his WebRTC talk with a bit of background information on the protocol itself. WebRTC is designed for audio and video chat in web browsers, so it utilizes browser APIs and protocol stacks. But it has proven useful enough that developers are beginning to want WebRTC support in other applications. Consequently, his company Kurento has been developing a set of GStreamer plugins suitable for creating a WebRTC endpoint.
While GStreamer already had good support for the media codecs used in WebRTC (primarily Opus audio and VP8 video), it lacked support for the Interactive Connectivity Establishment (ICE) protocol used for session establishment and only recently added support for Datagram Transport Layer Security (DTLS). The team used libnice to implement ICE support, and uncovered a number of bugs along the way. As it happens, a libnice maintainer happened to be in the audience for the talk, and a number of issues were resolved through the discussion that followed. DTLS support has several inefficiencies, such as requiring per-user encryption operations to be performed by the server, but resolving those problems requires the finalization of work being done in the Privacy Enhanced RTP Conferencing (PERC) working group at the IETF.
The Kurento team has made several contributions that may be useful for other projects. For example, it had to implement bandwidth-estimation heuristics in GStreamer's RTP element to provide congestion control, and it implemented support for WebRTC's data channel feature. Fortunately, again, the Kurento team and several of the developers in the audience were able to start a discussion about the details of these features that looks like it will lead to even more useful code down the road.
Both the DVB and WebRTC projects, while available under open-source licenses, are outside efforts at this point. But, if they do prove useful to the GStreamer community, they may well make their way upstream for a future release. Certainly both topics are of importance to a number of developers and users, and certainly engaging with other GStreamer users and developers at the event is a step in the right direction.
[The author would like to thank the Linux Foundation for travel assistance to attend the GStreamer Conference.]
Brief items
Quotes of the week
ownCloud Server 8.2 released
OwnCloud Server 8.2 is available. This release features a revamped user interface and many improvements for ownCloud administrators. "ownCloud Server 8.2 makes it possible for ownCloud Administrators to send their users notifications, useful to let users know about a maintenance window for example. Admins can now also set limits on trash and version retention, ensuring that trashed files and versions get deleted after a set number of days or are not purged for a certain period. The occ command line tool has gained significant new maintenance and control features. It enables encrypting, decrypting and re-encrypting existing user data and can now set and get system and app configuration values. It can also be used to rescan the file system and update mime types after custom types have been defined."
Maughan: Org-mode Basics (five part series)
Ben Maughan has published a five-part series on using Emacs org-mode for note taking. The first four posts cover structuring notes, using tables, using links and images, and formatting text. The final installment, just published, addresses exporting notes for use in other applications.
Initial release of gnuspeech available
The gnuspeech project has made its first release. The announcement describes gnuspeech as a "new approach to synthetic speech as well as a speech research tool. It comprises a true articulatory model of the vocal tract, databases and rules for parameter composition, a 70,000 word plus pronouncing dictionary, a letter-to-sound fall-back module, and models of English rhythm and intonation, all based on extensive research that sets a new standard for synthetic speech, and computer-based speech research." Two modules are available: gnuspeechsa, which is a graphical, cross-platform speech-synthesis tool, and gnuspeech, the underlying engine and related utilities.
Mailpile: UI updates, OTF news
At the Mailpile webmail project's blog, Bjarni Einarsson announced the release of a revamped user interface. The most important changes are that the UI elements scale down for smartphone and tablet screen sizes, and that a number of full-page reloads have been eliminated—thus improving performance. The same announcement also notes that the project failed to get its Open Technology Fund (OTF) grant request approved, which is a setback on the funding front.
Erlang issue tracker available
The Erlang/OTP project has announced the launch of its first web-based bug tracker, running at bugs.erlang.org. The tracker is a JIRA instance, and replaces the old erlang-bugs mailing list, which will be phased out.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (October 14)
- What's cooking in git.git (October 20)
- Git Rev News (October 14)
- LLVM Weekly (October 19)
- Perl Weekly (October 19)
- PostgreSQL Weekly News (October 18)
- Python Weekly (October 15)
- Ruby Weekly (October 15)
- This Week in Rust (October 19)
- Tahoe-LAFS Weekly News (October 20; returning to publication after a lengthy absence....)
- Wikimedia Tech News (October 19)
Sonic Pi uses code to compose a dance party (Opensource.com)
Opensource.com has an interview with Sam Aaron, creator of Sonic Pi. "Sonic Pi is a musical instrument that happens to use code as its interface. It's also a programming environment that happens to be very capable of making sophisticated sounds. It's actually many things—a tool for learning how to program, for exploring new notations for music, for improvising electronic music, for collaborating on musical ideas via text, for researching new programming techniques related to time and liveness. Most of all, it's a lot of fun."
Ali: Lessons on being a good maintainer
At his blog, Zeeshan Ali explores
what makes for a good maintainer in a free-software project. Among
the properties he discusses are only accepting feature requests when
they come with a specific use case, following "sane
"
commit log rules, and (for library maintainers) getting involved with
at least one application project that uses your API. And, of course,
constantly striving for high-quality code. "To be very honest, if you don't care about quality enough, you really should not be even working on software that effects others, let alone maintaining them.
"
Page editor: Nathan Willis
Announcements
Brief items
The GNU ethical repository criteria
The Free Software Foundation has announced the posting of a set of criteria meant to be used for judging the suitability of code-hosting sites. "The criteria emphasize protection of privacy (including accessibility through the Tor network), functionality without nonfree JavaScript, compatibility with copyleft licensing and philosophy, and equal treatment of all users' traffic."
Red Hat acquires Ansible
Red Hat has announced that it is acquiring Ansible, the company behind the Ansible configuration management system. "Ansible's automation capabilities, together with Red Hat's existing management portfolio, will help users drive down the cost and complexity of deploying and managing both cloud-native and traditional applications across hybrid cloud environments." LWN looked at Ansible in August.
Elections for TDF Board of Directors
Nominations are open for The Document Foundation board of directors. "We kindly ask nominees who would like to stand for elections to provide a 75 words statement on their candidacy as continuous text (so no bullet lists or multiple paragraphs)."
Videos from the Tracing Summit 2015
The Tracing Summit took place last August, colocated with LinuxCon NA. Videos of the presentations are available.
Articles of interest
Appeals Court Gives Google A Clear And Total Fair Use Win On Book Scanning (Techdirt)
Here's a lengthy Techdirt article looking through the US Appeals Court ruling that Google's scanning of books constitutes fair use under copyright law. "Thus, while authors are undoubtedly important intended beneficiaries of copyright, the ultimate, primary intended beneficiary is the public, whose access to knowledge copyright seeks to advance by providing rewards for authorship."
CiviCRM: a key part of the free software movement
The Free Software Foundation covers the CiviCRM User Summit. "CiviCRM is a free software success story, and it's become integral to our work at the FSF. We're thankful to everyone who also works on it, submitting bug reports, programming, testing, and translating. We look forward to being part of its community for many years to come, and using it to win more victories for free software."
Calls for Presentations
SciPy India 2015: call for papers
SciPy India 2015 will be held December 14-16 in Bombay, India. The call for papers ends November 24.
CFP Deadlines: October 22, 2015 to December 21, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| October 26 | April 11-13 | O’Reilly Software Architecture Conference | New York, NY, USA |
| October 30 | January 30-31 | Free and Open Source Developers Meeting | Brussels, Belgium |
| October 30 | January 21-24 | SCALE 14x - Southern California Linux Expo | Pasadena, CA, USA |
| October 31 | November 20-22 | FUEL GILT Conference 2015 | Pune, India |
| November 2 | December 4-5 | Haskell in Leipzig | Leipzig, Germany |
| November 23 | March 19-20 | LibrePlanet | Boston, MA, USA |
| November 24 | December 14-16 | SciPy India 2015 | Bombay, India |
| November 24 | May 16-19 | OSCON 2016 | Austin, TX, USA |
| November 30 | February 1 | MINIXCon 2016 | Amsterdam, Netherlands |
| November 30 | February 5-7 | DevConf.cz 2016 | Brno, Czech Republic |
| December 12 | February 1 | Sysadmin Miniconf | Geelong, Australia |
| December 20 | February 10-12 | netdev 1.1 | Seville, Spain |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
SFLC Fall Conference
The Software Freedom Law Center's Fall Conference will take place on October 30 in New York City. An RSVP is appreciated if you are planning to attend. There will also be a live stream for those unable to attend in person.
SCALE 14x: Cory Doctorow keynotes
Cory Doctorow will be a keynote speaker at SCALE 14x, which will take place January 21-24, 2016 in Los Angeles, CA. This announcement also contains a reminder that the call for papers closes October 30.
DebConf16 fundraising help requested
DebConf16, the annual Debian conference, will take place in Cape Town, South Africa, in July of 2016. Sponsors are needed to help fund the conference. "We are particularly interested in organisations that are either entirely new to sponsoring Debian, or have not sponsored Debian recently (as we are already in touch with our recent supporters)."
Events: October 22, 2015 to December 21, 2015
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| October 19-22 | ZendCon 2015 | Las Vegas, NV, USA |
| October 19-22 | Perl Dancer Conference 2015 | Vienna, Austria |
| October 19-23 | Tcl/Tk Conference | Manassas, VA, USA |
| October 21-22 | Real Time Linux Workshop | Graz, Austria |
| October 23-24 | Seattle GNU/Linux Conference | Seattle, WA, USA |
| October 24-25 | PyCon Ireland 2015 | Dublin, Ireland |
| October 26 | Korea Linux Forum | Seoul, South Korea |
| October 26-28 | Kernel Summit | Seoul, South Korea |
| October 26-28 | OSCON | Amsterdam, The Netherlands |
| October 26-28 | Samsung Open Source Conference | Seoul, South Korea |
| October 27-30 | PostgreSQL Conference Europe 2015 | Vienna, Austria |
| October 27-29 | Open Source Developers' Conference | Hobart, Tasmania |
| October 27-30 | OpenStack Summit | Tokyo, Japan |
| October 29 | FOSS4G Belgium 2015 | Brussels, Belgium |
| October 30 | Software Freedom Law Center Conference | New York, NY, USA |
| November 3-5 | EclipseCon Europe 2015 | Ludwigsburg, Germany |
| November 5-7 | systemd.conf 2015 | Berlin, Germany |
| November 5-8 | mini-DebConf | Cambridge, UK |
| November 6-8 | Jesień Linuksowa 2015 | Hucisko, Poland |
| November 6-8 | Dublin blockchain hackathon | Dublin, Ireland |
| November 7-8 | OpenFest 2015 | Sofia, Bulgaria |
| November 7-8 | PyCON HK 2015 | Hong Kong, Hong Kong |
| November 7-9 | PyCon Canada 2015 | Toronto, Canada |
| November 8-13 | Large Installation System Administration Conference | Washington, D.C., USA |
| November 9-11 | PyData NYC 2015 | New York, NY, USA |
| November 9-11 | KubeCon | San Francisco, CA, USA |
| November 10-11 | Open Compliance Summit | Yokohama, Japan |
| November 10-12 | Allseen Alliance Summit | Santa Clara, CA, USA |
| November 10-13 | Black Hat Europe 2015 | Amsterdam, The Netherlands |
| November 11-13 | LDAP Conference 2015 | Edinburgh, UK |
| November 14-15 | NixOS Conference 2015 | Berlin, Germany |
| November 14-15 | PyCon Czech 2015 | Brno, Czech Republic |
| November 15-20 | Supercomputing 15 | Austin, TX, USA |
| November 16-19 | Open Source Monitoring Conference 2015 | Nuremberg, Germany |
| November 17-18 | PGConf Silicon Valley | San Francisco, CA, USA |
| November 18-19 | Paris Open Source Summit | Paris, France |
| November 18-22 | Build Stuff 2015 | Vilnius, Lithuania |
| November 19-21 | FOSSETCON 2015 | Orlando, Florida, USA |
| November 19 | NLUUG fall conference 2015 | Bunnik, The Netherlands |
| November 20-22 | FUEL GILT Conference 2015 | Pune, India |
| November 20-22 | Postgres User Conference China 2015 | Beijing, China |
| November 21-22 | PyCon Spain 2015 | Valencia, Spain |
| November 21-22 | Capitole du Libre 2015 | Toulouse, France |
| November 21 | LinuxPiter Conference | Saint-Petersburg, Russia |
| November 28 | Technical Dutch Open Source Event | Eindhoven, The Netherlands |
| December 4-5 | Haskell in Leipzig | Leipzig, Germany |
| December 5-6 | openSUSE.Asia Summit | Taipei, Taiwan |
| December 8-9 | Node.js Interactive | Portland, OR, USA |
| December 14-16 | SciPy India 2015 | Bombay, India |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
