LWN.net Weekly Edition for January 21, 2016
OpenSSH and the dangers of unused code
Unused code is untested code, which probably means that it harbors bugs—sometimes significant security bugs. That lesson has been reinforced by the recent OpenSSH "roaming" vulnerability. Leaving a half-finished feature only in the client side of the equation might seem harmless at a cursory glance but, of course, it is not. Those who mean harm can run servers that "implement" the feature to tickle the unused code. Given that the OpenSSH project has a strong security focus (and track record), it is truly surprising that a blunder like this could slip through—and keep slipping through for roughly six years.
The first notice of the bug was posted by Theo de Raadt on January 14. He noted that an update was coming soon and that users could turn off the experimental client roaming feature by setting the undocumented UseRoaming configuration variable to "no". The update was announced by Damien Miller later that day. It simply disabled the roaming feature entirely, though it fixed a few other security bugs as well. The problems have been present since the roaming feature was added to the client (but not the server) in OpenSSH 5.4, which was released in March 2010.
The bug was found by Qualys, which put out a detailed advisory that described two separate flaws, both of which were in the roaming code. The first is by far the most dangerous; it is an information leak that can provide the server with a copy of the client system's private SSH keys (CVE-2016-0777). The second is a buffer overflow (CVE-2016-0778) that is "unlikely to have any real-world impact" because it relies on two non-default options being used by the client (ProxyCommand and either ForwardAgent or ForwardX11).
The private keys of an SSH client are, of course, the most important secret that is used to authenticate the client to a server where the corresponding public key has been installed. An attacker who has that private key can authenticate to any of the servers authorized by the user, assuming that there is no second authentication factor required. So they can effectively act as that user on the remote host(s). It should be noted that password-protected private keys are leaked in their encrypted form, which would still allow an attacker to try to break the passphrase offline. Also, if an agent such as ssh-agent is used, no key material is leaked.
The Qualys advisory includes patches to the OpenSSH server that implement a proof of concept of what a malicious server could do. The proof of concept is incomplete as there are environment-variable parameters used in the examples in the advisory that are not present in that code (notably, "heap_massaging:linux").
At its core, the problem in the client code (aside from still being present long after the server side was removed) is that it uses a server-supplied length to determine the size of a buffer to allocate—without much in the way of sanity checks. It also allocates the buffer using malloc(), which doesn't clear the memory being allocated.
The roaming feature is meant to handle the case when the SSH connection is lost (due to a transient problem of some sort) and allow the client to reconnect transparently. The client stores data that it has sent but that may not yet have been received by the server (and might get lost during the interruption). After the reconnect, the server can request that the client "resend" a certain number of bytes—even if the client never sent that many bytes. The server-controlled offset parameter can be used to trick the client into sending the entire contents of the buffer even though it has not written anything to it, thus leaking the data that was previously stored there.
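In rough outline, the vulnerable pattern looks something like the sketch below. This is not the actual OpenSSH code (the names and structure are invented for illustration), but it shows how a server-chosen buffer size, an uninitialized malloc() allocation, and a server-chosen resend offset combine into an information leak:

    /* Illustrative sketch only -- not the real OpenSSH roaming code. */
    #include <stdlib.h>

    struct roam_buf {
        char   *data;   /* retransmission buffer */
        size_t  size;   /* size negotiated with the server */
        size_t  used;   /* bytes the client has actually written */
    };

    /* The buffer is sized from a value the server sent during key exchange;
     * malloc() does not clear it, so it still holds whatever was freed
     * there earlier (possibly stdio buffers that held key files). */
    int roam_buf_init(struct roam_buf *b, size_t server_requested)
    {
        b->data = malloc(server_requested);   /* no sanity check, no zeroing */
        if (b->data == NULL)
            return -1;
        b->size = server_requested;
        b->used = 0;
        return 0;
    }

    /* On "resume", the server asks for bytes starting at an offset it
     * chooses.  Checking against b->size instead of b->used means the
     * client will happily send back uninitialized heap contents. */
    void roam_resend(struct roam_buf *b, size_t offset, size_t count,
                     void (*send)(const void *, size_t))
    {
        if (offset + count <= b->size)        /* should be b->used */
            send(b->data + offset, count);
    }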
So malicious servers can offer roaming to clients during the key-exchange phase, disconnect the client, then request a whole buffer's worth of data be "resent" after reconnection. There are some conditions that need to be met in order to exploit the flaw that are described in the advisory, such as "heap massaging" to force malloc() to return sensitive data and guessing the client send buffer size. But Qualys was able to extract some private key information from clients running on a number of different systems (including OpenBSD, FreeBSD, CentOS, and Fedora).
Qualys initially believed that the information leak would not actually leak private keys for a few different reasons. For one, the leak is from memory that has been freed, but is recycled in a subsequent allocation, rather than reading data beyond the end of a buffer, such as in a more-typical buffer overflow. In addition, OpenSSH took some pains to clear the sensitive data from memory.
It turns out that some of those attempts to clear sensitive information (like private keys) out of memory using memset() and bzero() were optimized away by some compilers. Clang/LLVM and GCC 5 use an optimization known as "dead store elimination" that gets rid of store operations to memory that is never read again. Some of the changes in the OpenSSH update are to use explicit_bzero() to avoid that optimization in sensitive places.
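The pattern is easy to demonstrate in a few lines of C; this is a generic illustration rather than OpenSSH's code. Because the buffer is never read after the clearing call, the compiler is allowed to drop the memset() as a dead store, which is exactly what explicit_bzero() is meant to prevent:

    #include <string.h>   /* memset(); explicit_bzero() on OpenBSD and glibc >= 2.25 */

    void use_key(const char *key, size_t len);

    void do_crypto(void)
    {
        char key[32];

        /* ... load and use the secret ... */
        use_key(key, sizeof(key));

        /* 'key' is never read again, so a compiler doing dead-store
         * elimination may silently drop this call, leaving the secret
         * in memory: */
        memset(key, 0, sizeof(key));

        /* The fix is to clear it with a call the optimizer must preserve:
         *     explicit_bzero(key, sizeof(key));
         */
    }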
But a much bigger factor in disclosing the key information is the use of the C library's standard I/O functions—in this case fopen() and friends. The OpenSSH client uses those functions to read in the key files from the user's .ssh directory; they do buffered I/O, which means they have their own internal buffers that are allocated and freed as needed. On Linux, that's not a problem because the GNU C library (Glibc) effectively cleanses the buffers before freeing them. But on BSD-based systems, freed buffers will contain data from previous operations.
It is not entirely clear why Qualys was able to extract key information on Linux systems given the Glibc behavior. The advisory does note that there may be other ways for the key material to leak "as suggested by the CentOS and Fedora examples at the end of this section".
Beyond that, OpenSSH versions from 5.9 onward read() the private key in 1KB chunks into a buffer that is grown using realloc(). Since realloc() may return a newly allocated buffer, that can leave partial copies of the key information in freed memory. Chris Siebenmann has analyzed some of the lessons to be learned from OpenSSH's handling of this sensitive data.
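The general shape of that problem, again as an illustrative sketch rather than the actual OpenSSH code, looks like this; every time realloc() decides to move the growing buffer, a partial copy of the data read so far is left behind in freed, uncleared memory:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read a whole file by growing the buffer in 1KB steps.  Whenever
     * realloc() moves the block, the old block -- holding a partial copy of
     * whatever has been read so far -- is freed without being cleared and
     * can later be handed out again by malloc(). */
    char *slurp(FILE *fp, size_t *lenp)
    {
        char *buf = NULL;
        size_t len = 0;
        size_t n;
        char chunk[1024];

        while ((n = fread(chunk, 1, sizeof(chunk), fp)) > 0) {
            char *nbuf = realloc(buf, len + n);
            if (nbuf == NULL) {
                free(buf);
                return NULL;
            }
            buf = nbuf;
            memcpy(buf + len, chunk, n);
            len += n;
        }
        *lenp = len;
        return buf;
    }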
Interactive SSH users who were communicating with a malicious server might well have noticed a problem, though. The OpenSSH client prints a message, "[connection suspended, press return to resume]", whenever a server disconnect is detected. Since causing a disconnect is part of tickling the bug, that message will appear. It would likely cause even a non-savvy user to wonder—and perhaps terminate the connection with Ctrl-C, which would not leak any key information.
But a large number of SSH sessions are not interactive. Various backup scripts and the like use SSH's public-key authentication to authenticate to the server and do their jobs, as does the SSH-based scp command. As Qualys showed, those can be tricked into providing the needed carriage return to resume the connection. Thus they are prime targets for an attack using this vulnerability.
While the bug is quite serious, it is hard to believe it wouldn't have been found if both sides of the roaming feature had been rolled out. Testing and code inspection might have led the OpenSSH developers to discover these problems far earlier. It was presumably overlooked because there was no server code, so it "couldn't hurt" to have the code still present in the client. Enabling an experimental feature by default is a little harder to understand.
For a project that "is developed with the same rigorous security process that the OpenBSD group is famous for", as the OpenSSH security page notes, it is truly a remarkable oversight. It also highlights a lack of community code review. We are sometimes a bit smug in the open-source world because we can examine all of the security-sensitive code running on our systems. But it appears that even for extremely important tools like OpenSSH, the "can" does not always translate into "do". It would serve us well to change that tendency.
Companies and organizations like Qualys are likely to have done multiple code audits on the OpenSSH code over the last six years. Attackers too, of course. The latter are not going to publish what they find, but security researchers generally do. A high-profile bug like this in a security tool that is in widespread use is exactly the kind of bug they are looking for, so it is surprising this was missed (in white hat communities, anyway) for so long. In hindsight, leaving the unused code in the client seems obviously wrong—that's a lesson we can all stand to relearn.
Automated image testing with Verified Pixel
Verifying the authenticity of a digital image is no simple task, given the array of image-manipulation tools available and the difficulty inherent in tracing the provenance of any digital file. But attempting to establish the origin and veracity of a photo is not a lost cause. The non-profit organization Sourcefabric, which produces open-source journalism tools, has developed a web-based utility called Verified Pixel that attempts to make assessing an image's reliability an attainable goal.
The Verified Pixel site notes that news organizations are increasingly struggling with how to verify the authenticity of smartphone footage and other images captured by eyewitnesses' cameras in the field. A lot of news organizations solicit user-contributed images, which results in a glut of input and not enough time to perform forensic analysis of every photo. This is in spite of the fact that there are several well-known forensic tools available.
In April 2015, Sourcefabric and Eyewitness Media received a grant from the Knight Foundation to develop an image-verification web service to address the problem. Verified Pixel is the result of that effort. After teasing the prototype on Twitter in October 2015, Sourcefabric began rolling out access to a test server to beta testers in January 2016. I requested an invite, then spent some time kicking its tires and asking questions of the development team.
The source code is available on GitHub, although the dependencies are likely large enough to ward off casual users. Verified Pixel is implemented as a module for Sourcefabric's newsroom-management tool Superdesk (although the beta is currently a standalone web application, the intent is likely to make Verified Pixel a standard Superdesk component). The server side is written in Python, with an AngularJS client front-end. Like Superdesk, Verified Pixel is designed to be run on an organization's own server; the test server used for the prototype was maintained by Sourcefabric simply to solicit feedback on the program's functionality.
The idea is for Verified Pixel to provide a consistent, multi-user workflow: any user can upload images to the database, and a flexible battery of verification tests will be run to assess each image's reliability. Users can then flag individual images as suspect or potentially valid based on the scores of the validation tests. Authors or editors can subsequently select images to use taking those test results into account.
My pixel is my passport, verify me
The current battery of tests includes three online services: Google's reverse-image search (which will find suspiciously similar images if they exist), TinEye (which will attempt to locate probable duplicates of an image and determine which is the oldest), and Izitru (which runs a set of forensic tests to determine if an image has been edited). In addition, Verified Pixel automatically extracts any Exif data from the image and highlights key fields (for example, plotting GPS locations and camera direction, if available, on a map).
All users can add comments to images in the database, and the image database can be searched on all metadata fields, including the verification-test results. Images can also be grouped into collections several layers deep; the topmost layer is called a "desk," which is representative of a separate office within a publication's newsroom.
The Google and TinEye search results can help attest to whether or not an image has been published previously, which presumably would not be the case for a user-submitted photo of a current event. Such a test would still have value even for images that are not coming in as part of a breaking news story, such as detecting stock photos that were presented as original coverage.
The Izitru service runs six forensic tests and returns a "trust rating" on a one-to-five scale that corresponds to whether or not the image is in a "direct from camera" state or has been modified after the fact. Unfortunately, the service does not detail what the six tests are, although the Verified Pixel test-results page provides some rough descriptions. They include tests of how JPEG data is packaged within the file (based on differences between how cameras and software package JPEGs), tests to detect the traces of camera sensor patterns, and tests that look for artifacts of recompression. Izitru founder Hany Farid has authored a lengthy list of research papers on image forensics that likely cover the same ground.
These tests also depend, at least in part, on analysis of known camera characteristics. The FAQ page notes that "each digital camera has distinct ways of applying JPEG settings when saving a file. Other differences result from artifacts that are introduced when images are saved multiple times." The site elsewhere says that it "relies on a database of 'signatures' that describe the distinct ways that different camera models store their JPEG files" and that cameras not in the database will likely not receive the highest trust rating.
So Izitru can rate the likelihood that an image is unmodified; if the image has been modified, though, further testing is required to determine what the modifications were. Simple resizing is not particularly bad, while adjusting color curves or editing for content are far more serious issues. Izitru is limited to analyzing JPEG files, which may be of concern to some users who would be interested in support for camera raw files. But the JPEG limitation is in line with what many news organizations ask from user-contributed content; in November 2015, the Reuters news service announced it would only accept image files from freelancers that were shot as JPEG.
Looking forward
For free-software developers, however, the service's proprietary and secret test battery is likely a larger issue. The good news is that Verified Pixel is designed to support pluggable test modules, and Sourcefabric's Douglas Arellanes said support for additional verification services is still to come. In an email, Arellanes said that the project spent a fair amount of time working with both OpenCV and ccv on modification-detection and other tasks using machine vision, but ultimately found that it was not a priority for users.
Still, he said that he hopes to attract code contributions to support open-source libraries, since there is a lot that could be applicable. The team has an informal wishlist of additional tests it would like to see, such as automatic image tagging to recognize images containing traumatic content and a method to compare weather data (based on geolocation and timestamps) with the apparent conditions indicated in an image. The traumatic-content issue is one that Verified Pixel developer Sam Dubberley has explored in his own research.
But, for now, the project is focused on getting real-world feedback from news organizations. Arellanes said that "we need to see where the bottlenecks are for image verifiers - is it in workflow and getting the images into newsrooms' CMSes? Is it in making metadata easier? This is what we'd like to glean" from testing. He also added that Verified Pixel may be useful beyond the initial newsroom use case. "We think it has uses in a human rights context as well as in any situation requiring image verification - insurance claims, for example."
The beta runs its test battery on new uploads quickly, and searching and sorting are both painless. Given that the Google and TinEye assessments rely on a massive corpus of published-image data, it is hard to imagine comparable tests that do not dictate the use of a proprietary service. The Izitru modification-detection tests, though, could face stiff competition from other libraries if the project attracts a development community. In the meantime, it is easy to see how automating all such tests and collating the results can help users—in a newsroom or elsewhere—simplify the job of sifting through digital images of unknown trustworthiness.
An interview with Joey Hess
In 1992 I interviewed Linus Torvalds for my little Linux newsletter, Linux News. That was traumatic enough that I haven't interviewed anyone since, but a few months ago I decided that it would be fun to interview Joey Hess, who kindly agreed to it.
Joey is known for many things. He wrote, alone and with others, debhelper, its dh component, and the debian-installer, as well as other tools like debconf. He wrote the ikiwiki wiki engine; and co-founded (with me) the Branchable hosting service for ikiwiki. He wrote git-annex, and ran a couple of successful crowdfunding campaigns to work on it full time. He lives in the eastern US, off-grid, in a solar-powered cabin, conserving power when necessary. He has retired from Debian.
The interview was done over email. This is a write-up of the messages into a linear form, to make it easier to read. All questions are by me, all answers by Joey, except in some places edited by me. The interview took several months, because I had a lot of other things I was doing, so sometimes it took me weeks to ask my next question. At one point, Joey pointed out that "the interviewee may become a different person than at the beginning".
Most of the credit for this interview goes to Joey. I merely asked some questions and wrote up the answers.
Lars: You were one of the most productive and respected Debian developers for a very long time. What made you want to leave the project?
Joey: A hard question to start with! Probably you didn't mean for it to be a hard question, but I guess you've read my blog post on leaving and my resignation, and it seems they didn't answer the question well enough. Perhaps they hint vaguely at problems I saw without giving enough detail, or suggest I had some ideas to solve them. And so, I guess, you (and others) ask this question, and I feel I should do my best to find an answer to it.
Thing is, I don't know if I can answer it well. Our experience of big problems can seem vague (recall the blind men and the elephant). Where I had good ideas, I had a very long time indeed to try to realize them, and firing all my dud ideas off as parting shots on the way out is not likely to have achieved much.
I do have the perspective now for a different kind of answer, which is that if I'd known how bothersome the process of leaving Debian turns out to be, I might not have bothered to formally leave.
Perhaps it would be easier to stop participating, just let things slide. Easier to not need to worry about my software going unmaintained in Debian; to not worry about users (or DNS registrars) who might try to contact me at my Debian email address and get an ugly "Unrouteable address" bounce; to not feel awkward when I meet old friends from Debian.
But, if I'd gone that route, I'd lack the perspective I have now, of seeing Debian from the outside. I'd not have even the perspective to give this unsatisfying answer.
Lars: From the blog post, I understand that you prefer to work on smaller projects, where it's easier to make changes. Or perhaps I'm over-interpreting, since that's a feeling I have myself. I have, from time to time, spent a bit of thought on ways to make things better in Debian in this regard. My best idea, mostly untried, is to be able to branch and merge at the distro level: any developer (preferably anyone, not just Debian developers) could do what is effectively "git checkout -b my/feature/branch", make any changes they want in as many packages as they want, have an easy, effective way to build any .debs affected by the changes, and test. If the changes turn out to be useful, there would be a way to merge the source changes back. Do you have any thoughts on that?
Joey: I'm fairly addicted to that point in development of a project where it's all about exploring a vast solution space, and making countless little choices that will hopefully add up to something coherent and well thought out and useful. Or might fail gloriously.
Some projects seem to be able to stay in that state for a long time, or at least re-enter it later; in others it's a one-time thing; and in less fun areas, I hear this may never happen in the whole life cycle of an enterprise thingamajig.
Nothing wrong with the day-to-day work of fixing bugs and generally improving software, but projects that don't sometimes involve that wide-open sense of exploration are much less fun and interesting for me to work on.
Feels like a long time since I got much of that out of working on Debian. It certainly happened back in debian-installer days, and when I added dh to debhelper (though on a smaller scale), but I remember it used to all seem much more wide open.
I don't think this is entirely a social problem; technology is very important too. When I can make changes to data types and a strong type system lets me explore the complete ramifications of my changes, it's easier to do exploratory programming in an established code base than when I'm stumbling over technical debt at every turn. But I feel in the case of Debian, a lot of it does have to do with accumulated non-technical debt.
Lars: You mention a strong type system, and you're known as a Haskell programmer. Previously you used Perl a lot. How would you compare programming in Haskell versus Perl? Especially on non-small programs, such as debhelper and ikiwiki versus git-annex? All are, by now, quite mature programs with a long history.
Joey: It's weird to be known as a Haskell programmer, since I still see myself as a beginner, and certainly not an exemplar. Indeed, I recently overheard someone complaining about some code in git-annex not being a good enough example of Haskell code, to merit whatever visibility it has on GitHub.
And they were right, this code is bad code in at least 3 ways; it's doing a lot of imperative I/O work, it's complicated by a hack that was put in to improve behavior without breaking backwards compatibility, and it implements half of an ad-hoc protocol, with no connection to the other half. There should be a way to abstract it out to higher level pure code, something more like this code.
So, I can write bad code in either language. But, I couldn't see so many of the problems with my bad Perl code. And, it's a lot more sane to rework bad Haskell code into better code, generally by improving the types to add abstractions and prevent whole classes of problems from happening, and letting that change seep out into the code. And I continue to grow as a Haskell programmer, in ways that just didn't happen when I was writing Perl.
A couple other differences that I've noticed:
When I get a patch to a Haskell program, it's oh so much easier to tell if it's a good patch than when I get a patch in some other language.
My Haskell code often gets up to a high enough level of abstraction that it's generally reusable. Around 15% of the code in git-annex is not specific to it at all, and I hope to break it out into libraries.
For example, here is a library written for the code I linked to, and then reused in two other places in git-annex. Maybe three places if I get around to fixing that bad code I linked to earlier. Debconf contains an implementation of basically the same thing, but being written in Perl, I never thought to abstract it for reuse this way.
Lars: Speaking of Haskell, what got you interested in it initially? What got you to switch?
Joey: I remember reading about it in some blog posts on Planet Debian by John Goerzen and others, eight or nine years ago. There was a lot of mind-blowing stuff, like infinite lists and type inference. And I found some amazing videos of Simon Peyton Jones talking about Haskell. So I started to see that there were these interesting and useful areas that my traditional Unix programming background barely touched on or omitted. And, crucially, John pointed out that ghc can be used to build real world programs that are as fast and solid as C programs, while having all this crazy academic stuff available.
So, I spent around a year learning the basics of Haskell — very slowly. Didn't do much with it for a couple of years because all I could manage were toy programs and xmonad configurations, and I'd get stuck for hours on some stupid type error.
It was actually five years ago this week that I buckled down and wrote a real program in Haskell, because I had recently quit my job and had the time to burn, even though it felt like I could have dashed off in Perl in one day what took me a week to write in Haskell. That turned out to be git-annex.
After around another three years of writing Haskell, I finally felt comfortable enough with it that it seemed easier than using other languages. Although often mind-blowing still.
Lars: Haskell has a strong, powerful type system. Do you feel that does away with the need for unit testing completely? Do you do any unit testing, yourself? How about integration testing of an entire program? If you do that, what kind of tool do you use? Have you heard of my yarn tool and if so, what are your opinions on that?
Joey: It's a myth that strongly typed or functional programs don't need testing. Although they really do sometimes work correctly once you get them to compile, that's a happy accident, and even if they do, so what — some future idiot version of the savant who managed that feat will be in the code later and find a way to mess it up.
Often it's easier to think of a property that some data would have, and write a test for it, than it would be to refine the data's type to only allow data with that property. Quickcheck makes short work of such tests, since you can just give it the property and let it find cases where it doesn't hold.
My favorite Quickcheck example is where I have two functions that serialize and deserialize some data type. Write down:
prop_roundtrip val = deserialize (serialize val) == val
and it will automatically find whatever bugs there are in edge cases of the functions. This is good because I'm lazy and not good at checking edge cases. Especially when they involve something like Unicode.
Most of my unit testing is of the Quickcheck variety. I probably do more integration testing overall though. My test infrastructure for git-annex makes temporary git repositories and runs git-annex in them and checks the results. I'm not super happy with the 2000 lines of Haskell code that runs all the tests, and it's too slow, but it does catch problems from time to time and by now checks a lot of weird edge cases due to regression tests.
I generally feel I'm quite poor at testing. I've never written tests that do mocking of interfaces, all that seems like too much work. I don't always write regression tests, even when I don't manage to use the type system to close off any chance of a bug returning. I probably write an average of one to five tests a month. Propellor has twelve thousand lines of code that runs as root on servers and not a single test. I'm not really qualified to talk about testing, am I?
I've read the yarn documentation before, and it's neat how it's an executable human readable specification. I'd worry about bugs in the tests themselves though, without strong types.
The best idea I ever had around testing is: put the test suite in your program, so it can be run at any time, anywhere. Being able to run "git annex test" or ask users to run it is really useful for testing how well git-annex gets on in foreign environments.
Lars: One of the things you're known for, and which repeatedly is remarked on by Hacker News commenters, is that you live off the grid in the middle of the wilderness, relying on a dial-up modem for Internet. You've blogged about that. What led you on this path? What is your current living situation? Why do you stay there? Do you ever think about going somewhere to live in a more mainstream fashion? What are the best and worst things about that lifestyle?
Joey: I seem to have inverted some typical choices regarding life and work...
Rather than live in a city and take vacations to some rustic place in the country, I live a rustic life and travel to the city only when I want stimulation. This gives me a pleasant working environment with low distractions, and is more economical.
Rather than work for some company on whatever and gain only a paycheck and a resume, I work because I want to make something; the resulting free software is my resume, and the money somehow comes when someone finds my work valuable and wants it to continue. (Dartmouth College at the moment.)
Right now I'm renting a nice house with enough woods surrounding it to feel totally apart, located in a hole in the map that none of the nearby states of Tennessee, Virginia, or Kentucky have much interest in, so it's super cheap. It's got grape arbors and fruit trees, excellent earth-sheltered insulation, ancient solar panels and a spring and phone line and not much else by way of utilities or resources. I haul water, chop firewood, and now in the waning days of the year, have to be very careful about how much electricity I use.
I love it. I'm forced to get out away from keyboard to tend to basic necessities, and I feel in tune with the seasons, with the light, with the water, with everything that comes in and goes out. Even the annoying parts, like a week of clouds that mean super low power budget, or having to hike in food after a blizzard, or not being able to load a bloated web page in under an hour, seem like opportunities to learn and grow and have more intense experiences.
I kind of fell into this, by degrees. Working on free software was a key part, and then keeping working on it until I'd done things that mattered. Also, being willing to try a different lifestyle and keep living it until it became natural. Being willing to take chances and follow through, basically.
I've done this on and off for over ten years, but it still seems it could fall apart any time. I'm enjoying the ride anyway, and I feel super lucky to have been able to experience this.
Lars: What got you started with programming? When? What was your first significant program?
Joey: I bought an Atari computer with 128KB of RAM and BASIC. It came with no interesting programs, so provided motivation to write your own. I think that some of the money to pay for it, probably $50 or so, was earned working on the family tobacco farm. I was ten.
I have a blog post with some other stories about that computer. And I still have the programs I wrote, you can see them at http://joeyh.name/blog/entry/saved_my_atari_programs/.
But "significant" programs? That's subjective. Writing my own Tetris clone seemed significant at the time. The first program that seems significant in retrospect would be something from much later on, like debhelper.
Lars: What got you into free software?
Joey: I got into Linux soon after I got on the Internet at college, and from there learned about the GNU project and free software. I started using the GPL on my software pretty much immediately, mostly because it seemed to be what all the cool kids were doing.
Took me rather longer to really feel free software was super important in its own right. I remember being annoyed in the late 90's to be stereotyped as a Debian guy and thus a free-software fanatic, when I was actually very much on the pragmatic side. Sometime since then, free software has come to seem crucially important to me.
These days feel kind of like when the scientific method was still young and not widely accepted. Crucial foundational stuff is being built thanks to free software, but at the same time we have alchemists claiming to be able to turn their amassed user data into self-driving cars. People are using computers in increasingly constrained ways, so they are cut off from understanding how things work and become increasingly dis-empowered. These are worrying directions when you try to think long-term, and free software seems the only significant force in a better direction.
Lars: What advice would you give to someone new to programming? Or to someone new to free software development?
Joey: Programming can be a delight, a font of inspiration, of making better things and perhaps even making things better. Or it can be just another job. Give it the chance to be more, even if that involves quitting the job and living cheap in a cabin in the woods. Also, learn a few quite different things very deeply; there's too much quick, shallow learning of redundant stuff.
Security
The HTTPS bicycle attack
While HTTPS is an encrypted protocol, it does leak a certain amount of information about the communication—the source and destination addresses, at a minimum. But a newly reported technique can actually "see" inside of the encrypted data without requiring the key or cracking the encryption. By using the length information inherent in the protocol, some simple math can be done to determine the length of some portions of the encrypted data, which can be used to figure out things like password length. It only requires a recording of the packets in a session of interest, along with a bit of information about the target, which means it can be performed days or months later.
In a paper [PDF], Guido Vranken described the weakness that he has dubbed the "HTTPS bicycle attack". The name comes from the idea that wrapping a bicycle as a gift doesn't really hide what is inside the package. Similarly, HTTPS doesn't entirely obscure the contents of its encrypted payloads.
Vranken concentrates on stream ciphers in the paper, noting that they have a 1:1 relationship between the plain text and the cipher text; adding one byte to the plain text results in an additional encrypted byte in the HTTPS payload. The attack only considers messages that have the "application data" content type (0x17 in the first byte) and uses the length information stored at the fourth and fifth bytes of the message. From that, coupled with a little detective work, things like the length of a password submitted to the site can be derived.
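The relevant plaintext really is just the five-byte record header: one byte of content type, two bytes of protocol version, and two bytes of length. A small sketch (a hypothetical helper, not code from the paper) that pulls the application-data lengths out of a captured byte stream might look like:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define TLS_APPLICATION_DATA 0x17

    /* Walk one direction of a captured TLS byte stream and print the
     * plaintext length field of each application-data record. */
    void dump_record_lengths(const uint8_t *buf, size_t len)
    {
        size_t off = 0;

        while (off + 5 <= len) {
            uint8_t  type   = buf[off];
            uint16_t reclen = (buf[off + 3] << 8) | buf[off + 4];

            if (type == TLS_APPLICATION_DATA)
                printf("application data record: %u bytes\n", (unsigned)reclen);

            off += 5 + reclen;    /* skip header plus payload */
        }
    }

With a stream cipher, each of those payload lengths tracks the plaintext length byte-for-byte, which is what makes the rest of the analysis possible.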
The bicycle technique will be most effective for targeted attacks, where an eavesdropper can record the traffic to and from a host of interest. In particular, the "user agent" header being sent by the browser (or, really, its length) is helpful, though not necessarily required. It can be captured, along with other standard headers sent by the user's browser, from a regular HTTP request to any site. There may be other unknown headers in the HTTPS requests, but their length can be deduced from other encrypted requests as Vranken has shown.
The other major piece of the puzzle is that the attacker must also record their own session that exercises the web application in the same way that the victim has. Because they can decode their own traffic, the attacker gains the knowledge of the contents and lengths of various resources requested in the process. That allows the attacker to figure out which HTTPS messages correspond to the ones they are interested in.
For example, if a particular login page consists of half a dozen different resources (e.g. images, style sheets), each with a distinct length, it is relatively straightforward to isolate that part of the encrypted stream even if the requests are handled in a different order. In addition, the analysis can ignore any constant difference in the sizes of the requests that comes from additional or different headers that the victim's browser sends. (Vranken used Pearson correlation to match a WordPress login page and its resources in the paper.)
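The correlation itself is nothing exotic. Given two vectors of observed record lengths, the standard Pearson formula (shown here as a generic helper, not code from the paper) indicates how well the attacker's own recorded page load matches the victim's:

    #include <math.h>
    #include <stddef.h>

    /* Pearson correlation between two vectors of observed record lengths --
     * e.g. the resource sizes the attacker measured when loading the login
     * page versus the sizes seen in the victim's captured session.  A value
     * close to 1.0 suggests the two sessions fetched the same resources. */
    double pearson(const double *x, const double *y, size_t n)
    {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        size_t i;

        for (i = 0; i < n; i++) {
            sx  += x[i];
            sy  += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        return (n * sxy - sx * sy) /
               (sqrt(n * sxx - sx * sx) * sqrt(n * syy - sy * sy));
    }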
Once the messages of interest are identified, the request that sends the login credentials is scrutinized. Its length will consist of a mixture of known headers, unknown headers, and the actual form parameters that are being submitted. The length of the unknown headers can be derived from the other requests since the attacker knows the lengths they recorded from their own session. The difference between those other requests and what the attacker recorded can be used to adjust the length of the authentication message, which just leaves the length of the form parameters.
The login credentials consist of both a username and password, of course, so all of the analysis only gives the combined length of the two. That, again, is where targeting comes in. In general, finding out the username for a target is not that difficult. Subtracting its length gives the attacker the length of the target's password.
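To make the arithmetic concrete with invented numbers: suppose the victim's encrypted login request is 458 bytes, the attacker's own recorded request for the same form is 442 bytes, the constant difference attributable to the victim's extra headers (measured from the other requests in the same session) is 7 bytes, and the attacker's own username and password totaled 15 characters. Then the victim's combined credentials are 458 - 442 - 7 + 15 = 24 characters; if the target's username is known to be 8 characters, the password must be 16.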
That may seem like a fair amount of work just to get the length of the password, but that can be used in various ways to potentially compromise the account (e.g. brute force, dictionary attacks). In addition, Vranken showed several other ways that the length of a string in a web request or response (e.g. geographic coordinates, IP addresses) might be used to peer inside the encrypted data to extract useful information.
Vranken offered some suggestions for mitigating the problem. Using JavaScript to hash the password (using SHA-256, say) on the client side would be one way to do that, since all passwords would hash to the same length. That would also mean that the server never has access to the plain-text password. While that would be advantageous in some ways, it would prevent the server from validating the password (e.g. that it must contain letters and digits), which might be undesirable.
Padding the password is another option, though there are some potential pitfalls there. Ensuring that the browser does not strip the padding characters is obviously essential. Variable-length padding seems attractive, but will actually leak information as well. Vranken recommended using the ASCII NUL ("\0") character for padding, then hexadecimal-encoding the password plus padding into a string to be sent to the server.
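A sketch of that scheme follows (in C for consistency with the other examples here, though the paper suggests doing it in the browser; the fixed length is an arbitrary choice, not one from the paper). Every submission ends up the same size on the wire regardless of the real password length:

    #include <stdio.h>
    #include <string.h>

    /* Pad a password with NUL bytes to a fixed length, then hex-encode it
     * so every login submission is the same size.  The server hex-decodes
     * and strips the trailing NULs to recover the real password. */
    #define PADDED_LEN 64   /* arbitrary site-chosen maximum */

    void encode_password(const char *password, char out[2 * PADDED_LEN + 1])
    {
        unsigned char padded[PADDED_LEN] = { 0 };   /* NUL padding */
        size_t len = strlen(password);
        size_t i;

        if (len > PADDED_LEN)
            len = PADDED_LEN;                       /* truncate overlong input */
        memcpy(padded, password, len);

        for (i = 0; i < PADDED_LEN; i++)
            sprintf(out + 2 * i, "%02x", (unsigned)padded[i]);
        out[2 * PADDED_LEN] = '\0';
    }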
This attack is another reminder that encrypted communication is not necessarily a panacea. There are certainly government security agencies that have tons of HTTPS traffic stored that could be used to target a variety of web applications for a subject of interest. Placing the length parameter in the unencrypted portion of the message certainly helps here; if the message boundaries were obscured, this kind of attack would be more difficult, at a minimum.
Brief items
Security quotes of the week
So long as basebands are not audited, and smartphones do not possess IOMMUs and have their operating systems configure them in a way that effectively mitigates the threat, no smartphone can be trusted for the integrity or confidentiality of any data it processes.
It doesn't matter if you break up the backdoor key into a thousand pieces and distribute them to Boy and Girl Scouts sworn to only use them in a national emergency.
A happy user is one who finds that a useful and fun-to-use tool also protects him from threats that he often may not fully appreciate until it’s too late.
De Raadt: Important SSH patch coming soon
Theo de Raadt suggests that a significant OpenSSH security issue is about to be exposed; the message reads, in full: "Important SSH patch coming soon. For now, every on all operating systems, please do the following: Add undocumented 'UseRoaming no' to ssh_config or use '-oUseRoaming=no' to prevent upcoming #openssh client bug CVE-2016-0777. More later."
Update: that important patch appears to be OpenSSH 7.1p2, available now. "The OpenSSH client code between 5.4 and 7.1 contains experimential support for resuming SSH-connections (roaming). The matching server code has never been shipped, but the client code was enabled by default and could be tricked by a malicious server into leaking client memory to the server, including private client user keys." There are a few other security fixes there as well.
Update 2: see the Qualys advisory for vast amounts of detail.
An unpleasant local kernel vulnerability
Perception Point discloses a use-after-free vulnerability in the kernel's keyring subsystem; it is exploitable for local privilege escalation. "If a process causes the kernel to leak 0x100000000 references to the same object, it can later cause the kernel to think the object is no longer referenced and consequently free the object. If the same process holds another legitimate reference and uses it after the kernel freed the object, it will cause the kernel to reference deallocated, or a reallocated memory. This way, we can achieve a use-after-free, by using the exact same bug from before. A lot has been written on use-after-free vulnerability exploitation in the kernel, so the following steps wouldn’t surprise an experienced vulnerability researcher." This bug, introduced in 3.8, looks like a good one to patch quickly; of course, for vast numbers of users of mobile and embedded systems, that may not be an option.
Linux Kernel ROP - Ropping your way to #
This article from Cysec Labs starts a series explaining how return-oriented programming (ROP) can be used to exploit vulnerabilities in the kernel. "ROP techniques take advantage of code misalignment to identify new gadgets. This is possible due to x86 language density, i.e., the x86 instruction set is large enough (and instructions have different lengths), that almost any sequence of bytes can be interpreted as a valid instruction."
New vulnerabilities
bind9: denial of service
Package(s): bind9
CVE #(s): CVE-2015-8704
Created: January 20, 2016
Updated: February 29, 2016
Description: From the Debian advisory: It was discovered that specific APL RR data could trigger an INSIST failure in apl_42.c and cause the BIND DNS server to exit, leading to a denial-of-service.
cacti: SQL injection
Package(s): cacti
CVE #(s): CVE-2015-8604
Created: January 14, 2016
Updated: January 20, 2016
Description: From the Debian-LTS advisory: It was discovered that there was another SQL injection vulnerability in cacti, a web interface for graphing monitoring systems.
dbconfig-common: information leak
Package(s): dbconfig-common
CVE #(s):
Created: January 15, 2016
Updated: January 20, 2016
Description: From the Debian-LTS advisory: It was discovered that dbconfig-common could, depending on the local umask, make PostgreSQL database backups that were readable by other users than the database owner.
docker: information disclosure
Package(s): docker go
CVE #(s): CVE-2015-8618
Created: January 18, 2016
Updated: May 18, 2016
Description: From the Arch Linux advisory: This issue can affect RSA computations in crypto/rsa, which is used by crypto/tls. TLS servers on 32-bit systems could plausibly leak their RSA private key due to this issue. Other protocol implementations that create many RSA signatures could also be impacted in the same way. Specifically, incorrect results in one part of the RSA Chinese Remainder computation can cause the result to be incorrect in such a way that it leaks one of the primes. While RSA blinding should prevent an attacker from crafting specific inputs that trigger the bug, on 32-bit systems the bug can be expected to occur at random around one in 2^26 times. Thus collecting around 64 million signatures (of known data) from an affected server should be enough to extract the private key used. On 64-bit systems, the frequency of the bug is so low (less than one in 2^50) that it would be very difficult to exploit. Nonetheless, everyone is strongly encouraged to upgrade. A remote unauthenticated attacker can extract a private RSA key by passively collecting signatures.
dwarfutils: information leak
Package(s): dwarfutils
CVE #(s): CVE-2015-8750
Created: January 15, 2016
Updated: January 21, 2016
Description: From the Debian advisory: It was discovered that there was a NULL dereference in dwarfutils, a tool to dump DWARF debug information from ELF objects.
ecryptfs-utils: privilege escalation
Package(s): ecryptfs-utils
CVE #(s): CVE-2016-1572
Created: January 20, 2016
Updated: February 17, 2016
Description: From the Debian advisory: Jann Horn discovered that the setuid-root mount.ecryptfs_private helper in the ecryptfs-utils package would mount over any target directory that the user owns, including a directory in procfs. A local attacker could use this flaw to escalate his privileges.
ffmpeg: multiple vulnerabilities
Package(s): ffmpeg
CVE #(s): CVE-2015-6818 CVE-2015-6820 CVE-2015-6821 CVE-2015-6822 CVE-2015-6823 CVE-2015-6824 CVE-2015-6825 CVE-2015-6826
Created: January 15, 2016
Updated: January 20, 2016
Description: From the Mageia advisory:
CVE-2015-6818 - The decode_ihdr_chunk function in libavcodec/pngdec.c in FFmpeg before 2.4.11 does not enforce uniqueness of the IHDR (aka image header) chunk in a PNG image, which allows remote attackers to cause a denial of service (out-of-bounds array access) or possibly have unspecified other impact via a crafted image with two or more of these chunks.
CVE-2015-6820 - The ff_sbr_apply function in libavcodec/aacsbr.c in FFmpeg before 2.4.11 does not check for a matching AAC frame syntax element before proceeding with Spectral Band Replication calculations, which allows remote attackers to cause a denial of service (out-of-bounds array access) or possibly have unspecified other impact via crafted AAC data.
CVE-2015-6821 - The ff_mpv_common_init function in libavcodec/mpegvideo.c in FFmpeg before 2.4.11 does not properly maintain the encoding context, which allows remote attackers to cause a denial of service (invalid pointer access) or possibly have unspecified other impact via crafted MPEG data.
CVE-2015-6822 - The destroy_buffers function in libavcodec/sanm.c in FFmpeg before 2.4.11 does not properly maintain height and width values in the video context, which allows remote attackers to cause a denial of service (segmentation violation and application crash) or possibly have unspecified other impact via crafted LucasArts Smush video data.
CVE-2015-6823 - The allocate_buffers function in libavcodec/alac.c in FFmpeg before 2.4.11 does not initialize certain context data, which allows remote attackers to cause a denial of service (segmentation violation) or possibly have unspecified other impact via crafted Apple Lossless Audio Codec (ALAC) data.
CVE-2015-6824 - The sws_init_context function in libswscale/utils.c in FFmpeg before 2.4.11 does not initialize certain pixbuf data structures, which allows remote attackers to cause a denial of service (segmentation violation) or possibly have unspecified other impact via crafted video data.
CVE-2015-6825 - The ff_frame_thread_init function in libavcodec/pthread_frame.c in FFmpeg before 2.4.11 mishandles certain memory-allocation failures, which allows remote attackers to cause a denial of service (invalid pointer access) or possibly have unspecified other impact via a crafted file, as demonstrated by an AVI file.
CVE-2015-6826 - The ff_rv34_decode_init_thread_copy function in libavcodec/rv34.c in FFmpeg before 2.4.11 does not initialize certain structure members, which allows remote attackers to cause a denial of service (invalid pointer access) or possibly have unspecified other impact via crafted RV30 or RV40 RealVideo data.
ffmpeg: cross-origin attacks
Package(s): ffmpeg
CVE #(s): CVE-2016-1897 CVE-2016-1898
Created: January 18, 2016
Updated: March 7, 2016
Description: From the CVE entries:
FFmpeg 2.x allows remote attackers to conduct cross-origin attacks and read arbitrary files by using the concat protocol in an HTTP Live Streaming (HLS) M3U8 file, leading to an external HTTP request in which the URL string contains the first line of a local file. (CVE-2016-1897)
FFmpeg 2.x allows remote attackers to conduct cross-origin attacks and read arbitrary files by using the subfile protocol in an HTTP Live Streaming (HLS) M3U8 file, leading to an external HTTP request in which the URL string contains an arbitrary line of a local file. (CVE-2016-1898)
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2015-8539
Created: January 19, 2016
Updated: January 20, 2016
Description: From the SUSE bugzilla entry: If a user key gets negatively instantiated, an error code is cached in the payload area. A negatively instantiated key may then be positively instantiated by updating it with valid data. However, the ->update key type method must be aware that the error code may be there. The paging address is predictable and mappable as userspace memory and can be abused by an attacker to escalate privileges.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-4312 CVE-2015-7566 CVE-2015-8767 CVE-2016-0723
Created: January 19, 2016
Updated: February 1, 2016
Description: From the Debian advisory:
CVE-2013-4312: Tetsuo Handa discovered that it is possible for a process to open far more files than the process' limit leading to denial-of-service conditions.
CVE-2015-7566: Ralf Spenneberg of OpenSource Security reported that the visor driver crashes when a specially crafted USB device without bulk-out endpoint is detected.
CVE-2015-8767: An SCTP denial-of-service was discovered which can be triggered by a local attacker during a heartbeat timeout event after the 4-way handshake.
CVE-2016-0723: A use-after-free vulnerability was discovered in the TIOCGETD ioctl. A local attacker could use this flaw for denial-of-service.
kernel: privilege escalation
Package(s): kernel
CVE #(s): CVE-2016-0728
Created: January 19, 2016
Updated: January 26, 2016
Description: From the Debian advisory: CVE-2016-0728: The Perception Point research team discovered a use-after-free vulnerability in the keyring facility, possibly leading to local privilege escalation. [See the Perception Point advisory for lots more information.]
librsvg: multiple vulnerabilities
Package(s): librsvg
CVE #(s): CVE-2015-7557 CVE-2015-7558
Created: January 15, 2016
Updated: May 18, 2016
Description: From the Mageia advisory: Out-of-bounds heap read in librsvg2 was found when parsing SVG file (CVE-2015-7557). Stack exhaustion due to cyclic dependency causing to crash an application was found in librsvg2 while parsing SVG file (CVE-2015-7558).
libtiff: code execution
Package(s): libtiff
CVE #(s): CVE-2015-8665 CVE-2015-8683
Created: January 14, 2016
Updated: January 27, 2016
Description: From the Mageia advisory: In libtiff, in tif_getimage.c, out-of-bound reads in the TIFFRGBAImage interface in case of unsupported values of SamplesPerPixel/ExtraSamples for LogLUV / CIELab (CVE-2015-8665, CVE-2015-8683).
libxml2: denial of service
Package(s): libxml2
CVE #(s): CVE-2015-8710
Created: January 20, 2016
Updated: January 22, 2016
Description: From the Ubuntu advisory: It was discovered that libxml2 incorrectly handled certain malformed documents. If a user or automated system were tricked into opening a specially crafted document, an attacker could possibly cause libxml2 to crash, resulting in a denial of service.
libxmp: multiple vulnerabilities
Package(s): libxmp
CVE #(s):
Created: January 20, 2016
Updated: January 20, 2016
Description: Version 4.3.10 fixes many bugs, some of which may be exploitable. See the changelog for details.
mbedtls: memory leak
Package(s): mbedtls
CVE #(s):
Created: January 20, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla: In case an entry with the given OID already exists in the list passed to mbedtls_asn1_store_named_data() and there is not enough memory to allocate room for the new value, the existing entry will be freed but the preceding entry in the list will still hold a pointer to it. (And the following entries in the list are no longer reachable.) This results in a memory leak or a double free.
nodejs-ws: remote information disclosure
Package(s): nodejs-ws
CVE #(s):
Created: January 14, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla entry: A vulnerability in the ping functionality of the ws module allowed clients to allocate memory by simply sending a ping frame. The ping functionality by default responds with a pong frame and the previously given payload of the ping frame. As a result, the client receives a non-zeroed-out allocated buffer from the server of arbitrary length. Assuming the usage of a modern kernel, only the memory previously used and deallocated by the node process and the memory that has been previously allocated as a Buffer can be leaked this way.
openssh: multiple vulnerabilities
Package(s): openssh
CVE #(s): CVE-2016-0777 CVE-2016-0778
Created: January 15, 2016
Updated: January 20, 2016
Description: From the Arch Linux advisory:

CVE-2016-0777 (information disclosure): An information leak flaw was found in the way the OpenSSH client roaming feature was implemented. A malicious server could potentially use this flaw to leak portions of memory (possibly including private SSH keys) of a successfully authenticated OpenSSH client.

CVE-2016-0778 (arbitrary code execution): A buffer overflow flaw was found in the way the OpenSSH client roaming feature was implemented that is leading to a file descriptor leak. A malicious server could potentially use this flaw to execute arbitrary code on a successfully authenticated OpenSSH client if that client used certain non-default configuration options (ProxyCommand, ForwardAgent or ForwardX11).
openssh: out of-bound read access
Package(s): openssh
CVE #(s): CVE-2016-1907
Created: January 18, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla: OpenSSH 7.1p2 release notes mention the following security fix: * SECURITY: Fix an out of-bound read access in the packet handling code. Reported by Ben Hawkes.
openstack-glance: unspecified
Package(s): openstack-glance
Created: January 18, 2016
Updated: January 20, 2016
Description: An update to upstream 2015.1.2 fixes unspecified security issues.
php: multiple vulnerabilities
Package(s): php
CVE #(s): CVE-2016-1903 CVE-2016-1904
Created: January 15, 2016
Updated: January 20, 2016
Description: From the Arch Linux advisory:

CVE-2016-1903 (information disclosure): An out-of-bounds vulnerability has been discovered in ext/gd/libgd/gd_interpolation.c in the gdImageRotateInterpolated function. The background color of an image is passed in as an integer that represents an index to the color palette. As there is a lack of validation of that parameter, one can pass in a large number that exceeds the color palette array. This reads memory beyond the color palette. Information of the memory leak can then be obtained via the background color after the image has been rotated.

CVE-2016-1904 (arbitrary code execution): A not further specified integer overflow vulnerability has been discovered in ext/standard/exec.c (in the php_escape_shell_cmd function and the php_escape_shell_arg function). This issue results in a heap buffer overflow that is leading to a denial of service or possibly arbitrary code execution.
php: multiple vulnerabilities
Package(s): php
Created: January 18, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla:

1297730: It was found that an attacker can control type and val via get_zval_xmlrpc_type() with a crafted object-type ZVAL. The Z_STRVAL_P macro and the Z_STRLEN_P macro handle a non-string-type val, which is able to look up an arbitrary memory address. This results in leaking arbitrary memory blocks, crashing the application, or other issues.

1297726: It was found that an attacker can deserialize a string-type ZVAL via php_wddx_deserialize_ex(), which means he is able to create a fake HashTable via the Z_ARRVAL_P macro with the string-type ZVAL. This could result in arbitrary remote code execution.

1297720: A use-after-free vulnerability was found that could possibly lead to arbitrary remote code execution.

1297710: A memory leak and out-of-bounds write was found in fpm_log.c.
python-kdcproxy: unspecified
Package(s): python-kdcproxy
CVE #(s): CVE-2015-5159
Created: January 18, 2016
Updated: January 20, 2016
Description: An update to 0.3.1 fixes CVE-2015-5159.
qemu: multiple vulnerabilities
Package(s): qemu
CVE #(s): CVE-2015-8613 CVE-2015-8619 CVE-2015-8743 CVE-2016-1568 CVE-2016-1714
Created: January 18, 2016
Updated: February 1, 2016
Description: From the Mageia advisory:

A stack buffer-overflow vulnerability has been discovered in the QEMU emulator built with SCSI MegaRAID SAS HBA emulation support. The flaw occurs when processing the SCSI controller's CTRL_GET_INFO command. A privileged guest user could exploit this flaw to crash the QEMU process instance (denial of service). (CVE-2015-8613)

An out-of-bounds write vulnerability has been found in the QEMU emulator built with Human Monitor Interface (HMP) support. The issue occurs when the 'sendkey' command (in hmp_sendkey) is processed with a 'keyname_len' that is greater than the 'keyname_buf' array size. A user or process could exploit this flaw to crash the QEMU process instance (denial of service). (CVE-2015-8619)

An out-of-bounds read-write access flaw was found in the QEMU emulator built with NE2000-device emulation support. The flaw occurred while performing 'ioport' read-write operations. A privileged (CAP_SYS_RAWIO) user or process could exploit the flaw to leak or corrupt QEMU memory bytes. (CVE-2015-8743)

A use-after-free vulnerability was discovered in the QEMU emulator built with IDE AHCI emulation support. The flaw could occur after processing AHCI Native Command Queuing (NCQ) AIO commands. A privileged user inside the guest could use this flaw to crash the QEMU process instance (denial of service) or potentially execute arbitrary code on the host with QEMU-process privileges. (CVE-2016-1568)

An out-of-bounds read/write flaw was discovered in the QEMU emulator built with Firmware Configuration device emulation support. The flaw could occur while processing firmware configurations if the current configuration entry value was set to be invalid. A privileged (CAP_SYS_RAWIO) user or process inside the guest could exploit this flaw to crash the QEMU process instance (denial of service), or potentially execute arbitrary code on the host with QEMU-process privileges. (CVE-2016-1714)
radicale: multiple vulnerabilities
Package(s): radicale
CVE #(s): CVE-2015-8747 CVE-2015-8748
Created: January 20, 2016
Updated: February 9, 2016
Description: From the Red Hat bugzilla: Multiple security fixes, related mostly to improved input sanitization, appeared in the radicale 1.1 release.
roundcubemail: code execution
Package(s): roundcubemail
CVE #(s): CVE-2015-8770
Created: January 18, 2016
Updated: April 5, 2016
Description: From the Arch Linux advisory: High-Tech Bridge Security Research Lab discovered a path traversal vulnerability in Roundcube. The vulnerability can be exploited to gain access to sensitive information and, under certain circumstances, to execute arbitrary code and totally compromise the vulnerable server. The vulnerability exists due to insufficient sanitization of the "_skin" HTTP POST parameter in the "/index.php" script when changing between different skins of the web application. A remote authenticated attacker can use path traversal sequences (e.g. "../../") to load a new skin from an arbitrary location on the system readable by the webserver. Exploitation of the vulnerability requires valid user credentials and the ability to create files on the vulnerable host. A remote authenticated attacker can access sensitive information and may be able to execute arbitrary code on the affected host.
salt: insecure /tmp file handling
Package(s): salt
CVE #(s): CVE-2015-1838 CVE-2015-1839
Created: January 18, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla:

CVE-2015-1838: Michael Scherer of Red Hat reported insecure /tmp file handling in salt/modules/serverdensity_device.py in SaltStack. This issue is fixed in SaltStack version 2014.7.4.

CVE-2015-1839: Michael Scherer of Red Hat reported insecure /tmp file handling in salt/modules/chef.py in SaltStack. This issue is fixed in SaltStack version 2014.7.4.
srtp: denial of service
Package(s): srtp
CVE #(s): CVE-2015-6360
Created: January 19, 2016
Updated: September 8, 2016
Description: From the Debian LTS advisory: Prevent potential DoS attack due to lack of bounds checking on RTP header CSRC count and extension header length. Credit goes to Randell Jesup and the Firefox team for reporting this issue.
xen: denial of service
Package(s): xen
CVE #(s): CVE-2015-8567 CVE-2015-8568
Created: January 15, 2016
Updated: January 20, 2016
Description: From the openSUSE bug report: The Qemu emulator built with VMWARE VMXNET3 paravirtual NIC emulation support is vulnerable to a memory-leakage flaw. It occurs when a guest repeatedly tries to activate the vmxnet3 device. A privileged guest user could use this flaw to leak host memory, resulting in DoS on the host.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The 4.5 merge window is still open, probably until January 24.

Stable updates: none have been released since December 14. The 4.1.16, 3.14.59, and 3.10.95 updates are in the review process as of this writing; they can be expected on or after January 22.
Quotes of the week
In fact our kernel configuration UI and workflow is still so bad that it's an effort to stay current even with a standalone and working .config, even for experienced kernel developers...
Kernel development news
4.5 merge window part 2
As of this writing, Linus has pulled 8,415 non-merge changesets into the mainline repository for the 4.5 development cycle; 5,300 of those have come in since last week's summary. Recent merge-window history (12,092 patches for 4.2, 10,756 for 4.3, 11,528 for 4.4) suggests that we probably have some merging to go still; a quick look at linux-next suggests that there is still a fair amount of unmerged work in the ARM tree in particular. It is probably fair to say, though, that the bulk of the significant features that we will see in 4.5 are in place now.

The most significant of those features include:
- There is a new restriction on access to memory via /dev/mem:
it can no longer access ranges of memory that have been claimed by a
device driver. The specific purpose is to protect non-volatile memory
arrays, which, due to their size, are relatively easy to hit by
accident, but there are other advantages as well. Someday, perhaps,
/dev/mem will go away entirely, but there are still a few
things that use it now. Note that the first 1MB of memory is
unaffected by this restriction; see the
commit changelog for some more information.
- The kernel's persistent-memory support has, until now, lacked the
ability to properly support direct I/O and DMA to persistent memory.
That has changed in 4.5 with the
merging of proper support for page structures backing up
persistent-memory arrays.
- The libnvdimm (non-volatile memory) layer has gained a bad-block
management layer borrowed from the MD RAID code.
- The XFS filesystem now performs checksum validation of all log entries
before applying them during recovery. That should greatly reduce the
chance of applying corrupted data.
- There is now more extensive accounting of kernel memory allocated via
the slab allocators. At the user level, users will see various kernel
allocations charged against their memory-control-group limits. At the
kernel level, the new SLAB_ACCOUNT and __GFP_ACCOUNT
flags are used to mark allocations that should be charged in this
way. Among others, mm_struct, vm_area_struct,
dentry, and inode structures are all tracked now.
- As described in this article, it is
now possible to increase the range of randomness used for
address-space layout randomization. That might increase the security
of the system, at the possible cost of making huge allocations fail.
- The MADV_FREE option to madvise(), which has been under development for some time, has finally been merged. MADV_FREE allows an application to mark memory that it won't need immediately; the kernel can then reclaim that memory preferentially if resources are tight. (A minimal usage sketch appears just after this list.)
- User-space mode-setting support, deprecated for years, has finally
been removed from the Radeon driver. With luck, all users have long
since switched to kernel mode-setting.
- New hardware support includes:
- Audio:
Cirrus Logic CS47L24 codecs,
Imagination Technologies audio controllers,
Rockchip rk3036 Inno codecs,
Dialog Semiconductor DA7217 and DA7218 audio codecs,
Texas Instruments pcm3168a codecs,
Pistachio SoC internal digital-to-analog converters,
Realtek RT5616 and 5659 codecs, and
AMD audio coprocessors.
- Graphics:
Panasonic VVX10F034N00 1920x1200 video mode panels and
Sharp LS043T1LE01 qHD video mode panels.
Notably, the "Etnaviv" driver, a free driver for Vivante GPUs,
has finally been merged. The AMD driver has gained PowerPlay
power-management support.
- Industrial I/O:
Memsic MXC6255 orientation-sensing accelerometers,
TI Palmas general-purpose analog-to-digital converters (ADCs),
TI ADS8688 ADCs,
TI INA2xx power monitors,
Freescale IMX7D ADCs,
Freescale MMA7455L/MMA7456L accelerometers,
Maxim MAX30100 heart rate and pulse oximeter sensors, and
AMS iAQ-Core VOC sensors.
- Input:
EETI eGalax serial touchscreens and
Technologic TS-4800 touchscreens.
- Miscellaneous:
STMicroelectronics STM32 DMA controllers,
Mediatek MT81xx SPI NOR flash controllers,
Ingenic JZ4780 NAND flash controllers,
HiSilicon SAS SCSI adapters,
TI LM363X voltage regulators,
TI TPS65086 power regulators,
Powerventure Semiconductor PV88060 and PV88090 voltage regulators,
Cirrus Logic Fractional-N Clock synthesizer/multipliers,
Qualcomm MSM8996 clock controllers,
Epson RX8010SJ realtime clocks, and
Intel P-Unit mailboxes.
- USB:
Mediatek MT65xx host controllers,
Renesas USB3.0 peripheral controllers,
Renesas R-Car generation 3 USB 2.0 PHYs,
Hisilicon hi6220 USB PHYs, and
Moxa UPORT 11x0 serial hubs.
- Watchdog: CSR CSRatlas7 watchdogs, Technologic TS-4800 watchdogs, Alphascale ASM9260 watchdogs, Zodiac RAVE watchdog timers, Sigma Designs SMP86xx/SMP87xx watchdogs, and Mediatek SoC watchdogs.
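As promised above, here is a minimal, self-contained sketch of how an application might use the new madvise() flag; the buffer size is arbitrary, and the fallback definition of MADV_FREE is only there so that the example builds against C libraries whose headers predate 4.5.

#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8     /* value used by the 4.5 kernel's UAPI headers */
#endif

int main(void)
{
        size_t len = 64 * 4096;         /* an arbitrary, page-aligned scratch area */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
                return 1;
        memset(buf, 0x55, len);         /* ... use the buffer ... */

        /*
         * Tell the kernel the contents are no longer needed; the pages can
         * be reclaimed lazily if memory gets tight.  Unlike MADV_DONTNEED,
         * the old data may remain visible until reclaim actually happens,
         * and writing to the pages again cancels the deferred free.
         */
        madvise(buf, len, MADV_FREE);

        munmap(buf, len);
        return 0;
}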
Changes visible to kernel developers include:
- A new version of the media controller API has been merged. As Mauro Carvalho Chehab described this work in the pull request: "The goal is to improve the media controller to allow proper support for other types of Video4Linux devices (radio and TV ones) and to extend the media controller functionality to allow it to be used by other subsystems like DVB, ALSA and IIO." Parts of the user-space API remain disabled, though, until 4.6 so some final points can be worked out.
- The extensive huge-page reference counting patch set has been merged. The end goal (supporting transparent huge pages in the page cache) has not yet been reached, though.
The most likely day for the closing of the merge window remains January 24. As usual, we'll cover any final changes that come in through this merge window in next week's edition.
Direct I/O and DMA for persistent memory
The last year or so has seen a great deal of work toward improving the kernel's support of persistent-memory (or "nonvolatile-memory") devices. Persistent memory looks like regular memory to the system in a number of ways, but it differs in others, most notably in that its contents persist across reboots and power cycles. The upcoming 4.5 kernel contains some core memory-management changes that address one of the biggest items left on the "to do" list for persistent memory: support for DMA and direct I/O. Getting there was a multi-step process, though.

One of the biggest areas of disagreement with regard to persistent-memory support has been whether that memory should be represented in the system memory map. Doing so means setting aside considerable amounts of memory for a page structure representing each persistent-memory page; with large persistent-memory arrays, those structures could occupy a significant percentage of the system's RAM — or not fit at all. But the lack of page structures makes persistent memory invisible to much of the low-level memory-management code and, as a result, rules out operations like direct I/O. Since some of the prominent use cases for persistent memory (serving as a fast cache for a huge disk array, for example) require DMA and direct I/O, this was seen as a significant problem.
The solution, merged for 4.5, is evolved from the approaches described here in September 2015. At that point, there was a significant push to use page-frame numbers (PFNs) as a replacement for page structures in much of the memory-management subsystem. If all the memory in the system is seen as a huge array, a PFN is simply an index into that array for a specific page. Any memory that is addressable by the CPU will have an associated PFN, so using the PFN seems like a logical way to refer to that page. There is a catch, though: struct page, beyond just identifying a page, also contains crucial information about how that page is being used. So it's not possible to do without struct page entirely.
The approach found in the 4.5 kernel, implemented by Dan Williams, starts with some of the PFN-based ideas that have been passed around in the past, but does not stop there. There is a new type to represent a PFN and some associated information:
typedef struct {
unsigned long val;
} pfn_t;
Adding this type required renaming a couple of pfn_t types already existing in other parts of the kernel. The val member contains the actual PFN, but the high-order bits are used to encode a few extra flags. Two of them, PFN_SG_CHAIN and PFN_SG_LAST, are meant to be used with scatter-gather lists for DMA that use PFNs rather than pointers to page structures, but the scatter-gather part has not (yet) been merged, so these flags are unused as of this writing. Beyond that, PFN_DEV indicates a page frame stored on special "device" memory that may not have an associated page structure, and PFN_MAP indicates that a page structure does, in fact, exist.
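The exact bit assignments live in the kernel's headers; the following sketch only illustrates the idea of packing flags into the top bits of val. The EX_-prefixed names are invented for the example and are not the kernel's own macros.

/*
 * Illustration only: the flags share pfn_t.val with the PFN itself by
 * occupying the uppermost bits, which no real PFN can use.
 */
#define EX_PFN_SG_CHAIN         (1UL << (BITS_PER_LONG - 1))
#define EX_PFN_SG_LAST          (1UL << (BITS_PER_LONG - 2))
#define EX_PFN_DEV              (1UL << (BITS_PER_LONG - 3))
#define EX_PFN_MAP              (1UL << (BITS_PER_LONG - 4))
#define EX_PFN_FLAGS_MASK       (EX_PFN_SG_CHAIN | EX_PFN_SG_LAST | \
                                 EX_PFN_DEV | EX_PFN_MAP)

static inline pfn_t ex_phys_to_pfn_t(phys_addr_t addr, unsigned long flags)
{
        pfn_t pfn = { .val = (addr >> PAGE_SHIFT) | flags };
        return pfn;
}

static inline unsigned long ex_pfn_t_to_pfn(pfn_t pfn)
{
        return pfn.val & ~EX_PFN_FLAGS_MASK;
}

/*
 * A page structure exists unless the memory is "device" memory that was
 * mapped without one: PFN_DEV set, PFN_MAP clear.
 */
static inline bool ex_pfn_t_has_page(pfn_t pfn)
{
        return !(pfn.val & EX_PFN_DEV) || (pfn.val & EX_PFN_MAP);
}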
The kernel has had the ability to (easily) create page structures for persistent memory since 4.3, when devm_memremap_pages() was introduced by Christoph Hellwig:
/* The v4.3 version of this function */
void *devm_memremap_pages(struct device *dev, struct resource *res);
This function will map the region described by res into the kernel's virtual address space, allocating page structures for it along the way. It is not a complete solution to the problem, though, for a couple of reasons. One is that it lacks the reference-counting support needed to ensure that a persistent-memory device doesn't disappear while it is in use. The other, of course, is the same old problem: for a huge persistent-memory array, there just isn't room in RAM for all of those page structures.
The lack of reference counting matters for use cases like DMA and direct I/O; it would not do to have some persistent memory (or the mapping to it) disappear in the middle of an operation. In 4.5, this problem is fixed by requiring persistent-memory drivers to provide a percpu_ref structure to go with any memory array that is mapped into the kernel's address space. A pointer to this reference counter is then stored (with a level of indirection) in the already overloaded page structure; since persistent-memory page structures will never appear in the memory-management subsystem's LRU lists, the space occupied by the lru field is available for this purpose.
The 4.5 work introduces a new flag, _PAGE_DEVMAP, which is stored in the page-table entry itself when persistent memory is mapped into a process's address space. Code that creates references to this memory, get_user_pages() for example, will see that flag and respond by incrementing the percpu_ref counter associated with the persistent-memory array. As long as that counter remains elevated, it will not be possible to remove the memory from the system.
The other problem — the size of all those page structures — has an obvious solution: store those structures in the persistent-memory array itself. This solution is not ideal; page structures can change frequently, which mixes poorly with the relatively high cost of writing to persistent memory. But it is better than having no page structures at all. So, in 4.5, drivers for persistent memory can set aside a chunk of each array for the storage of page structures. That is done by filling in a vmem_altmap structure:
struct vmem_altmap {
const unsigned long base_pfn;
const unsigned long reserve;
unsigned long free;
unsigned long align;
unsigned long alloc;
};
The base_pfn field points to the base of the array. A driver can keep some of the memory for its own purposes by storing the amount in the reserve field; the free field should be set to the number of pages that can be used to hold page structures. A simple allocator built into the memory-management code will then use those pages (tracking them with the alloc field) to create page structures when mapping the array into kernel space.
All of these additions come together in the 4.5 version of devm_memremap_pages():
void *devm_memremap_pages(struct device *dev, struct resource *res,
struct percpu_ref *ref, struct vmem_altmap *altmap);
With this infrastructure in place, a persistent-memory driver can easily set up an array that is mapped into kernel memory and which has page structures behind it. That allows functions like get_user_pages() to work, and, as a consequence, direct I/O and DMA also work. An additional benefit (from a bit more work) is that huge-page mappings into persistent memory work properly.
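To make the moving parts concrete, here is a hedged sketch of how a persistent-memory driver might tie them together; the my_-prefixed names are invented, the reservation sizes are arbitrary, and error unwinding beyond the bare minimum is omitted.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>
#include <linux/pfn.h>

struct my_pmem {
        struct percpu_ref ref;          /* pins the mapping while I/O is in flight */
        void *virt;                     /* kernel virtual address of the array */
};

static void my_pmem_release(struct percpu_ref *ref)
{
        /* Last reference dropped; it is now safe to tear the device down. */
}

static int my_pmem_attach(struct device *dev, struct resource *res,
                          struct my_pmem *pmem, unsigned long nr_meta_pages)
{
        /*
         * Carve out part of the persistent-memory array itself to hold the
         * page structures: .reserve pages stay with the driver, .free pages
         * feed the simple page-structure allocator.
         */
        struct vmem_altmap altmap = {
                .base_pfn = PFN_DOWN(res->start),
                .reserve  = 64,                 /* arbitrary driver metadata area */
                .free     = nr_meta_pages,
        };
        int rc;

        rc = percpu_ref_init(&pmem->ref, my_pmem_release, 0, GFP_KERNEL);
        if (rc)
                return rc;

        /* Map the array and create page structures backed by the altmap. */
        pmem->virt = devm_memremap_pages(dev, res, &pmem->ref, &altmap);
        if (IS_ERR(pmem->virt)) {
                percpu_ref_exit(&pmem->ref);
                return PTR_ERR(pmem->virt);
        }
        return 0;
}

The key point is that the percpu_ref keeps the mapping alive for as long as code like get_user_pages() holds references, while the altmap keeps the struct page overhead out of RAM.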
Without doubt, work on supporting persistent memory will continue for some time; this memory represents a major change in how our systems work. But, as of the 4.5 kernel, it would appear that the important low-level pieces are in place. What remains now is figuring out the best ways to actually use terabytes of directly connected persistent memory, both within the kernel and at the application level. It will be interesting to see what developers come up with in the next few years.
Heading toward 2038-safe filesystems
It is a little hard to call the "year 2038" problem looming, given that it is still nearly 22 years off. But Linux is installed in lots of places where it may continue running past 2038—particularly in embedded systems. Kernel developers have done a fair amount of work to address the problem, much of which we have covered along the way. Attention is now turning to preparing the virtual filesystem (VFS) layer, along with all of the myriad filesystems supported by Linux, for 2038.
In a nutshell, the problem is that the representation of time on a Linux system—inherited from the original Unix systems—uses a 32-bit signed integer, at least on 32-bit systems. It stores the number of seconds since January 1, 1970, which is known as the "epoch". That value will wrap in January 2038. The fallout from the year 2000 problem was far smaller than expected, but that was largely a user-space issue. The year 2038 problem will affect all existing kernels, so getting ahead of the curve is certainly prudent.
There are a number of facets to the filesystem side of the problem. Filesystems often store timestamps for each file (Unix filesystems store three), typically in 32-bit formats. That means those filesystems will need to change to a larger-sized timestamp at some point, but they will also need to be able to handle today's already-on-disk filesystems with their 32-bit timestamps. In addition, filesystems may want to handle on-disk timestamps in their own way, without converting to the 64-bit timestamp that is being used internally in the kernel moving forward.
The VFS layer, on the other hand, has its own timestamp handling for its in-memory inodes and other structures. It will need to change too, but there are various carts and horses that need to be aligned correctly before that can happen.
Deepa Dinamani recently posted a patch set that made an attempt at solving the problem in the VFS layer. Somewhat confusingly to some, it also included patches for some filesystems to try to show the scope of the changes needed. That part of the patch set had not been compile-tested, which was part of the confusion.
But the first seven (of fifteen) patches targeted the VFS. Currently the VFS uses a struct timespec to represent time. That structure suffers from the year 2038 problem because it uses a time_t for seconds, which is 32 bits on some systems. It also uses a long to store nanoseconds, which can vary in size as well. That means the structure has a different size on different systems. The replacement for that in a year-2038-compatible world is struct timespec64, which has a 64-bit seconds field, but still has a long for nanoseconds, so it still will change size between systems.
Dinamani proposed using a new struct inode_timespec that is defined as a 64-bit seconds field and a 32-bit nanoseconds field everywhere. It is mainly introduced to prevent the need for a big "flag day" patch that converts everything to a timespec64 at once. She added macros to access the fields so that eventually inode_timespec could be turned into a timespec64. The inode_timespec would be aligned so that it only used 12 bytes, rather than 16 on 64-bit systems. But Dave Chinner called that a premature optimization.
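For reference, the layouts under discussion look roughly like this; the inode_timespec definition and the accessor macros are illustrative of the proposal rather than copies of the actual patches, and the macro names are invented.

/* Today's VFS timestamp: both fields change size between architectures,
 * and time_t is 32 bits on many 32-bit systems. */
struct timespec {
        time_t  tv_sec;
        long    tv_nsec;
};

/* The year-2038-safe kernel type: 64-bit seconds, but tv_nsec is still a
 * long, so the structure size still varies. */
struct timespec64 {
        time64_t tv_sec;
        long     tv_nsec;
};

/* The proposed transitional type: the same layout everywhere. */
struct inode_timespec {
        s64     tv_sec;
        s32     tv_nsec;
};

/* Accessor macros would hide the type, so that inode_timespec could later
 * be swapped for timespec64 without another tree-wide change. */
#define VFS_INODE_GET_MTIME(inode)      ((inode)->i_mtime)
#define VFS_INODE_SET_MTIME(inode, ts)  ((inode)->i_mtime = (ts))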
As the conversation continued, there was a clear difference of opinion about how to attack the whole problem. The memory savings for 12 versus 16 bytes for timestamps in inodes in memory may not be that significant, as Arnd Bergmann pointed out. 32-bit systems will need larger inodes to handle post-2038 timestamps, so it is really a matter of how much they grow. Bergmann copied other architecture mailing lists to see if there were strong feelings about it, but so far there have been no replies.
But Dinamani also wanted feedback on other parts of the patch set. She summarized some of the outstanding questions that needed to be addressed before the problem can be solved. Essentially, there is a tension between the need to move everything to timespec64 and how that can be done without disrupting filesystem and VFS development. Dinamani sees the transitional inode_timespec as something of a necessary evil that will be eliminated once all of the filesystems have been converted.
Chinner, on the other hand, thinks that moving directly to timespec64 makes more sense. Both agreed that there are some preliminary steps that should be taken, such as ensuring that timestamps are range-checked and clamped to reasonable values on their way into and out of filesystems and VFS. There is also the matter of eliminating the use of the CURRENT_TIME macro in filesystems in favor of current_fs_time(), which references the filesystem superblock so that the proper time granularity and range can be enforced. Beyond that, the approaches diverge.
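The CURRENT_TIME conversion, at least, is mechanical; a before-and-after sketch (the surrounding functions are invented for illustration) looks like this:

/* Before: CURRENT_TIME ignores the filesystem's timestamp granularity. */
static void example_touch_old(struct inode *inode)
{
        inode->i_mtime = inode->i_ctime = CURRENT_TIME;
}

/* After: current_fs_time() consults the superblock, so the value is
 * truncated to the granularity the filesystem declares there; range
 * clamping could be enforced in the same place. */
static void example_touch_new(struct inode *inode)
{
        inode->i_mtime = inode->i_ctime = current_fs_time(inode->i_sb);
}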
Rather than go through an intermediate inode timestamp type, so that filesystems can be converted over time, Chinner would like to turn that on its head a bit. Start by ensuring that all filesystems that use timespec internally call a (for now empty) conversion function to change them to and from the VFS representation. That would eliminate all of the macro changes that were needed when using inode_timespec:
Internally, time handling in those filesystems could remain unchanged; it would just be a change at the boundary between the filesystem and the VFS. That would isolate the changes that need to be done for the VFS from those that need to be done for the filesystems. Chinner said that all filesystems will need an audit to determine what they need to support post-2038 timestamps, so this decoupling is useful:
Filesystems that have intermediate timestamp formats such as Lustre, NFS, CIFS, etc will need conversion at the vfs/filesystem entry points, and their internals will remain unchanged. Fixing the internals is outside the scope of the VFS change - the 64 bit VFS inode support stops at the VFS inode/filesystem boundary.
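A minimal sketch of the kind of boundary helper being described might look like the following; the function names and the clamping policy are assumptions for illustration, not code from the discussion.

#include <linux/kernel.h>
#include <linux/time64.h>

/* Initially a pure pass-through; once the VFS switches its in-memory inode
 * timestamps to 64-bit seconds, the widening happens here rather than all
 * over the filesystem's internals. */
static inline struct timespec64 myfs_time_to_vfs(struct timespec ts)
{
        struct timespec64 out = {
                .tv_sec  = ts.tv_sec,
                .tv_nsec = ts.tv_nsec,
        };
        return out;
}

static inline struct timespec myfs_time_from_vfs(struct timespec64 ts)
{
        struct timespec out;

        /* Clamp to what a 32-bit on-disk format can hold; a real filesystem
         * would use the limits of its own format. */
        out.tv_sec  = (time_t)clamp_t(s64, ts.tv_sec, S32_MIN, S32_MAX);
        out.tv_nsec = ts.tv_nsec;
        return out;
}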
But Dinamani and Bergmann are leery of an
enormous patch set that touches lots of code all over the place. It is
both "ugly and fragile
" as Bergmann put it, though he suggested at least
investigating that path. Both he and Dinamani have made
various attempts to find the right approach and they have both run into
various walls. Chinner's suggestion of how
to handle a particular case for the FAT filesystem is not workable, they
said, and Bergmann elaborated on the reasons why.
So there seems to be an impasse at this point. Dinamani said that she would try to convert an example filesystem using the two different methods for comparison purposes. Hopefully that will help point the way toward a solution that leads to as little disruption as possible. A change of this sort is always going to lead to some upheaval, but finding a way to reduce it as much as possible will be good. So far, Dinamani and Bergmann haven't quite found the right approach—or haven't yet convinced Chinner—but it is good to see that kernel developers are thinking about this.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Apt over Tor
Debian users can now download packages and updates over the Tor network—a feature that, among other benefits, guards against attackers attempting to locate machines with unpatched security vulnerabilities. Developers have yet to integrate Tor support into every package-management tool, much less push for the service to become the default, but in light of the advantages that using Tor as a transport provides, it would not be surprising to see further integration work in the months to come.
The principal rationale for employing Tor for package and update downloads is straightforward. If an eavesdropper observes an HTTP request for an update from a Debian mirror—specifically, an update that patches a vulnerability—coming from a particular IP address, then they know with some degree of certainty that the machine at that IP is currently unpatched, and they can attempt an attack before the update is installed. There may be only a short window of time to capitalize on this knowledge, but if there are a lot of updates being downloaded, it may still be plenty.
At DebConf15, Tor's Jacob Appelbaum made the case that this sort of attack was grounds for always downloading updates over HTTPS. But Richard Hartmann pointed out that even a TLS-protected download could leak enough information for an attacker to determine that a specific update is being downloaded. For one thing, the updates are publicly accessible, so the attacker can easily learn the download sizes of any updates of interest. Furthermore, security updates are hosted at a different server, security.debian.org, so all downloads coming from that address are potential red flags for observant attackers.
Hartmann proposed setting up a Tor hidden service to make the Debian archive accessible through Tor. Peter Palfrader then took him up on the idea and established such a service at http://vwakviie2ienjx6t.onion/. Hartmann's post reports that he successfully used the torify wrapper script to access the service over a SOCKS proxy using the standard Apt. He also noted that several Debian project members had discussed plans to Tor-enable not just Apt, but the Debian bug-reporting tools, package-upload tools, and other components of the package-management lifecycle.
Hartmann's usage of torify to access the service seemed to puzzle some commenters at his blog, since it amounted to manually tunneling a normal Apt connection through SOCKS. If done regularly, that approach could become tiresome, and there was already a tool available that could automatically direct all Apt operations over a Tor circuit.
That somewhat more sophisticated solution is Tim Retout's apt-transport-tor package. First started in mid-2014, apt-transport-tor is a pluggable transport layer for Apt; one needs only to install the package and change any URLs in a sources.list to use the prefix tor+http:// or tor+https:// .
Perhaps it is unknown to some, but Apt has supported pluggable transports since the 0.7.0 release in January 2007. The framework was first used to implement HTTPS support (with the Apt team maintaining the apt-transport-https package). The first transport from outside developers was implemented in the apt-transport-debtorrent package added with Apt 0.7.25 in December 2009. There are a handful of other apt-transport options, including one for retrieving packages from Amazon S3 storage and one for accessing Spacewalk configuration servers. In January 2016, Petter Reinholdtsen posted instructions on his blog for how to configure Apt to use Palfrader's hidden service. Currently, only Debian "Jessie" and newer (i.e., the unstable or experimental suites) are supported.
Strictly speaking, the hidden-service site is not required in order to benefit from using Tor as a transport mechanism. A user could connect to a Debian mirror over a Tor circuit and the user's endpoint would be effectively protected against discovery by attackers. But traffic between the Tor exit node and the mirror could still be monitored, and a powerful attacker could, theoretically, use timing or other side-channel attacks to infer that an update of interest was being downloaded.
The hidden service enables package downloads to be performed entirely within the Tor network, eliminating that risk. In his blog post, Reinholdtsen observed that there may be other benefits to using Tor to fetch package updates, such as the fact that Apt will periodically generate Tor traffic from the machine by fetching package updates and checking for update information, thus making it incrementally more difficult for an attacker to determine that the user is doing something else with Tor (e.g., accessing a censored web site).
Reinholdtsen also pointed out that there are several Apt-related tools that currently do not work with the Tor transport mechanism—most notably apt-file, which is used to find the available packages that include a given filename. The apt-file package in Debian experimental, however, has been updated to support apt-transport-tor.
Hartmann also posted a follow-up where he addressed some common questions. Perhaps the biggest concern moving forward is that using Apt does leak some potentially private information about the user's machine: the architecture, Debian release, software suite, and the names and versions of packages are all transmitted to the server. That data is required for Apt to fetch the required package files and updates, which means that users who are particularly concerned about maintaining privacy will want to employ the Tor transport and specify HTTPS. Accessing the hidden-service endpoint of the Debian mirror and not simply tunneling the traffic over Tor may provide additional protections against the inference of what is being downloaded if there are detectable differences in the size of updates for different suites or architectures.
But the Tor hidden service set up by Palfrader remains experimental, and converting it to a stable, permanent offering may take some time—as Hartmann noted in his second post, the hidden-service endpoint will need to be load-balanced in order to handle a significant number of users. The project has gained a following within Debian, though. Following Palfrader's mirror, Jonathan "noodles" McDowell added a hidden-service endpoint to his own Debian mirror.
Subsequently, a project page was set up on the Debian wiki, which notes other services that are candidates for "Torification" and the infrastructure tasks to go along with them—such as tools to let the Debian System Administration team manage the hidden-service key pairs.
It is too early to predict how or when Debian will make Tor hidden-service access to core services a standard feature but, if it does so, Debian would be the first major distribution to implement that level of Tor support. As Reinholdtsen pointed out in his post, the FreedomBox distribution already uses apt-transport-tor if Tor support is enabled in the configuration. But FreedomBox is considerably more limited in its scope. A successful implementation of Tor services by Debian would, no doubt, encourage several other distributions to take a hard look at providing similar features to their users.
Brief items
Distribution quotes of the week
Distribution News
Ubuntu family
Ubuntu 15.04 reaches End of Life
Ubuntu 15.04 (Vivid Vervet) will reach its end of life on February 4, 2016. The supported upgrade path from Ubuntu 15.04 is via Ubuntu 15.10.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 644 (January 18)
- 5 things in Fedora (January 15)
- openSUSE Tumbleweed – Review of the week 2016/2 (January 15)
- Ubuntu Weekly Newsletter, Issue 450 (January 17)
Deepin Takes Linux to New Depths (LinuxInsider)
LinuxInsider takes a look at Depth OS from Deepin. "The Deepin/Depth OS distro remains something totally new. It was an Ubuntu-based distribution built around its own desktop environment based on the Qt 5 toolkit. With the latest release, Qt powers the desktop to replace the previous HTML5 + WebKit implementation. Mutter is now used as the window manager. Another change is the Linux 4.2 kernel. Systemd has replaced Upstart, Bash is now the default shell rather than Zsh, and GCC 5.3.1 is the base compiler. Version 15 switches from its roots as an Ubuntu-based distro to the Debian Unstable Channel. For most users, that presents little or no consequence. Nothing visible in the desktop design or the software repository resembles any connection to the Ubuntu infrastructure. The distro's developers built their own ecosystem of homegrown applications. Applications such as the Deepin Software Center, Deepin Music Player and Deepin Media Player contribute to an operating system tailored to the average user."
Solus Project: No Longer Just A Chrome OS Alternative
Linux.com reviews Solus. "Months ago, I covered Solus Project as an alternative to Chrome OS. It made sense, as the Budgie desktop environment resembled the Chrome OS UI and the system integrated well with the user’s Google cloud account. Even at that early iteration, Solus was a solid distribution that made Linux incredibly easy to use. Fast forward to now and Solus no longer exists as a shadow of Chrome OS. Solus is a distribution that lives somewhere in the intersection of the GNOME, Chrome OS, and Xfce Venn diagram. It is simultaneously familiar and brand new. With that “brand new familiarity” comes an ease of introduction you won’t find with other 1.0 distributions sporting a new desktop environment."
Page editor: Rebecca Sobol
Development
An early peek at Krita 3.0
The Krita project has made the first "pre-alpha" builds of its upcoming 3.0 release available for download. The release of 3.0 will mark a significant milestone in the project's history, bringing a new set of features, a port to Qt 5, and a commitment to supporting a new platform.
The alpha release was announced on January 17. For now, the post
contains direct links to the binary packages, which are not available
through the normal Krita download channels. As maintainer Boudewijn
Rempt said in the post: "Right now, Krita is in the 'may eat your cat'-stage...". There are standard Mac OS X and
64-bit Windows installers provided. The Linux builds are provided as AppImage packages. The AppImage
format includes an ISO 9660 filesystem that bundles all necessary
files to launch the application, plus a small launcher binary that
mounts the filesystem with FUSE and runs the application within. The
upshot is that, like the Windows and OS X downloads, the Linux
package should be portable (or at least portable enough to run on
almost any desktop Linux distribution).
Improved OS X support is one of the key goals for Krita 3.0;
Rempt said in the announcement: "We fully intend to make Krita 3.0 as supported on OSX as on Windows and Linux". Since
OS X is disproportionately popular within the digital-artist
community, emphasizing support for the platform will, hopefully, gain
Krita quite a few new users.
Whatever operating system one uses, though, the new release packs in a lot of features. Under the hood, the big news is that Krita 3.0 marks the completion of a port from Qt 4 to Qt 5. Bumping the Qt version used was not a minor undertaking; it entailed substantial rewrites of the graphics-display subsystem and the tablet-computer support.
It also meant reworking the major new user-visible feature: animation support. The animation code was added in 2015 (two betas based on the 2.9 series were released: one in November and another in December), with the work supported by a Kickstarter campaign.
Krita's animation mode adds a timeline to the bottom edge of the screen and lets the user draw or paint every frame with the full complement of Krita's "natural-media" painting-and-drawing tools. The user can set the frame rate and there is a set of playback controls to advance and rewind through the timeline. However, Krita does not save or export animations as video files. Instead, it exports each frame of a document as a separate still image—automatically numbered to preserve the correct sequence—for use later in a video-editing application.
Because free-software animation programs are few and far between, Krita's animation support is a significant addition. The other active open-source 2D-animation tools (Synfig Studio and Tupi) may offer more control over timeline manipulation and animation features (e.g., effects, transitions, loops, and so on), but they cannot boast Krita's variety of painting tools, filters, and image-manipulation options.
Animation in Krita is rather easy to get started with. From a new or opened document, one only needs to click on the "Animation" option in the workspace menu in the top-right corner. That pops open the timeline as well as the animation controls. Down in the timeline, one then needs to right-click to create a new frame (which will be frame zero to start with). Subsequently, there is no need to manually create additional frames: just click on a square in the timeline and begin drawing.
The user can drag frames in the timeline to rearrange them, select multiple frames (by holding down the Control key) to draw on several frames simultaneously, and delete frames at will. It is about as intuitive as one could imagine. Furthermore, each frame can contain multiple layers (just like any other Krita document) that can be hidden or shown on screen as needed. There is also support for advancing or rewinding through the timeline using only the keyboard, and it is possible to open a set of several image files as an animation project (which is no doubt useful for those who have already worked on their animations in Krita).
Another nice touch is support for "onion skinning," a visualization that shows a faint rendering of previous and subsequent frames beneath the current one to help the user keep everything lined up. It is similar to drawing in a translucent notebook. Krita also lets the user tint previous and subsequent frames in different colors (red and green, respectively, by default). The new release also adds a feature called "drop-frame support" that drops frames from playback if the graphics card cannot keep up; that allows playback to keep time rather than slowing to a crawl. Statistics are available to let the user know how many frames are being dropped.
But animation support is not the only new feature to be found in the new release. Working with multi-layer documents is now easier. It is possible to select multiple layers and operate on them all together—such as duplicating several layers at once, merging several layers, or rearranging several layers in the layer stack like a single unit. It is even possible to edit certain aspects of a selected set of layers all at once: opacity, visibility, lock status, active image channels, and blend mode can all be changed for multiple layers together.
There are a couple of new features likely to be of interest to professional users. The first is instant-preview mode, which makes Krita more responsive when editing extremely high-resolution images. With instant-preview mode enabled, the canvas shown on screen will show a low-resolution approximation of the image, rather than Krita attempting to redraw and rescale the full-resolution image with every stroke. The second is a time tracker that records how much time is actively spent working on an image. The timer pauses when it detects no activity for 60 seconds, and the resolution is only at the one-minute level, but it should still be useful for illustrators who need to track their time on a per-project basis.
So far, there has been no schedule announced for the final release of this new Krita edition, but it is shaping up to be a release that will attract significant attention for the project—and perhaps many new users from the animation and OS X communities.
Brief items
Quote of the week
git-annex v6 released
Version 6 of git-annex has been released. The
announcement highlights the addition of one major new feature,
"support for unlocked large files that can be edited as usual
and committed using regular git commands." The feature makes
use of Git's smudge and clean filters, and is said to result in 50%
less disk usage compared to the git-lfs extension.
MyPaint 1.2.0 is available
Version 1.2 of the MyPaint natural-media-painting application has been released. Changes include new tools for smooth-stroke inking and flood filling, automatic file backup and recovery, the ability to group layers, and GTK+3 support. Ubuntu packages are already available through the project's official testing PPA; builds will follow shortly for other distributions and platforms. In the meantime, source bundles are provided at the project's GitHub page.
Clojure 1.8 released
Clojure version 1.8 is now available. New features include additional string functions, socket servers, and direct linking, which can speed up execution by inserting static function invocations in place of references.
PEP 512: migrating hg.python.org to GitHub
Python Enhancement Proposal (PEP) 512 is now available. The PEP "outlines the steps required to migrate Python's development
process from Mercurial as hosted at
hg.python.org to Git on GitHub. Meeting
the minimum goals of this PEP should allow for the development
process of Python to be as productive as it currently is, and meeting
its extended goals should improve it." The proposal is a lengthy one, encompassing the migration steps for six separate repositories.
CyanogenMod shutting down WhisperPush
The CyanogenMod developers have announced that they will be shutting down the WhisperPush secure messaging system (covered here in 2013). "We’ve ultimately made the decision that we will no longer be supporting WhisperPush functionality directly within CyanogenMod. Further, WhisperPush services will be end-of-lifed beginning Feb 1st 2016. As this is a server side implementation, all branches of CM from CM10.2 and forward will be affected."
Newsletters and articles
Development newsletters from the past week
- LLVM Weekly (January 18)
- OCaml Weekly News (January 19)
- Perl Weekly (January 18)
- PostgreSQL Weekly News (January 17)
- Python Weekly (January 14)
- Ruby Weekly (January 14)
- This Week in Rust (January 18)
- Wikimedia Tech News (January 18)
The State Of Meteor Part 1: What Went Wrong
Back in 2014, LWN looked at the Meteor web application framework. Now, Meteor's developers are contemplating why it failed to take over the world. "New developers love how easy it is to get started with it, but can get discouraged when they start struggling with more complex apps. And purely from a financial standpoint, it’s hard to build a sustainable business on the back of new developers hacking on smaller apps. On the other hand, many of the more experienced developers who’d be able to handle (and help solve) Meteor’s trickier challenges are turned off by its all-in-one approach, and never even give it a chance in the first place." They promise the imminent unveiling of a new approach that is going to address these problems.
How conference organizers can create better attendee experiences (Opensource.com)
Over at Opensource.com, VM (Vicky) Brasseur and Josh Berkus give advice to conference organizers on how they can improve their conferences for attendees. There are ten different areas they address, including "Clear communications", "Have a Code of Conduct (and train staff on what that means)", "Fix your darn badges", and "Working Wi-Fi (here be dragons)". "When asked, attendees have a lot of strong opinions on the subject of conference badges, and the majority of those opinions are not positive. Badges serve multiple purposes, but the single most important one is allowing attendees to identify each other. Yet, despite that, few conference badges do a good job of performing this one deceptively simple duty."
Hearn: The resolution of the Bitcoin experiment
Core Bitcoin developer Mike Hearn writes that the Bitcoin experiment has failed. "In a company, someone who did not share the goals of the organisation would be dealt with in a simple way: by firing him. But Bitcoin Core is an open source project, not a company. Once the 5 developers with commit access to the code had been chosen and Gavin [Andresen] had decided he did not want to be the leader, there was no procedure in place to ever remove one. And there was no interview or screening process to ensure they actually agreed with the project’s goals." If Bitcoin is indeed failing as the article says, it's failing due to project governance issues rather than technical or regulatory problems.
Mycroft: Linux’s Own AI (Linux.com)
Swapnil Bhartiya takes a look at Mycroft AI and talks with CTO Ryan Sipes, on Linux.com. "Earlier this month, the developers released the Adapt intent parser as open source. When many people look at Mycroft, they think voice recognition is the important piece, but the brain of Mycroft is the Adapt intent. It takes natural language, analyzes the ultimate sentence, and then decides what action needs to be taken. That means when someone says “turn the lights off in the conference room,” Adapt grabs the intent “turn off” and identifies the entity as “conference room.” So, it makes a decision and then reaches out to whatever device is controlling the lights in the conference rooms and tells it to turn them off. That’s complex work. And, the Mycroft developers just open sourced the biggest and most powerful piece of their software."
Wingo: Unboxing in Guile
Here is a long and detailed post from Andy Wingo on how he improved numerical performance in the Guile language by carefully removing runtime type information ("unboxing"). "If Guile did native compilation, it would always be a win to unbox any integer operation, if only because you would avoid polymorphism or any other potential side exit. For bignums that are within the unboxable range, the considerations are similar to the floating-point case: allocation costs dominate, so unboxing is almost always a win, provided that you avoid double-boxing. Eliminating one allocation can pay off a lot of instruction dispatch."
Page editor: Nathan Willis
Announcements
Brief items
Linux Foundation and Goodwill team up to provide free Linux training in Central Texas
The Linux Foundation and Goodwill are working together to bring free Linux training and certification to adult students in Texas. "The scholarship program will begin with The Goodwill Excel Center and the Goodwill Career and Technical Academy in Central Texas and is expected to expand to other communities in the future. The Goodwill Excel Center is the first free public charter high school for adults in Texas. Students age 17-50 have the opportunity to earn their high school diploma, complete an in-demand professional certification and begin post-secondary education. The Extended Learning Linux Foundation Scholarship Program created by Linux Foundation and Goodwill includes free access to the Intro to Linux (LFS101x) and Essentials of System Administration (LFS201) courses, and the Linux Foundation Certified System Administrator exam at no cost. Hundreds of disadvantaged individuals from underserved communities and a variety of backgrounds are expected to enroll in the new program in the year ahead."
Articles of interest
Dutch consumer group sues Samsung over Android updates (OSNews)
OSNews reports that the Dutch consumer protection advocacy agency Consumentenbond has sued Samsung, demanding updates for its Android phones. "The Consumentenbond had been in talks with Samsung about this issue for a while now, but no positive outcome was reached, and as such, they saw no other option but to file suit. The Consumentenbond is demanding that Samsung provides two years of updates for all its Android devices, with the two-year period starting not at the date of market introduction of the device, but at the date of sale. This means that devices introduced one or even more years ago that are still being sold should still get two years' worth of updates starting today." (Thanks to Paolo Bonzini)
Garrett: Linux Foundation quietly drops community representation
On his blog, Matthew Garrett has noted that the Linux Foundation (LF) has dropped the community representatives to its board that were elected by the individual LF members. "The by-laws were amended to drop the clause that permitted individual members to elect any directors. Section 3.3(a) now says that no affiliate members may be involved in the election of directors, and section 5.3(d) still permits at-large directors but does not require them[2]. The old version of the bylaws are here - the only non-whitespace differences are in sections 3.3(a) and 5.3(d). These changes all happened shortly after Karen Sandler [executive director of the Software Freedom Conservancy] announced that she planned to stand for the Linux Foundation board during a presentation last September [YouTube link]. A short time later, the "Individual membership" program was quietly renamed to the "Individual supporter" program and the promised benefit of being allowed to stand for and participate in board elections was dropped (compare the old page to the new one)." Garrett speculates that the GPL enforcement suit that the Software Freedom Conservancy is funding against VMware, which is an LF member, is ultimately behind the move. He also notes (the [2] above) that there is still a community representative from the Technical Advisory Board (TAB) that sits on the LF board.
New Books
Second Edition of the BIND DNS Administration Reference
Reed Media Services has published the second edition of the definitive "BIND DNS Administration Reference" book covering name server operations and DNS configuration using BIND.
Calls for Presentations
Call for Papers, PostgreSQL and PostGIS, Session #8
The 8th PostgreSQL Session will be held on April 6, 2016, in Lyon, France. The call for papers ends February 29. "Talks can be either: a case study, a Proof of Concept, a tutorial, a benchmark, a presentation of a new feature, etc. Of course, we're open to propositions on any other migration related topics (monitoring, hardware, replication, etc.) !"
2016 Linux Plumbers Conference Call for Microconferences
The 2016 Linux Plumbers Conference (LPC) has announced its Call for Microconferences. LPC will be held in Santa Fe, NM, USA on November 2-4, co-located with the Kernel Summit. "A microconference is a collection of collaborative sessions focused on problems in a particular area of the Linux plumbing, which includes the kernel, libraries, utilities, UI, and so forth, but can also focus on cross-cutting concerns such as security, scaling, energy efficiency, or a particular use case. Good microconferences result in solutions to these problems and concerns, while the best microconferences result in patches that implement those solutions."
CFP Deadlines: January 21, 2016 to March 21, 2016
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| January 22 | May 2-May 5 | FOSS4G North America | Raleigh, NC, USA |
| January 22 | January 22-January 23 | XenProject - Cloud Innovators Forum | Pasadena, CA, USA |
| January 24 | March 14-March 18 | CeBIT 2016 Open Source Forum | Hannover, Germany |
| January 24 | March 11-March 13 | PyCon SK 2016 | Bratislava, Slovakia |
| January 29 | April 20-April 21 | Vault 2016 | Raleigh, NC, USA |
| February 1 | April 25-April 29 | OpenStack Summit | Austin, TX, USA |
| February 1 | June 22-June 24 | USENIX Annual Technical Conference | Denver, CO, USA |
| February 1 | April 4-April 8 | OpenFabrics Alliance Workshop | Monterey, CA, USA |
| February 2 | March 29-March 31 | Collaboration Summit | Lake Tahoe, CA, USA |
| February 5 | April 4-April 6 | Embedded Linux Conference | San Diego, CA, USA |
| February 5 | April 4-April 6 | OpenIoT Summit | San Diego, CA, USA |
| February 6 | February 12-February 14 | Linux Vacation / Eastern Europe Winter Edition 2016 | Minsk, Belarus |
| February 8 | April 7-April 8 | SRECon16 | Santa Clara, CA, USA |
| February 10 | April 23-April 24 | LinuxFest Northwest | Bellingham, WA, USA |
| February 12 | May 9-May 13 | ApacheCon North America | Vancouver, Canada |
| February 15 | March 11-March 13 | Zimowisko Linuksowe TLUG | Puck, Poland |
| February 23 | April 9-April 10 | OSS Weekend | Bratislava, Slovakia |
| February 28 | April 6 | PostgreSQL and PostGIS, Session #8 | Lyon, France |
| February 28 | May 10-May 12 | Samba eXPerience 2016 | Berlin, Germany |
| February 28 | April 18-April 19 | Linux Storage, Filesystem & Memory Management Summit | Raleigh, NC, USA |
| February 28 | June 21-June 22 | Deutsche OpenStack Tage | Köln, Germany |
| February 28 | June 24-June 25 | Hong Kong Open Source Conference 2016 | Hong Kong, Hong Kong |
| March 1 | April 23 | DevCrowd 2016 | Szczecin, Poland |
| March 6 | July 17-July 24 | EuroPython 2016 | Bilbao, Spain |
| March 9 | June 1-June 2 | Apache MesosCon | Denver, CO, USA |
| March 10 | May 14-May 15 | Open Source Conference Albania | Tirana, Albania |
| March 12 | April 26 | Open Source Day 2016 | Warsaw, Poland |
| March 15 | April 28-May 1 | Mini-DebCamp & DebConf | Vienna, Austria |
| March 20 | April 28-April 30 | Linuxwochen Wien 2016 | Vienna, Austria |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
SCALE 14X: Event Updates
SCALE runs January 21-24 in Pasadena, CA. There will be a Game Night, a weakest geek contest, ham radio exams, and more.
Events: January 21, 2016 to March 21, 2016
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| January 20-January 22 | O'Reilly Design Conference 2016 | San Francisco, CA, USA |
| January 21-January 22 | Ubuntu Summit | Pasadena, CA, USA |
| January 21-January 24 | SCALE 14x - Southern California Linux Expo | Pasadena, CA, USA |
| January 22-January 23 | XenProject - Cloud Innovators Forum | Pasadena, CA, USA |
| January 25 | Richard Stallman - "A Free Digital Society" | Stockholm, Sweden |
| January 30-January 31 | Free and Open Source Developers Meeting | Brussels, Belgium |
| February 1 | Sysadmin Miniconf | Geelong, Australia |
| February 1 | MINIXCon 2016 | Amsterdam, Netherlands |
| February 1-February 5 | linux.conf.au | Geelong, Australia |
| February 5-February 7 | DevConf.cz 2016 | Brno, Czech Republic |
| February 10 | The Block Chain Conference | San Francisco, CA, USA |
| February 10-February 12 | netdev 1.1 | Seville, Spain |
| February 12-February 14 | Linux Vacation / Eastern Europe Winter Edition 2016 | Minsk, Belarus |
| February 24-February 25 | AGL Member's Meeting | Tokyo, Japan |
| February 27 | Open Source Days | Copenhagen, Denmark |
| March 1-March 6 | Internet Freedom Festival | Valencia, Spain |
| March 1 | Icinga Camp Berlin | Berlin, Germany |
| March 8-March 10 | Fluent 2016 | San Francisco, CA, USA |
| March 9-March 11 | 18th German Perl Workshop | Nürnberg, Germany |
| March 10-March 12 | Studencki Festiwal Informatyczny (Students' Computer Science Festival) | Cracow, Poland |
| March 11-March 13 | Zimowisko Linuksowe TLUG | Puck, Poland |
| March 11-March 13 | PyCon SK 2016 | Bratislava, Slovakia |
| March 14-March 17 | Open Networking Summit | Santa Clara, CA, USA |
| March 14-March 18 | CeBIT 2016 Open Source Forum | Hannover, Germany |
| March 16-March 17 | Great Wide Open | Atlanta, GA, USA |
| March 18-March 20 | FOSSASIA 2016 Singapore | Singapore, Singapore |
| March 19-March 20 | Chemnitzer Linux Tage 2016 | Chemnitz, Germany |
| March 19-March 20 | LibrePlanet | Boston, MA, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
