
LWN.net Weekly Edition for February 9, 2017

Things that won't change in Python

By Jake Edge
February 8, 2017

A lengthy and strongly opinionated post about Python features to the python-ideas mailing list garnered various responses there, from some agreement to strong disagreement to calling it "trolling", but it may also lead the Python community to better define what Python is. Trolling seems a somewhat unfair characterization, but Simon Lovell's "Python Reviewed" post did call out some of the fundamental attributes of the language and made some value judgments that were seen as either coming from ignorance of the language or simply as opinions that were stated as facts in a brusque way. The thread eventually led to the creation of a document meant to help head off this kind of thread in the future.

Good, bad, and ugly

Lovell's message started off with a short list of "The Good", but quickly moved into a list of "The Bad" that included more explanation of the features he disliked. It included entries for things he was unhappy with: the colons at the end of if, for, and similar statements (which he deemed unnecessary), the lack of an end statement for blocks (less readable), and no do-while loop. He also thought that the else clause for loops should have a different name (and suggested whenFalse), that print() should not have been changed to a function in Python 3 ("adds no positive value that I can see"), etc. As might be guessed, he ended with an entry for "The Ugly". It complained about non-zero integer values being treated as "true" ("crapulence from C" that violates the "explicit is better than implicit" principle from PEP 20).

Some of the entries in the good and bad lists were either incorrect or misunderstandings of how to use Python, which got Lovell off on the wrong foot with many python-ideas readers. But his tone also put some off. As Steven D'Aprano put it: "As a newcomer to this community, and apparently the language as well, do you understand how obnoxious and arrogant it comes across for you to declare what is and isn't 'good' and 'bad' about the language [...]". He warned Lovell that the tone of his message might make responses "blunt or even brusque", but he did reply to the post at some length.

For example, D'Aprano pointed him at the FAQ entry that explains the reasons behind the colons for if, et al. (readability, essentially). He also disagreed strongly with the need for an end statement, but agreed that else for loops was misnamed (though it is too late to change that now). Meanwhile, D'Aprano said that he preferred treating non-zero numbers as true values.

Chris Angelico also thought that the way Python does truth testing is both helpful and non-ugly:

In Python, *everything* is either true or false. Anything that represents "something" is true, and anything that represents "nothing" is false. An empty list is false, but a list with items in it is true.
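Angelico's rule is easy to check at the interpreter; a small illustration (mine, not from the thread):

```python
# Each object's truth value follows the "something vs. nothing" rule:
# non-empty containers and non-zero numbers are true; empty containers,
# zero, and None are false.
values = [0, 1, -1, "", "text", [], [0], {}, None]
for v in values:
    print(repr(v).ljust(8), "->", bool(v))
```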

D'Aprano and Angelico both agreed with the decision to change print() to a function. Angelico said that it "adds heaps of positive value to a lot of people", while D'Aprano was more specific about the advantages:

Consistency: print doesn't need to be a special cased statement. It does nothing special that a function can't do. So why make it a statement?

As a function, it can be passed around as a first class value, used as a callback, monkey-patched or shadowed or mocked as needed. None of these things are possible with a statement.
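A quick sketch (mine, not from the thread) of what that first-class status buys you: print() can be handed to other code as a callback, or shadowed to capture output in a test:

```python
# Passed as a callback, like any other function:
def apply_to_each(items, callback):
    for item in items:
        callback(item)

apply_to_each(["spam", "eggs"], print)

# Shadowed, e.g. to capture output during a test:
captured = []
def fake_print(*args, **kwargs):
    captured.append(" ".join(str(a) for a in args))

real_print = print
print = fake_print      # shadow the builtin at module level
print("hello", "world")
print = real_print      # restore the real builtin

assert captured == ["hello world"]
```

Neither trick is possible with the Python 2 print statement, which is exactly D'Aprano's point.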

Overall, the thread proceeded like most in Python mailing lists. Folks were generally helpful even when they were resolute about disagreeing with Lovell. On the other hand, Lovell didn't seem to quite pick up the vibe of the list. For example, responses like "I don't really see how that can be argued" or "I may be arrogant but I can't take it seriously" did not go over all that well.

Trolling?

Eventually, Guido van Rossum started a new thread entitled "How to respond to trolling". In it, he suggested that responding to Lovell's posts was counterproductive: "I think a much more effective response would have been a resounding silence." But several thought that was not entirely fair. Ned Batchelder said:

I don't like to use the term "trolling" except for people who are trying to annoy people. I think the recent thread was misguided, but not malicious. I do agree that the thread should have ended at "unless you are seriously proposing a change to the language, this is not the right list."

There were a number of suggestions in the original thread that Lovell take the topic to python-list, which is for more general Python discussions; since Lovell wasn't really suggesting changes to the language (at least formally), some thought the discussion belonged there instead. Nick Timkovich, though, wasn't sure that python-ideas would have been the right venue even if Lovell had formally proposed the changes: "If you're proposing throwing half of Python's current syntax in the bin, this isn't the right list either."

However, Van Rossum was adamant that Lovell's post did not deserve a response:

Whether the intent was to annoy or just to provoke, the effect was dozens of messages with people falling over each other trying to engage the OP, who clearly was ignorant of most language design issues and uninterested in learning, and threw some insults in for good measure. The respondents should have known better.

But D'Aprano saw things differently. What Van Rossum had suggested was effectively "shunning" Lovell, which is "a particularly nasty form of passive-aggression, as the person being shunned doesn't even get any hint as to what they have done to bring it on". He also pointed out that it may not have been clear to Lovell that he was crossing a line, because it is an unwritten one:

Giving a newcomer the Silent Treatment because they've questioned some undocumented set of features not open to change is not Open, Considerate or Respectful (the CoC). Even if their ideas are ignorant or ill-thought out, we must give them the benefit of the doubt and assume they are making their comments in good faith rather than trolling.

Things that won't change

In an attempt to rectify the "undocumented" piece, D'Aprano proposed an informational PEP called "Things that won't change in Python". In it, he listed a number of aspects of the language that are known to be set in stone, but that perhaps those outside the community are not aware of. The rationale, as described in the PEP, is to reduce noise on python-ideas and other lists by heading off suggestions that have no chance of being accepted "because the benefit is too little, the cost of changing the language (including backwards compatibility) is too high, or simply because it goes against the design preferred by the BDFL".

Some of the things listed in the PEP are fairly obvious (Python 3 will not be abandoned, there will be no Python 2.8), some were at least partly in response to Lovell's post (colons after if and the like, no end statement, print() will remain a function), and others came from recurring suggestions to the lists (no braces around blocks, significant indentation will remain, the >>> interactive prompt will stay as the default). Each entry comes with a bit of justification, often with links to a FAQ entry or other documentation.

In general the response was positive. There was naturally some wordsmithing (or bikeshedding) over the name and whether it should be a PEP or something else, but there was little disagreement over the listed choices—unsurprisingly. Van Rossum said that he had not followed the discussion closely, but that it made sense to delineate some unchangeable features of the language:

People who come in with enthusiastic proposals to fix some pet peeve usually don't have the experience needed to appreciate the difficulty in maintaining backwards compatibility. (A really weird disconnect from reality happens when this is mentioned in the same breath as "please fix the Python 2 vs. 3 problem". :-)

I would also guess that for things that are actually controversial (meaning some people hate a feature that other people love), it's much easier to explain why it's too late to change than it is to provide an objective argument for why the status quo is better. Often the status quo is not better per se, it's just better because it's the status quo.

Lovell did reply to Van Rossum's "trolling" message, but was still on the offensive ("More than half of what I suggested could have and should be implemented."). He also continued to insist that Python's "truthiness" is ugly: "Calling truthiness of non boolean data 'Ugly' is an insult? It is ugly." Brett Cannon, who is one of the list administrators, tried one more time to explain why Lovell was not getting the response he was seemingly looking for:

It's this sort of attitude which puts people off. It is your opinion that it should be implemented, not a matter of fact as you have stated it. Just because something could be done doesn't mean it should be done. You're allowed to have your opinion, but stating it as anything but your opinion does not engender anyone to your opinion.

That led to a bit of a digression on the term "warts", which is sometimes used in the Python community to describe some missteps and misfeatures that exist in the language. It was mostly agreed that warts are "ugly" at some level, but it is a self-applied term, which makes it a bit more acceptable. And Van Rossum is not even sure they are missteps, exactly:

I believe that most of the warts are not even design missteps -- they are emergent misfeatures, meaning nobody could have predicted how things would work out.

In the end, the situation was handled fairly well, as seems to usually be the case in the Python community. One can imagine the much ruder response that a critical post like that might receive in other free-software communities. So far, the PEP doesn't seem to have gone anywhere since it was discussed in mid-January, but the exercise and discussion were useful; if nothing else, it can serve as a place to point the next person who comes along with "great ideas" that will never become part of Python.


A rift in the NTP world

February 8, 2017

This article was contributed by Bruce Byfield

The failure of the Network Time Protocol (NTP) project could be catastrophic. However, what few have noticed is that the attempts to prevent that catastrophe may have created entirely new challenges.

NTP is an Internet Engineering Task Force (IETF) standard, handled by its NTP working group. As Tom Yates described it in an LWN article:

[NTP] quietly and without much fuss performs the critical Internet function of knowing the correct time. Using it, a computer with imperfect communications links may join a distributed community of servers, each of which is either directly attached to a reliable clock, or is trying to best synchronize its clock to one or more better-synchronized members of the community.

First designed in 1985 by David L. Mills, the protocol has been coordinated in recent years by the Network Time Foundation. Today, the foundation develops a number of related projects, including Ntimed, PTPd, Linux PTPd, RADClock, and the General Timestamp API. For most of this time, the primary manager of the project has been Harlan Stenn, who has volunteered thousands of hours at the cost of his own consulting business while his NTP work is only intermittently funded.

Several years ago, the project's inadequate funding became known in the media and Stenn received partial funding from the Linux Foundation's Core Infrastructure Initiative, which was started after the discovery of how the minimal resources of the OpenSSL project left systems vulnerable to the Heartbleed vulnerability. Searching for additional funding, Stenn contacted the Internet Civil Engineering Institute (ICEI) and began working with two of its representatives, Eric S. Raymond and Susan Sons.

However, the collaboration did not go smoothly. According to Stenn, Raymond contributed one patch and had several others rejected; Stenn's ideas were out of sync with Raymond's and Sons's. "I spent a lot of time trying to work with Susan Sons," Stenn said in a phone interview. "Then all of a sudden I heard they have this great plan to rescue NTP. I wasn't happy with their attitude and approach, because there's a difference between rescuing and offering assistance. [Their plan was] to rescue something, quote unquote, fix it up, and turn it over to a maintenance team." Besides the fact that this plan would eliminate Stenn's role, he considered it impractical because the issue is not merely maintenance, but also continued development of the protocol. The efforts to collaborate finally collapsed when Raymond and Sons created a fork they called Network Time Protocol Secure (NTPsec).

Today, the NTP Foundation lists four main contributors, one of whom is on sabbatical, and acknowledges the contributions of 33 people in all. In addition, another seven work on related projects. By contrast, NTPsec lists seven contributors, including Sons. Although NTPsec began by using the NTP code, today neither NTP nor NTPsec shares code or patches with the other.

Both projects would probably more or less agree on the general outline of events given above. Yet it is difficult to be sure, since both Sons and Mark Atwood, NTPsec's project manager pro tem, ignored requests for an interview. However, the details of the two projects' claims could hardly be farther apart. The two projects differ on the scale and cause of NTP's current problems and on the approach that should be taken to address those problems.

The NTPsec version

Sons has publicly described the NTPsec interpretation several times, including in a presentation at OSCON and in a podcast interview with Mac Slocum of O'Reilly. In the podcast, Sons depicted NTP as a faltering project run by out-of-touch developers. According to Sons, the build system was on one server whose root password had been lost. Moreover, "the standard of the code was over sixteen years out of date in terms of C coding standards" and could not be fully assessed by modern tools. "We couldn't even guarantee reproducible results across different systems," she added.

Sons also claimed that "security patches weren't being circulated in a timely manner," taking "months to years" for release. Meanwhile, "security patches were being circulated secretly and leaked," although she did not explain how. Instead she offered an anecdote about a group of script-kiddies who knew that NTP was useful for denial of service attacks while remaining unaware of its function.

However, Sons was most concerned about the aging group of developers who maintain low-level Internet software (including NTP) in general. Most of them, she said, "are older than my father.... [and] are not always up to date on the latest techniques and security issues." Many are burning out from trying to maintain critical code while working full-time jobs, and Sons suggested that they "should be retired."

Faced with such chaos, Sons said, she soon realized that "the Internet is going to fall down if I don't fix this." When efforts to gain acceptance for her plans from Stenn and other NTP developers failed, Sons and Raymond started NTPsec: placing the revised code in a Git repository rather than the BitKeeper one used by the NTP Foundation, rewriting NTP scripts in Python rather than various other languages to make attracting new developers easier, and actively promoting the project in order to attract volunteers.

In her OSCON presentation she listed several accomplishments (Sons refers to the original NTP project as "NTP Classic"):

  • Due to a reduction in code of over 2/3 (from 227kLOC to 74kLOC), NTPsec was immune to over 50% of NTP Classic vulns BEFORE discovery in the last year.
  • NTPsec patches security vulnerabilities, on average, within less than 12 hours after discovery. Note that publication is sometimes slowed to coordinate with NTP Classic releases.
  • NTPsec's vulnerability response has pressured NTP Classic to speed up their response from months-to-years to days-to-weeks upon threats of funders pulling out.
  • [...] NTPsec is poised to replace NTP Classic in the coming year in installations around the world.

Sons's perspective on her involvement is summarized by the title of her OSCON presentation: "Saving Time." She has since become president of ICEI; she described herself in the presentation as having "moved on" and is no longer involved with NTPsec on a daily basis.

Meanwhile, a web search shows that media coverage of events accepts Sons's account while rarely attempting to hear NTP's side of the story. Cory Doctorow repeated the NTPsec version, and so did Brady Dale of the Observer, while Steven J. Vaughan-Nichols recommended NTPsec over NTP. The security site UpGuard was equally unquestioning, while CircleID, a site specializing in Internet infrastructure, only revised its coverage after complaints from representatives of NTP. In public, the NTPsec version of events has become the official one.

The NTP side

NTPsec depicted NTP as being in a state of total disorder. However, in communications with me, Stenn offered a radically different story. In Stenn's version of events, NTPsec, far from being the savior of the Internet, has misplaced priorities and its contributors lack the necessary experience to develop the protocol and keep it secure.

Stenn denied many of Sons's statements outright. For example, asked about Sons's story about losing the root password, he dismissed it as "a complete fabrication." Similarly, in response to her remarks about older tools and reproducible results across different systems, Stenn responded: "We build on many dozens of different versions of different operating systems, on a wide variety of hardware architectures [...] If there was a significant problem, why hasn't somebody reported it to us?"

Asked about how current the code is, Stenn stated that "the code has been and continues to be written to compile and run on currently available and currently used systems." Stenn conceded that some code only builds on older machines, yet pointed out that many old machines are still running. "If hardware is still in use, from our point of view there is an actual benefit to doing what we can to make sure folks can build the latest code on older machines."

As for security patches, Stenn acknowledged that NTP currently lacks the funding for a much-needed replacement of Autokey, the code that authenticates NTP servers. However, he noted that NTP released five major patches in 2016, and claimed that it was up to date as of the end of November 2016. He added, "I have no idea what she's talking about [in regard to] secret circulation of patches or leaked patches."

Moreover, Stenn questioned the accomplishments listed in Sons's presentation. In particular, the reduction of NTPsec's code base, even allowing for the relative compactness of code written in Python, becomes less impressive in light of Stenn's explanation that NTP is "the only reference implementation for NTP, and that means we have to provide complete functionality." Stenn claimed that NTPsec has "removed lots of stuff that has zero reported bugs in them, like sntp, the ntpsnmpd code, and various refclocks." Although a less-than-complete implementation might have its uses, Stenn claimed that NTPsec has gone too far in removing code, and that its bug repairs have sometimes come at the cost of reduced functionality.

In general, Stenn wondered whether, after only a couple of years' work, NTPsec contributors have the experience necessary to work with the code. His own understanding of the protocol has changed several times during his decades of work, and he warned that "if you don't understand how everything works and where it fits into place, when things get busy, horrible things can happen." The NTPsec story frequently spoke of free-software ideals such as openness, transparency, and a welcoming environment for all contributors, "but this isn't a democratic process. It's a scientific process, and this isn't somebody's turn to go ahead and take theirs at the wheel driving the bus."

Still, the NTPsec fork has caused some changes in the NTP project. After NTPsec began, the foundation felt the need to commission regular financial audits, and to continue code audits that were begun in 2006.

"Creative destruction ('let's see what happens if we throw something into the works') is a horrible way to provide core Internet structure," Stenn concluded.

One step forward, two steps back?

It is difficult for outsiders to assess which version of events is closer to the truth; probably few are competent to judge. However, assigning blame is beside the point.

What is of concern is that acceptance of the two implementations of the NTP protocol has been based largely on the most appealing story, and not on the quality of the code. NTPsec's constant analogy to the need to support OpenSSL evokes an immediate concerned response from free-software supporters, but, if Stenn is correct in his assertions, the situations of NTP and OpenSSL are not usefully comparable.

In particular, having two separate projects may be no more than a duplication of effort. Although having competing projects can sometimes benefit free software, in this case, having two warring projects risks diluting the already limited resources and support being contributed to put the protocol on a reliable footing.

Despite all the efforts of both projects, the possibility remains that the dangers to the protocol are as great today as they were before anyone attempted to address them. Already, where once only Stenn was looking for support, now Raymond is in a somewhat similar position, as NTPsec has lost its Core Infrastructure Initiative funding as of September 2016. It is all too easy to imagine the struggle for survival growing worse for everyone.

[Update: As noted in the comments, it was the scripts that were rewritten in Python for NTPsec.]


Page editor: Jonathan Corbet

Security

Reliably generating good passwords

February 8, 2017

This article was contributed by Antoine Beaupré

Passwords are used everywhere in our modern life. Between your email account and your bank card, a lot of critical security infrastructure relies on "something you know", a password. Yet there is little standard documentation on how to generate good passwords. There are some interesting possibilities for doing so; this article will look at what makes a good password and some tools that can be used to generate them.

There is growing concern that our dependence on passwords poses a fundamental security flaw. For example, passwords rely on humans, who can be coerced to reveal secret information. Furthermore, passwords are "replayable": if your password is revealed or stolen, anyone can impersonate you to get access to your most critical assets. Therefore, major organizations are trying to move away from single-password authentication. Google, for example, is enforcing two-factor authentication for its employees and is considering abandoning passwords on phones as well, although we have yet to see that controversial change implemented.

Yet passwords are still here and are likely to stick around for a long time until we figure out a better alternative. Note that in this article I use the word "password" instead of "PIN" or "passphrase", which all roughly mean the same thing: a small piece of text that users provide to prove their identity.

What makes a good password?

A "good password" may mean different things to different people. I will assert that a good password has the following properties:

  • high entropy: hard to guess for machines
  • transferable: easy to communicate for humans or transfer across various protocols for computers
  • memorable: easy to remember for humans

High entropy means that the password should be unpredictable to an attacker, for all practical purposes. It is tempting (and not uncommon) to choose a password based on something else that you know, but unfortunately those choices are likely to be guessable, no matter how "secret" you believe it is. Yes, with enough effort, an attacker can figure out your birthday, the name of your first lover, your mother's maiden name, where you were last summer, or other secrets people think they have.

The only solution here is to use a password randomly generated with enough randomness or "entropy" that brute-forcing the password will be practically infeasible. Considering that a modern off-the-shelf graphics card can guess millions of passwords per second using freely available software like hashcat, the typical requirement of "8 characters" is not considered enough anymore. With proper hardware, a powerful rig can crack such passwords offline within about a day. Even though a recent US National Institute of Standards and Technology (NIST) draft still recommends a minimum of eight characters, we now more often hear recommendations of twelve or fourteen characters.
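The arithmetic behind those recommendations is easy to sketch. The guess rate below is an assumption (a single-GPU rig; real rates vary enormously with the hash algorithm in use), but it shows why a few extra characters matter so much:

```python
import math

GUESS_RATE = 1e9                 # assumed guesses/second; varies by hash used
CHARSET = 26 + 26 + 10 + 10      # lower, upper, digits, ten symbols

def seconds_to_exhaust(length):
    """Worst-case time to try every password of the given length."""
    return CHARSET ** length / GUESS_RATE

for length in (8, 12, 14):
    bits = length * math.log2(CHARSET)
    days = seconds_to_exhaust(length) / 86400
    print(f"{length} chars: {bits:5.1f} bits, {days:.3g} days to exhaust")
```

At that rate an eight-character password from this alphabet falls within days; every additional character multiplies the search space by 72.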

A password should also be easily "transferable". Some characters, like & or !, have special meaning on the web or the shell and can wreak havoc when transferred. Certain software also has policies of refusing (or requiring!) some special characters exactly for that reason. Weird characters also make it harder for humans to communicate passwords across voice channels or different cultural backgrounds. In a more extreme example, the popular Signal software even resorted to using only digits to transfer key fingerprints; its developers noted that numbers are "easy to localize" (as opposed to words, which are language-specific) and "visually distinct".

But the critical piece is the "memorable" part: it is trivial to generate a random string of characters, but those passwords are hard for humans to remember. As xkcd noted, "through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess". The comic explains how a series of words is a better password than a single word with some characters replaced.

Obviously, you should not need to remember all passwords. Indeed, you may store some in password managers (which we'll look at in another article) or write them down in your wallet. In those cases, what you need is not a password, but something I would rather call a "token", or, as Debian Developer Daniel Kahn Gillmor (dkg) said in a private email, a "high entropy, compact, and transferable string". Certain APIs are specifically crafted to use tokens. OAuth, for example, generates "access tokens" that are random strings that give access to services. But in our discussion, we'll use the term "token" in a broader sense.

Notice how we removed the "memorable" property and added the "compact" one: we want to efficiently convert the most entropy into the shortest password possible, to work around possibly limiting password policies. For example, some bank cards only allow 5-digit security PINs and most web sites have an upper limit on password length. The "compact" property applies less to "passwords" than to tokens, because I assume that you will only use a password in select places: your password manager, SSH and OpenPGP keys, your computer login, and encryption keys. Everything else should be in a password manager. Those tools are generally under your control and should allow large enough passwords that the compact property is not particularly important.

Generating secure passwords

We'll look now at how to generate a strong, transferable, and memorable password. These are most likely the passwords you will deal with most of the time, as security tokens used in other settings should actually never show up on screen: they should be copy-pasted or automatically typed in forms. The password generators described here are all operated from the command line. Password managers often have embedded password generators, but usually don't provide an easy way to generate a password for the vault itself.

The previously mentioned xkcd cartoon is probably a common cultural reference in the security crowd and I often use it to explain how to choose a good passphrase. It turns out that someone actually implemented xkcd author Randall Munroe's suggestion into a program called xkcdpass:

    $ xkcdpass
    estop mixing edelweiss conduct rejoin flexitime

In verbose mode, it will show the actual entropy of the generated passphrase:

    $ xkcdpass -V
    The supplied word list is located at /usr/lib/python3/dist-packages/xkcdpass/static/default.txt.
    Your word list contains 38271 words, or 2^15.22 words.
    A 6 word password from this list will have roughly 91 (15.22 * 6) bits of entropy,
    assuming truly random word selection.
    estop mixing edelweiss conduct rejoin flexitime

Note that the above password has 91 bits of entropy, which is about what a fifteen-character password would have, if chosen at random from uppercase, lowercase, digits, and ten symbols:

    log2((26 + 26 + 10 + 10)^15) = approx. 92.548875

It's also interesting to note that this is closer to the entropy of a fifteen-character base64-encoded password: since each character carries six bits, you end up with 90 bits of entropy. xkcdpass is scriptable and easy to use. You can also customize the word list, separators, and so on with different command-line options. By default, xkcdpass uses the 2 of 12 word list from 12 dicts, which is not specifically geared toward password generation but has been curated for "common words" and words of different sizes.
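All of these figures come from the same formula, entropy = length × log2(choices); a quick check of the numbers quoted above:

```python
import math

# 6 words drawn from xkcdpass's default 38271-word list
words = 6 * math.log2(38271)
# 15 characters drawn from a 72-symbol alphabet
chars = 15 * math.log2(26 + 26 + 10 + 10)
# 15 base64 characters at 6 bits each
b64 = 15 * 6

print(f"6 words:    {words:.2f} bits")
print(f"15 chars:   {chars:.2f} bits")
print(f"15 base64:  {b64} bits")
```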

Another option is the diceware system. Diceware works by having a word list in which you look up words based on dice rolls. For example, rolling the five dice "1 4 2 1 4" would give the word "bilge". By rolling those dice five times, you generate a five-word password that is both memorable and random. Since paper and dice do not seem to be popular anymore, someone wrote that as an actual program, aptly called diceware. It works in a similar fashion, except that passwords are not space separated by default:

    $ diceware
    AbateStripDummy16thThanBrock
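The dice-roll lookup that diceware automates is simple to sketch by hand. The tiny word list below is a stand-in for a real 7776-entry (6^5) diceware list, with only the "bilge" entry taken from the real list; the lookup logic is the interesting part:

```python
import secrets

# A real diceware list maps every five-dice sequence ("11111" through
# "66666", 7776 entries) to a word; this five-entry dict is illustrative.
WORDLIST = {"14214": "bilge", "25163": "gavel", "36425": "lapse",
            "41112": "mute", "55345": "snare"}

def roll_dice(n=5):
    """Simulate n fair dice using a cryptographically strong RNG."""
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(n))

def diceware_word(wordlist):
    # With a complete list, every roll hits a word; with this demo
    # list we simply re-roll until we land on a known sequence.
    while True:
        roll = roll_dice()
        if roll in wordlist:
            return wordlist[roll]

print(" ".join(diceware_word(WORDLIST) for _ in range(5)))
```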

Diceware can obviously change the output to look similar to xkcdpass, but can also accept actual dice rolls for those who do not trust their computer's entropy source:

    $ diceware -d ' ' -r realdice -w en_orig
    Please roll 5 dice (or a single dice 5 times).
    What number shows dice number 1? 4
    What number shows dice number 2? 2
    What number shows dice number 3? 6
    [...]
    Aspire O's Ester Court Born Pk

The diceware software ships with a few word lists, and the default list has been deliberately created for generating passwords. It is derived from the standard diceware list with additions from the SecureDrop project. Diceware also ships with the EFF word list, which has words chosen for better recognition, but it is not enabled by default, even though diceware recommends using it when generating passwords with dice; that is because the EFF list was added later on. The project is currently considering making the EFF list the default.

One disadvantage of diceware is that it doesn't actually show how much entropy the generated password has — those interested need to compute it for themselves. The actual number depends on the word list: the default word list has 13 bits of entropy per word (since it is exactly 8192 words long), which means the default 6 word passwords have 78 bits of entropy:

    log2(8192) * 6 = 78

Both of these programs are rather new, having, for example, entered Debian only after the last stable release, so they may not be directly available for your distribution. The manual diceware method, of course, only needs a set of dice and a word list, so that is much more portable, and both the diceware and xkcdpass programs can be installed through pip. However, if this is all too complicated, you can take a look at Openwall's passwdqc, which is older and more widely available. It generates more memorable passphrases while at the same time allowing for better control over the level of entropy:

    $ pwqgen
    vest5Lyric8wake
    $ pwqgen random=78
    Theme9accord=milan8ninety9few

For some reason, passwdqc restricts password entropy to between 24 and 85 bits. The tool is also much less customizable than the other two: what you see here is pretty much what you get. Its 4096-word list is hardcoded into the C source code; it comes from a Usenet sci.crypt posting from 1997.

A key feature of xkcdpass and diceware is that you can craft your own word list, which can make dictionary-based attacks harder. Indeed, with such word-based password generators, dictionary attacks are the only viable way to crack the passwords: they are so long that character-based exhaustive searches are not workable and would take centuries to complete. Changing from the default dictionary therefore brings some advantage against attackers. This may be yet another "security through obscurity" measure, however: a naive approach might be to use a dictionary localized to your native language (for example, in my case, French), but that would deter only an attacker who doesn't do basic research about you, so the advantage is quickly lost against determined attackers.

One should also note that the entropy of the password doesn't depend on which word list is chosen, only on the list's length. Furthermore, a larger dictionary only expands the search space logarithmically; in other words, doubling the word-list length only adds a single bit of entropy per word in the password. It is actually much more effective to add a word to your password than to add words to the word list that generates it.
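To make that comparison concrete, here is a quick sketch in Python (`passphrase_entropy` is our own illustrative helper, using the 8192-word default list from above):

```python
import math

def passphrase_entropy(wordlist_size, num_words):
    # Entropy in bits of a uniformly generated passphrase
    return math.log2(wordlist_size) * num_words

base = passphrase_entropy(8192, 6)           # the default: 78 bits
bigger_list = passphrase_entropy(16384, 6)   # doubled list: +1 bit per word
longer_pass = passphrase_entropy(8192, 7)    # one extra word: +13 bits

print(bigger_list - base, longer_pass - base)  # prints 6.0 13.0
```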

Generating security tokens

As mentioned before, most password managers feature a way to generate strong security tokens, with different policies (symbols or not, length, etc.). In general, you should use your password manager's password-generation functionality to generate tokens for the sites you visit. But how is that functionality implemented, and what can you do if your password manager (for example, Firefox's master password feature) does not actually generate passwords for you?
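As an illustration, a minimal version of such a generator can be written in a few lines of Python with the standard secrets module (a sketch of the general technique only; generate_token and its defaults are our own invention, not any particular manager's implementation):

```python
import secrets
import string

def generate_token(length=24, symbols=False):
    """Pick characters uniformly from a chosen set, using the
    operating system's entropy source via the secrets module."""
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_token())  # e.g. a 24-character alphanumeric token
```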

pass, the standard UNIX password manager, delegates this task to the widely known pwgen program. It turns out that pwgen has a pretty bad track record of security issues, especially in the default "phoneme" mode, which generates non-uniformly distributed passwords. While pass uses the more "secure" -s mode, I figured it was worth removing that option to discourage the use of pwgen in the default mode. I made a trivial patch to pass so that it generates passwords correctly on its own. The gory details are in this email. It turns out that there are lots of ways to skin this particular cat. I suggested the following pipeline to generate the password:

    head -c $entropy /dev/random | base64 | tr -d '\n='

The above command reads a certain number of bytes from the kernel (head -c $entropy /dev/random), encodes them using the base64 algorithm, and strips out the trailing equals sign and newlines (for long passwords). This is what Gillmor described as a "high-entropy compact printable/transferable string". The priority, in this case, is to have a token that is as compact as possible for the given entropy, while using a character set that should cause as little trouble as possible on sites that restrict the characters you can use. Gillmor is a co-maintainer of the Assword password manager, which chose base64 because it is widely available and understood, and only takes up 33% more space than the original 8-bit binary encoding. After a lengthy discussion, the pass maintainer, Jason A. Donenfeld, chose the following pipeline:

    read -r -n $length pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)

The above is similar, except that it uses tr to read characters directly from the kernel, selecting a certain set of characters ($characters) that is defined earlier as consisting of [:alnum:] for letters and digits or [:graph:] for symbols, depending on the user's configuration. Then the read command extracts the chosen number of characters from the output and stores the result in the pass variable. A participant on the mailing list, Brian Candler, argued that this wastes entropy, as the use of tr discards bits from /dev/urandom with little gain in entropy when compared to base64. But in the end, the maintainer argued that "reading from /dev/urandom has no [effect] on /proc/sys/kernel/random/entropy_avail on Linux" and dismissed the objection.
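The difference in entropy per output character between the two approaches is small, as a quick calculation illustrates (this is just the theoretical bits-per-character comparison, not a measurement of either pipeline):

```python
import math

# Each base64 character encodes exactly 6 bits of the input bytes.
base64_bits = math.log2(64)

# A character drawn uniformly from [:alnum:] (62 characters) carries
# slightly less entropy per character; the tr approach additionally
# discards the input bytes that fall outside the chosen set.
alnum_bits = math.log2(62)

print(base64_bits, round(alnum_bits, 2))  # prints 6.0 5.95
```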

Another password manager, KeePass, uses its own routines to generate tokens, but the procedure is the same: read from the kernel's entropy source (plus user-generated sources, in the case of KeePass) and transform that data into a transferable string.

Conclusion

While there are many aspects to password management, we have focused on different techniques for users and developers to generate secure, but also usable, passwords. Generating a strong yet memorable password is not a trivial problem, as the security vulnerabilities in pwgen showed. Furthermore, left to their own devices, users will generate passwords that can easily be guessed by a skilled attacker, especially one who can profile the user. It is therefore essential that we provide easy tools for users to generate strong passwords and encourage them to store secure tokens in password managers.

Comments (45 posted)

Brief items

Security quotes of the week

According to Willy Allison, a Las Vegas–based casino security consultant who has been tracking the Russian scam for years, the operatives use their phones to record about two dozen spins on a game they aim to cheat. They upload that footage to a technical staff in St. Petersburg, who analyze the video and calculate the machine’s pattern based on what they know about the model’s pseudorandom number generator. Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative’s phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.

“The normal reaction time for a human is about a quarter of a second, which is why they do that,” says Allison, who is also the founder of the annual World Game Protection Conference. The timed spins are not always successful, but they result in far more payouts than a machine normally awards: Individual scammers typically win more than $10,000 per day. (Allison notes that those operatives try to keep their winnings on each machine to less than $1,000, to avoid arousing suspicion.) A four-person team working multiple casinos can earn upwards of $250,000 in a single week.

Brendan I. Koerner in Wired

The Linux kernel does include mechanisms intended to prevent the leakage of such pointers to user-space. One such mitigation is enforced by ensuring that every time a pointer’s value is written by the kernel, it is printed using a special format specifier: “%pK”. Then, depending on the value of kptr_restrict, the kernel may anonymise the printed pointer. In all Android devices that I’ve encountered, kptr_restrict is configured correctly, indeed ensuring the “%pK” pointers are anonymised. [...]

Unfortunately, the anonymisation format specifier is case sensitive… Using a lowercase “k”, like the code above, causes the code above to output the pointer without applying the anonymisation offered by “%pK” (perhaps this serves as a good example of how fragile KASLR [kernel address-space layout randomization] is). Regardless, this allows us to simply read the contents of pm_qos, and subtract the pointer’s value from its known offset from the kernel’s base address, thus giving us the value of the KASLR slide.

Gal Beniamini on Google's Project Zero blog in a lengthy look at bypassing Samsung's realtime kernel protection (RKP) for Android

The dump reveals that Cellebrite seemingly repackages untested and unaudited jailbreaking tools as lawful interception products and sells them to repressive regimes. It also reveals that suppressing disclosure of security vulnerabilities in commonly used tools does not prevent those vulnerabilities from being independently discovered and weaponized -- it just means that users, white-hat hackers and customers are kept in the dark about lurking vulnerabilities, even as they are exploited in the wild, which only end up coming to light when they are revealed by extraordinary incidents like this week's dump.
Cory Doctorow

Comments (5 posted)

Dz: Seccomp sandboxing not enabled for acme-client

In the acme-client-portable repository at GitHub, developer Kristaps Dz has a rather stinging indictment of trying to use seccomp sandboxing for the portable version of acme-client, which is a client program for getting Let's Encrypt certificates. He has disabled seccomp filtering in the default build for a number of reasons. "So I might use mmap, but the system call is mmap2? Great. This brings us to the second and larger problem. The C library. There are several popular ones on Linux: glibc, musl, uClibc, etc. Each of these is free to implement any standard function (like mmap, above) in any way. So while my code might say read, the C library might also invoke fstat. Great. In general, section 2 calls (system calls) map evenly between system call name and function name. (Except as noted above... and maybe elsewhere...) However, section 3 is all over the place. The strongest differences were between big functions like getaddrinfo(2). Then there's local modifications. And not just between special embedded systems. But Debian and Arch, both using glibc and both on x86_64, have different kernels installed with different features. Great. Less great for me and seccomp." (Thanks to Paul Wise.)

Comments (71 posted)

The grsecurity "RAP" patch set

The grsecurity developers have announced the first release of the "Reuse Attack Protector" (RAP) patch set, aimed at preventing return-oriented programming and other attacks. "RAP is our patent-pending and best-in-breed defense mechanism against code reuse attacks. It is the result of years of research and development into Control Flow Integrity (CFI) technologies by PaX. The version of RAP present in the test patch released to the public today under the GPLv2 is now feature-complete."

Comments (17 posted)

New vulnerabilities

bzrtp: man-in-the-middle vulnerability

Package(s): bzrtp  CVE #(s): CVE-2016-6271
Created: February 2, 2017  Updated: February 8, 2017
Description: From the openSUSE advisory:

CVE-2016-6271: missing HVI check on DHPart2 packet reception may have allowed man-in-the-middle attackers to conduct spoofing attacks

Alerts:
openSUSE openSUSE-SU-2017:0363-1 bzrtp 2017-02-02

Comments (none posted)

calibre: information leak

Package(s): calibre  CVE #(s): CVE-2016-10187
Created: February 8, 2017  Updated: February 13, 2017
Description: From the Red Hat bugzilla:

A vulnerability was found in Calibre. It was found that a javascript present in the book can access files on the computer using XMLHttpRequest.

Alerts:
Mageia MGASA-2017-0047 calibre 2017-02-12
Fedora FEDORA-2017-efed73a87c calibre 2017-02-07
Fedora FEDORA-2017-07d308fd81 calibre 2017-02-08

Comments (none posted)

epiphany: multiple vulnerabilities

Package(s): epiphany  CVE #(s):
Created: February 6, 2017  Updated: February 13, 2017
Description: From the Fedora advisory:

Update to 3.22.6:

* Fix minor memory leak [#682723]

* Fix serious password extraction sweep attack on password manager [#752738]

* Fix adblocker blocking too much stuff, breaking Twitter [#777714]

Alerts:
Fedora FEDORA-2017-6938ef7591 epiphany 2017-02-12
Fedora FEDORA-2017-6792542f47 epiphany 2017-02-05

Comments (none posted)

gnome-boxes: password disclosure

Package(s): gnome-boxes  CVE #(s):
Created: February 8, 2017  Updated: February 10, 2017
Description: From the Fedora advisory:

gnome-boxes 3.22.4 release, fixing a possible security issue with storing the express installation password in clear text. - Store the user password in the keyring during an express installation.

Alerts:
Fedora FEDORA-2017-42df4eeb59 gnome-boxes 2017-02-09
Fedora FEDORA-2017-fc0140d4c5 gnome-boxes 2017-02-08

Comments (none posted)

GraphicsMagick: multiple vulnerabilities

Package(s): GraphicsMagick  CVE #(s): CVE-2016-10048 CVE-2016-10050 CVE-2016-10051 CVE-2016-10052 CVE-2016-10068 CVE-2016-10070
Created: February 6, 2017  Updated: February 8, 2017
Description: From the openSUSE advisory:

This update for GraphicsMagick fixes several issues.

  • CVE-2016-10048: Arbitrary module could have been load because relative path were not escaped (bsc#1017310)
  • CVE-2016-10050: Corrupt RLE files could have overflowed a heap buffer due to a missing offset check (bsc#1017312)
  • CVE-2016-10051: Fixed use after free when reading PWP files (bsc#1017313)
  • CVE-2016-10052: Added bound check to exif parsing of JPEG files (bsc#1017314)
  • CVE-2016-10068: Prevent NULL pointer access when using the MSL interpreter (bsc#1017324)
  • CVE-2016-10070: Prevent allocating the wrong amount of memory when reading mat files (bsc#1017326)

Alerts:
openSUSE openSUSE-SU-2017:0391-1 GraphicsMagick 2017-02-06
openSUSE openSUSE-SU-2017:0399-1 GraphicsMagick 2017-02-06

Comments (none posted)

GraphicsMagick: multiple vulnerabilities

Package(s): GraphicsMagick  CVE #(s): CVE-2016-10059 CVE-2016-10064 CVE-2016-10065 CVE-2016-10069
Created: February 6, 2017  Updated: February 8, 2017
Description: From the openSUSE advisory:

This update for GraphicsMagick fixes several issues.

  • CVE-2016-10059: Unchecked calculation when reading TIFF files could have lead to a buffer overflow (bsc#1017318)
  • CVE-2016-10064: Improved checks for buffer overflow when reading TIFF files (bsc#1017321)
  • CVE-2016-10065: Unchecked calculations when reading VIFF files could have lead to out of bound reads (bsc#1017322)
  • CVE-2016-10069: Add check for invalid mat file (bsc#1017325)

Alerts:
openSUSE openSUSE-SU-2017:0391-1 GraphicsMagick 2017-02-06

Comments (none posted)

gst-plugins-bad: two vulnerabilities

Package(s): gst-plugins-bad  CVE #(s): CVE-2017-5843 CVE-2017-5848
Created: February 6, 2017  Updated: February 21, 2017
Description: From the Arch Linux advisory:

- CVE-2017-5843 (arbitrary code execution): A double-free issue has been found in gstreamer before 1.10.3, in gst_mxf_demux_update_essence_tracks.

- CVE-2017-5848 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in gst_ps_demux_parse_psm.

Alerts:
Fedora FEDORA-2017-216f4b9f9d mingw-gstreamer1-plugins-bad-free 2017-02-20
Debian-LTS DLA-830-1 gst-plugins-bad0.10 2017-02-18
Arch Linux ASA-201702-5 gst-plugins-bad 2017-02-05

Comments (none posted)

gst-plugins-base-libs: multiple vulnerabilities

Package(s): gst-plugins-base-libs  CVE #(s): CVE-2017-5837 CVE-2017-5839 CVE-2017-5842 CVE-2017-5844
Created: February 6, 2017  Updated: February 21, 2017
Description: From the Arch Linux advisory:

- CVE-2017-5837 (denial of service): A floating point exception issue has been found in gstreamer before 1.10.3, in gst_riff_create_audio_caps.

- CVE-2017-5839 (denial of service): An endless recursion issue leading to stack overflow has been found in gstreamer before 1.10.3, in gst_riff_create_audio_caps.

- CVE-2017-5842 (arbitrary code execution): An off-by-one write has been found in gstreamer before 1.10.3, in html_context_handle_element.

- CVE-2017-5844 (denial of service): A floating point exception issue has been found in gstreamer before 1.10.3, in gst_riff_create_audio_caps.

Alerts:
Fedora FEDORA-2017-a56d78acb8 mingw-gstreamer1-plugins-base 2017-02-20
Debian-LTS DLA-827-1 gst-plugins-base0.10 2017-02-18
Arch Linux ASA-201702-4 gst-plugins-base-libs 2017-02-05

Comments (none posted)

gst-plugins-good: multiple vulnerabilities

Package(s): gst-plugins-good  CVE #(s): CVE-2016-10198 CVE-2016-10199 CVE-2017-5840 CVE-2017-5841 CVE-2017-5845
Created: February 6, 2017  Updated: February 21, 2017
Description: From the Arch Linux advisory:

- CVE-2016-10198 (denial of service): An invalid memory read flaw has been found in gstreamer before 1.10.3, in gst_aac_parse_sink_setcaps.

- CVE-2016-10199 (denial of service): An out of bounds read has been found in gstreamer before 1.10.3, in qtdemux_tag_add_str_full.

- CVE-2017-5840 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in qtdemux_parse_samples.

- CVE-2017-5841 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in gst_avi_demux_parse_ncdt.

- CVE-2017-5845 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in gst_avi_demux_parse_ncdt.

Alerts:
Fedora FEDORA-2017-1fc4026d15 mingw-gstreamer1-plugins-good 2017-02-20
Debian-LTS DLA-828-1 gst-plugins-good0.10 2017-02-18
Arch Linux ASA-201702-3 gst-plugins-good 2017-02-05

Comments (none posted)

gst-plugins-ugly: two vulnerabilities

Package(s): gst-plugins-ugly  CVE #(s): CVE-2017-5846 CVE-2017-5847
Created: February 6, 2017  Updated: February 20, 2017
Description: From the Arch Linux advisory:

- CVE-2017-5846 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in gst_asf_demux_process_ext_stream_props.

- CVE-2017-5847 (denial of service): An out-of-bounds read has been found in gstreamer before 1.10.3, in gst_asf_demux_process_ext_content_desc.

Alerts:
Debian-LTS DLA-829-1 gst-plugins-ugly0.10 2017-02-18
Arch Linux ASA-201702-6 gst-plugins-ugly 2017-02-05

Comments (none posted)

gstreamer: denial of service

Package(s): gstreamer  CVE #(s): CVE-2017-5838
Created: February 6, 2017  Updated: February 21, 2017
Description: From the Arch Linux advisory:

An out of bounds read has been found in gstreamer before 1.10.3, in gst_date_time_new_from_iso8601_string.

Alerts:
Fedora FEDORA-2017-c0564718ea mingw-gstreamer1 2017-02-20
Arch Linux ASA-201702-7 gstreamer 2017-02-05

Comments (none posted)

iio-sensor-proxy: authentication bypass

Package(s): iio-sensor-proxy  CVE #(s):
Created: February 6, 2017  Updated: February 10, 2017
Description: The 2.1 iio-sensor-proxy release contains this commit fixing a problem whereby any process in the system could make calls to processes intended to be accessible only by root.
Alerts:
Fedora FEDORA-2017-b3130f212a iio-sensor-proxy 2017-02-10
Fedora FEDORA-2017-2f4e97fdfb iio-sensor-proxy 2017-02-03

Comments (none posted)

irssi: memory leak

Package(s): irssi  CVE #(s):
Created: February 8, 2017  Updated: February 13, 2017
Description: From the SUSE bug report:

Joseph Bisch has detected a remote memory leak in some cases where a hostile server would send certain incomplete SASL replies. According to his calculations, the server would need to send 13 times the amount of memory it wants to leak. The issue is a missing free of the base64 data.

Alerts:
openSUSE openSUSE-SU-2017:0447-1 irssi 2017-02-11
openSUSE openSUSE-SU-2017:0413-1 irssi 2017-02-07

Comments (none posted)

iucode-tool: code execution

Package(s): iucode-tool  CVE #(s): CVE-2017-0357
Created: February 2, 2017  Updated: February 8, 2017
Description: From the Ubuntu advisory:

It was discovered that iucode-tool incorrectly handled certain microcodes when using the -tr loader. If a user were tricked into processing a specially crafted microcode, a remote attacker could use this issue to cause iucode-tool to crash, resulting in a denial of service, or possibly execute arbitrary code.

Alerts:
Ubuntu USN-3186-1 iucode-tool 2017-02-01

Comments (none posted)

jasper: code execution

Package(s): jasper  CVE #(s): CVE-2016-9583
Created: February 2, 2017  Updated: February 8, 2017
Description: From the jasper advisory:

The vulnerability is introduced from version 2.0.0 and affects all later versions. The vulnerability is a heap buffer overflow vulnerability (out-of-bound read) and can be changed to a Null-pointer-dereference vulnerability by updating one byte of the PoC file. The related code was used to check for potential overflow and becomes useless due to the vulnerability, i.e. it is possible to trigger other overflow by bypassing the check. The vulnerability is probably caused by a programming mistake. It can cause Denial-of-Service and maybe cause other impact if other overflow is triggered.

Alerts:
Fedora FEDORA-2017-d90fac5c8f jasper 2017-02-03
Fedora FEDORA-2017-78a77d2450 jasper 2017-02-01

Comments (none posted)

kernel: two vulnerabilities

Package(s): kernel  CVE #(s): CVE-2016-10147 CVE-2016-10150
Created: February 3, 2017  Updated: February 8, 2017
Description: From the Ubuntu advisory:

Mikulas Patocka discovered that the asynchronous multibuffer cryptographic daemon (mcryptd) in the Linux kernel did not properly handle being invoked with incompatible algorithms. A local attacker could use this to cause a denial of service (system crash). (CVE-2016-10147)

It was discovered that a use-after-free existed in the KVM susbsystem of the Linux kernel when creating devices. A local attacker could use this to cause a denial of service (system crash). (CVE-2016-10150)

Alerts:
openSUSE openSUSE-SU-2017:0458-1 kernel 2017-02-13
Ubuntu USN-3190-2 linux-raspi2 2017-02-09
Ubuntu USN-3189-2 linux-lts-xenial 2017-02-03
Ubuntu USN-3189-1 linux, linux-raspi2, linux-snapdragon 2017-02-03
Ubuntu USN-3190-1 kernel 2017-02-03

Comments (none posted)

kernel: denial of service

Package(s): kernel  CVE #(s): CVE-2017-2596
Created: February 7, 2017  Updated: February 8, 2017
Description: From the Red Hat bugzilla:

Linux kernel built with the KVM virtualisation support(CONFIG_KVM), with nested virtualisation(nVMX) feature enabled(nested=1), is vulnerable to host memory leakage issue. It could occur while emulating VMXON instruction in 'handle_vmon'.

A L1 guest user could use this flaw to leak host memory potentially resulting in DoS.

Alerts:
Fedora FEDORA-2017-392b319bb5 kernel 2017-02-07
Fedora FEDORA-2017-472052ebe5 kernel 2017-02-07

Comments (none posted)

kernel: information leak

Package(s): kernel  CVE #(s): CVE-2017-2584
Created: February 7, 2017  Updated: February 8, 2017
Description: From the CVE entry:

arch/x86/kvm/emulate.c in the Linux kernel through 4.9.3 allows local users to obtain sensitive information from kernel memory or cause a denial of service (use-after-free) via a crafted application that leverages instruction emulation for fxrstor, fxsave, sgdt, and sidt.

Alerts:
Ubuntu USN-3208-2 linux-lts-xenial 2017-02-22
Ubuntu USN-3208-1 linux, linux-snapdragon 2017-02-22
SUSE SUSE-SU-2017:0471-1 kernel 2017-02-15
SUSE SUSE-SU-2017:0464-1 kernel 2017-02-15
openSUSE openSUSE-SU-2017:0456-1 kernel 2017-02-13
SUSE SUSE-SU-2017:0407-1 kernel 2017-02-06

Comments (none posted)

moodle: multiple vulnerabilities

Package(s): moodle  CVE #(s): CVE-2016-8642 CVE-2016-8643 CVE-2016-8644 CVE-2017-2576 CVE-2017-2578
Created: February 2, 2017  Updated: February 8, 2017
Description: From the Red Hat bugzilla entry:

CVE-2016-8642: Question engine allows access to files that should not be available

CVE-2016-8643: Non-admin site managers may accidentally edit admins via web services

CVE-2016-8644: Capability to view course notes is checked in the wrong context

From the Red Hat bugzilla entry:

Incorrect sanitation of attributes in forums - CVE-2017-2576

XSS in assignment submission page - CVE-2017-2578

Alerts:
Fedora FEDORA-2017-ae7a707032 moodle 2017-02-07
Fedora FEDORA-2017-6681f94e10 moodle 2017-02-01

Comments (none posted)

mupdf: three vulnerabilities

Package(s): mupdf  CVE #(s): CVE-2016-10132 CVE-2016-10133 CVE-2016-10141
Created: February 3, 2017  Updated: February 8, 2017
Description: From the openSUSE advisory:

CVE-2016-10132: Null pointer dereference in regexp because of a missing check after allocating memory allowing for DoS

CVE-2016-10133: Heap buffer overflow write in js_stackoverflow allowing for DoS or possible code execution

CVE-2016-10141: An integer overflow vulnerability triggered by a regular expression with nested repetition. A successful exploitation of this issue can lead to code execution or a denial of service (buffer overflow) condition

Alerts:
openSUSE openSUSE-SU-2017:0369-1 mupdf 2017-02-03
openSUSE openSUSE-SU-2017:0373-1 mupdf 2017-02-03

Comments (none posted)

ntfs-3g: privilege escalation

Package(s): ntfs-3g  CVE #(s): CVE-2017-0358
Created: February 2, 2017  Updated: February 20, 2017
Description: From the Debian advisory:

Jann Horn of Google Project Zero discovered that NTFS-3G, a read-write NTFS driver for FUSE, does not scrub the environment before executing modprobe with elevated privileges. A local user can take advantage of this flaw for local root privilege escalation.

Alerts:
Gentoo 201702-10 ntfs3g 2017-02-19
Debian-LTS DLA-815-1 ntfs-3g 2017-02-02
Ubuntu USN-3182-1 ntfs-3g 2017-02-01
Debian DSA-3780-1 ntfs-3g 2017-02-01

Comments (none posted)

php: multiple vulnerabilities

Package(s): php  CVE #(s): CVE-2016-10158 CVE-2016-10159 CVE-2016-10160 CVE-2016-10161
Created: February 6, 2017  Updated: February 9, 2017
Description: From the CVE entries:

The exif_convert_any_to_int function in ext/exif/exif.c in PHP before 5.6.30, 7.0.x before 7.0.15, and 7.1.x before 7.1.1 allows remote attackers to cause a denial of service (application crash) via crafted EXIF data that triggers an attempt to divide the minimum representable negative integer by -1. (CVE-2016-10158)

Integer overflow in the phar_parse_pharfile function in ext/phar/phar.c in PHP before 5.6.30 and 7.0.x before 7.0.15 allows remote attackers to cause a denial of service (memory consumption or application crash) via a truncated manifest entry in a PHAR archive. (CVE-2016-10159)

Off-by-one error in the phar_parse_pharfile function in ext/phar/phar.c in PHP before 5.6.30 and 7.0.x before 7.0.15 allows remote attackers to cause a denial of service (memory corruption) or possibly execute arbitrary code via a crafted PHAR archive with an alias mismatch. (CVE-2016-10160)

The object_common1 function in ext/standard/var_unserializer.c in PHP before 5.6.30, 7.0.x before 7.0.15, and 7.1.x before 7.1.1 allows remote attackers to cause a denial of service (buffer over-read and application crash) via crafted serialized data that is mishandled in a finish_nested_data call. (CVE-2016-10161)

Alerts:
SUSE SUSE-SU-2017:0534-1 php7 2017-02-22
Gentoo 201702-29 php 2017-02-21
Ubuntu USN-3196-1 php5 2017-02-14
Slackware SSA:2017-041-03 php 2017-02-10
Debian DSA-3783-1 php5 2017-02-09
Debian-LTS DLA-818-1 php5 2017-02-07
Mageia MGASA-2017-0040 php 2017-02-04

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s): phpMyAdmin  CVE #(s): CVE-2015-8980
Created: February 3, 2017  Updated: February 8, 2017
Description: From the openSUSE advisory:

- CVE-2015-8980: php-gettext code execution (PMASA-2017-2)
- DOS vulnerability in table editing (PMASA-2017-3)
- CSS injection in themes (PMASA-2017-4)
- SSRF in replication (PMASA-2017-6)
- DOS in replication status (PMASA-2017-7)

Alerts:
Fedora FEDORA-2017-294c23bb1d phpMyAdmin 2017-02-07
Fedora FEDORA-2017-360e912fdb phpMyAdmin 2017-02-07
Mageia MGASA-2017-0038 phpmyadmin 2017-02-03
openSUSE openSUSE-SU-2017:0372-1 phpMyAdmin 2017-02-03

Comments (none posted)

rabbitmq-server: denial of service

Package(s): rabbitmq-server  CVE #(s): CVE-2015-8786
Created: February 2, 2017  Updated: February 8, 2017
Description: From the Red Hat advisory:

A resource-consumption flaw was found in RabbitMQ Server, where the lengths_age or lengths_incr parameters were not validated in the management plugin. Remote, authenticated users with certain privileges could exploit this flaw to cause a denial of service by passing values which were too large. (CVE-2015-8786)

Alerts:
Red Hat RHSA-2017:0226-01 rabbitmq-server 2017-02-01

Comments (none posted)

rtmpdump: multiple vulnerabilities

Package(s): rtmpdump  CVE #(s):
Created: February 6, 2017  Updated: February 8, 2017
Description: From the Gentoo advisory:

The following is a list of vulnerabilities fixed:

  • Additional decode input size checks
  • Ignore zero-length packets
  • Potential integer overflow in RTMPPacket_Alloc().
  • Obsolete RTMPPacket_Free() call left over from original C++ to C rewrite
  • AMFProp_GetObject must make sure the prop is actually an object

A remote attacker could entice a user to open a specially crafted media flash file using RTMPDump. This could possibly result in the execution of arbitrary code with the privileges of the process or a Denial of Service condition.

Alerts:
Gentoo 201702-02 rtmpdump 2017-02-06

Comments (none posted)

spice: two vulnerabilities

Package(s): spice  CVE #(s): CVE-2016-9577 CVE-2016-9578
Created: February 6, 2017  Updated: February 21, 2017
Description: From the Red Hat advisory:

* A vulnerability was discovered in spice in the server's protocol handling. An authenticated attacker could send crafted messages to the spice server causing a heap overflow leading to a crash or possible code execution. (CVE-2016-9577)

* A vulnerability was discovered in spice in the server's protocol handling. An attacker able to connect to the spice server could send crafted messages which would cause the process to crash. (CVE-2016-9578)

Alerts:
Ubuntu USN-3202-1 spice 2017-02-20
Debian-LTS DLA-825-1 spice 2017-02-17
Debian DSA-3790-1 spice 2017-02-16
Fedora FEDORA-2017-05793780f0 spice 2017-02-09
Fedora FEDORA-2017-5972ebe591 spice 2017-02-09
openSUSE openSUSE-SU-2017:0421-1 spice 2017-02-08
openSUSE openSUSE-SU-2017:0419-1 spice 2017-02-08
SUSE SUSE-SU-2017:0396-1 spice 2017-02-06
SUSE SUSE-SU-2017:0393-1 spice 2017-02-06
SUSE SUSE-SU-2017:0400-1 spice 2017-02-06
SUSE SUSE-SU-2017:0392-1 spice 2017-02-06
Scientific Linux SLSA-2017:0253-1 spice-server 2017-02-06
Scientific Linux SLSA-2017:0254-1 spice 2017-02-06
Oracle ELSA-2017-0253 spice-server 2017-02-05
Oracle ELSA-2017-0254 spice 2017-02-05
CentOS CESA-2017:0253 spice-server 2017-02-06
CentOS CESA-2017:0254 spice 2017-02-06
Red Hat RHSA-2017:0253-01 spice-server 2017-02-05
Red Hat RHSA-2017:0254-01 spice 2017-02-05

Comments (none posted)

svgsalamander: server-side request forgery

Package(s): svgsalamander  CVE #(s): CVE-2017-5617
Created: February 3, 2017  Updated: February 8, 2017
Description: From the Debian-LTS advisory:

Luc Lynx discovered a Server-Side Request Forgery in svgSalamander allowing access to the trusted network with specially crafted SVG files.

Alerts:
Debian DSA-3781-1 svgsalamander 2017-02-05
Debian-LTS DLA-816-1 svgsalamander 2017-02-03

Comments (none posted)

tiff: regression in previous update

Package(s): tiff  CVE #(s):
Created: February 7, 2017  Updated: February 8, 2017
Description: From the Debian LTS advisory:

Version 4.0.2-6+deb7u7 introduced changes that resulted in libtiff being unable to write out tiff files when the compression scheme in use relies on codec-specific TIFF tags embedded in the image.

This problem manifested itself with errors like those:

    $ tiffcp -r 16 -c jpeg sample.tif out.tif
    _TIFFVGetField: out.tif: Invalid tag "Predictor" (not supported by codec).
    _TIFFVGetField: out.tif: Invalid tag "BadFaxLines" (not supported by codec).
    tiffcp: tif_dirwrite.c:687: TIFFWriteDirectorySec: Assertion `0' failed.

Alerts:
Debian-LTS DLA-693-2 tiff 2017-02-07

Comments (none posted)

wavpack: multiple vulnerabilities

Package(s): wavpack  CVE #(s): CVE-2016-10172 CVE-2016-10171 CVE-2016-10170 CVE-2016-10169
Created: February 3, 2017  Updated: February 21, 2017
Description: From the Fedora advisory:

CVE-2016-10172 wavpack: Heap out of bounds read in read_new_config_info / open_utils.c https://bugzilla.redhat.com/show_bug.cgi?id=1417853

CVE-2016-10171 wavpack: Heap out of bounds read in unreorder_channels / wvunpack.c https://bugzilla.redhat.com/show_bug.cgi?id=1417852

CVE-2016-10170 wavpack: Heap out of bounds read in WriteCaffHeader / caff.c https://bugzilla.redhat.com/show_bug.cgi?id=1417851

CVE-2016-10169 wavpack: Global buffer overread in read_code / read_words.c https://bugzilla.redhat.com/show_bug.cgi?id=1417850

Alerts:
Fedora FEDORA-2017-3893b6e15b mingw-wavpack 2017-02-20
Fedora FEDORA-2017-16f06ee9d8 mingw-wavpack 2017-02-20
Fedora FEDORA-2017-9d7f592a03 wavpack 2017-02-04
Fedora FEDORA-2017-ab4f51572f wavpack 2017-02-02

Comments (none posted)

wireshark: two denial of service flaws

Package(s):wireshark CVE #(s):CVE-2017-5596 CVE-2017-5597
Created:February 2, 2017 Updated:February 10, 2017
Description: From a Wireshark advisory:

The ASTERIX dissector could go into an infinite loop. Discovered by Antti Levomäki and Christian Jalio, Forcepoint. Impact: It may be possible to make Wireshark consume excessive CPU resources by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file.

From a Wireshark advisory:

The DHCPv6 dissector could go into a large loop. Discovered by Antti Levomäki and Christian Jalio, Forcepoint. Impact: It may be possible to make Wireshark consume excessive CPU resources by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file.

Alerts:
Fedora FEDORA-2017-541aea2890 wireshark 2017-02-09
openSUSE openSUSE-SU-2017:0364-1 Wireshark 2017-02-02
Mageia MGASA-2017-0034 wireshark 2017-02-02

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.10-rc7, released on February 5. Linus said: "Hey, look at that - it's all been very quiet, and unless anything bad happens, we're all back to the regular schedule with this being the last rc."

Stable updates: 4.9.7 and 4.4.46 were released on February 2, followed by 4.9.8 and 4.4.47 on February 4. There was also a surprise 3.18.48 update, despite that kernel's end-of-life status, on February 8.

The 4.9.9 and 4.4.48 updates are in the review process as of this writing; they can be expected on or after February 9.

Comments (none posted)

Quote of the week

Oh, if you are _stuck_ on 3.18 (/me eyes his new phone), well, I might have a plan for you, that first involves you yelling very loudly at your hardware vendor and refusing to buy from them again unless they cut this crap out. After you properly vent to them, drop me an email and let's see what we can come up with, you aren't in this sinking ship alone, and it's obvious your vendor isn't going to help out...
Greg Kroah-Hartman

Comments (7 posted)

Kernel development news

Some 4.10 development statistics

By Jonathan Corbet
February 8, 2017
If Linus Torvalds is to be believed, the final 4.10 kernel release will happen on February 12. This development cycle has been described as "quiet", but that term really only applies if one looks at it in comparison with the record-setting 4.9 cycle. As will be seen below, there was still quite a bit of activity in this "quiet" cycle; the kernel community is never truly quiet anymore, it would seem.

As of this writing, 12,811 non-merge changesets have been pulled into the mainline repository for the 4.10 development cycle. Those changes were contributed by 1,647 developers, of whom 251 made their first-ever contribution in 4.10. These numbers put this development cycle firmly in line with its predecessors:

Release  Changesets  Developers
  4.0      10,346      1,458
  4.1      11,916      1,539
  4.2      13,694      1,591
  4.3      11,894      1,625
  4.4      13,071      1,575
  4.5      12,080      1,538
  4.6      13,517      1,678
  4.7      12,283      1,582
  4.8      13,382      1,597
  4.9      16,214      1,729
  4.10     12,811      1,647

The trend toward increasing numbers of changesets clearly continues, with numbers that are now routinely higher than were seen even in the 4.0 kernel, less than two years ago.

The most active developers this time around were:

Most active 4.10 developers
By changesets
Mauro Carvalho Chehab      231  1.8%
Chris Wilson               193  1.5%
Arnd Bergmann              134  1.0%
Christoph Hellwig          115  0.9%
Ben Skeggs                  95  0.7%
Jiri Olsa                   92  0.7%
Geert Uytterhoeven          86  0.7%
Wei Yongjun                 85  0.7%
Thomas Gleixner             83  0.6%
Ville Syrjälä               82  0.6%
Felipe Balbi                79  0.6%
Javier Martinez Canillas    79  0.6%
Masahiro Yamada             77  0.6%
Trond Myklebust             76  0.6%
Tvrtko Ursulin              76  0.6%
Dan Carpenter               73  0.6%
Sergio Paracuellos          73  0.6%
Walt Feasel                 72  0.6%
Neil Armstrong              70  0.5%
Eric Dumazet                67  0.5%
By changed lines
Andi Kleen              83,560  9.7%
Tom St Denis            55,590  6.4%
Mauro Carvalho Chehab   44,120  5.1%
Edward Cree             19,164  2.2%
Zhi Wang                16,077  1.9%
Christoph Hellwig       13,872  1.6%
Takashi Iwai            12,707  1.5%
Neil Armstrong          11,809  1.4%
Chris Wilson             9,042  1.0%
Thomas Lendacky          8,693  1.0%
Bard Liao                8,189  0.9%
Tony Lindgren            8,183  0.9%
Jani Nikula              8,059  0.9%
James Smart              7,655  0.9%
Manish Rangankar         7,470  0.9%
Ard Biesheuvel           6,996  0.8%
Raghu Vatsavayi          6,753  0.8%
Ben Skeggs               6,482  0.7%
Sukadev Bhattiprolu      6,415  0.7%
Rob Clark                6,017  0.7%

Mauro Carvalho Chehab is the media subsystem maintainer, and much of his work this time around was focused there. He also, however, did a lot of work in the ongoing process of converting the kernel's documentation to Sphinx and organizing it. Chris Wilson works on the Intel i915 driver, Arnd Bergmann made fixes all over the kernel tree, Christoph Hellwig contributed a lot of changes in the block and filesystem areas, and Ben Skeggs works on the Nouveau graphics driver.

In the "changed lines" column, Andi Kleen ended up at the top of the list with a bunch of work in the perf events subsystem. Tom St. Denis added a bunch of code to the amdgpu driver, Edward Cree enhanced the sfc network driver, and Zhi Wang, once again, works in the i915 driver.

These lists are often dominated by developers working in the staging tree but, this time, nobody in the top five of either list was creating staging patches. Indeed, Sergio Paracuellos is the first staging-focused developer in the left column, while no staging work features in the right column at all. The staging tree itself was busy enough, with 957 changes in 4.10, but that work was spread across 158 developers.

Work on 4.10 was supported by 218 employers that can be identified. The list of the most active employers looks pretty much like it usually does:

Most active 4.10 employers
By changesets
Intel                1,752  13.7%
(Unknown)            1,198   9.4%
Red Hat                907   7.1%
(None)                 765   6.0%
Samsung                545   4.3%
Linaro                 496   3.9%
SUSE                   471   3.7%
IBM                    381   3.0%
(Consultant)           337   2.6%
AMD                    316   2.5%
Google                 306   2.4%
Mellanox               297   2.3%
Renesas Electronics    236   1.8%
Texas Instruments      226   1.8%
Huawei Technologies    202   1.6%
Broadcom               199   1.6%
Oracle                 183   1.4%
ARM                    176   1.4%
Linutronix             154   1.2%
NXP Semiconductors     151   1.2%
By lines changed
Intel              176,549  20.4%
AMD                 74,965   8.7%
Samsung             57,529   6.6%
Red Hat             41,171   4.8%
(Unknown)           34,748   4.0%
Linaro              32,670   3.8%
SUSE                31,570   3.6%
(None)              28,002   3.2%
IBM                 26,238   3.0%
(Consultant)        25,744   3.0%
Solarflare Comm.    20,211   2.3%
MediaTek            15,979   1.8%
Cavium              15,812   1.8%
Broadcom            15,695   1.8%
BayLibre            14,597   1.7%
Mellanox            12,770   1.5%
NXP Semiconductors  11,792   1.4%
NVidia              11,279   1.3%
Texas Instruments   10,420   1.2%
Facebook             8,896   1.0%

Another way to look at the employer information is to see how many developers are associated with each company:

Companies with the most developers
Company              Devs  Pct
(Unknown)             349  20.5%
Intel                 182  10.7%
(None)                103   6.1%
Red Hat                96   5.6%
IBM                    66   3.9%
Google                 53   3.1%
Mellanox               42   2.5%
Linaro                 40   2.4%
Samsung                37   2.2%
SUSE                   33   1.9%
Texas Instruments      28   1.6%
AMD                    27   1.6%
Oracle                 26   1.5%
Code Aurora Forum      26   1.5%
Huawei Technologies    25   1.5%
NXP Semiconductors     22   1.3%
ARM                    21   1.2%
Broadcom               20   1.2%
Renesas Electronics    17   1.0%
Rockchip               15   0.9%

Here we see that nearly 11% of the developers who contributed to the 4.10 kernel were working for Intel. Over 20% were of unknown affiliation; they contributed 9.4% of the changes merged in this cycle.

Normal practice in these summaries is to look at the "most active employers" table above and conclude that (in this case) if all of the unknowns are working on their own time, then a maximum of just over 15% of the changes in this development cycle came from volunteers. The above table paints a slightly different picture; if, once again, the unknowns are all volunteers, then nearly 27% of the community is made up of volunteers. The difference between the numbers is almost certainly explained by the unsurprising observation that developers doing kernel work for their job will be able to spend more time on that work and, as a result, be more productive.

As of this writing, there are just over 7,500 changesets in the linux-next repository. Those changes are the beginning of what will be merged for 4.11; history suggests that this number is likely to grow significantly between now and the opening of the 4.11 merge window. Still, it seems clear that 4.11 is unlikely to set any new records for patch volume. For the definitive answer, look forward to the 4.11 summary article, to be published in 63-70 days.

Comments (1 posted)

Unscheduled maintenance for sched.h

By Jonathan Corbet
February 8, 2017
The kernel contains a large number of header files used to declare data structures and functions needed in more than one source file. Many are small and only used in a few places; others are large and frequently included. Header files have a tendency to build up over time since they often do not get as much attention as regular C source files. But there can be costs associated with bloated and neglected header files, as a current project to clean up one of the biggest ones shows.

The 0.01 kernel release contained a grand total of 31 header files, nine of which lived in include/linux. One of those, <linux/sched.h>, weighed in at all of 230 lines — the largest header file in that directory. Things have changed just a little bit since then. The upcoming 4.10 kernel contains 18,407 header files, just under 10,000 of which are intended for use outside of a specific subsystem. The 4.10 version of <linux/sched.h> is 3,674 lines, but that understates its true weight: it directly includes 50 other header files, many of which will have further includes of their own. This is not the 0.01 <linux/sched.h> anymore.

Ingo Molnar has decided that it is time to do something about this header file. A large header has its costs, especially when it is (by your editor's count) directly included into 2,500 other files in the kernel. An extra 1,000 lines of bloat expands into 2.5 million more lines of code that must be compiled in a (full) kernel build, slowing compilation times significantly. A large and complex header file is also difficult to maintain and difficult to change; there are too many subtle dependencies on it throughout the kernel.

How did this file get into this condition? As Molnar put it:

The main reason why it's so large is that since Linux 0.01 it had been the Rome of the kernel: all headers lead to it, due to almost every kernel subsystem having fields embedded in task_struct. sched.h has to know about the various structure definitions of various kernel subsystems - even if the scheduler never makes direct use of 90% of those fields.

Molnar's response is a petite 89-part patch set intended to disentangle the sched.h mess. It starts by splitting out many of the more esoteric scheduler interfaces that are not needed by most users of <linux/sched.h>. This header is often included by driver code, which typically needs a small subset of the available interfaces, but which has no use for CPU frequency management, CPU hotplugging, accounting, or many other scheduler details. Code that needs the more specialized interfaces can find a set of smaller header files under include/linux/sched, but, Molnar says, 90% of users have no need for those other files.

Beyond the split-up, the patch set cleans up the interfaces with a number of other "entangled", heavily-used header files so that each can be included separately. That eliminates the need to include those headers in sched.h. There was also a certain amount of historical cruft: header files that may have been needed at one time, but which were never removed from sched.h when that need went away.

The result is a leaner sched.h that, Molnar says, can save 30 seconds on an allyesconfig kernel build. There are some details to be taken care of, though, beyond fixing source files that need the interfaces that have been split out to their own files. Since sched.h included so many other files, code that included it could get away without including the others, even if it needed them. Kernel code is supposed to explicitly include every header it needs and not rely on secondary inclusions but, if the code compiles anyway, it is easy to overlook a missing #include line. Taking those inclusions out of sched.h meant fixing up code elsewhere in the kernel that stopped compiling.

After this work is done, the resulting patch set touches nearly 1,200 files; it is not a lightweight change, in other words. Molnar suggested that the patch set should be applied at the end of the merge window in the hope of minimizing the effects on other outstanding patches. He did not specify which merge window he was targeting; 4.11 might still be possible and might be as reasonable a choice as any. Most patch sets are expected to spend some time in linux-next for wider testing, but this set almost certainly cannot go there without creating a massive patch-conflict nightmare.

There are some changes that will need to be made before this work can be merged, though. Linus Torvalds liked the end result, but was not pleased with how the patch set is organized. The changes are mixed together in a way that makes the patches hard to review and which, as was seen in a couple of cases, makes it easy for mistakes to slip in.

He suggested that, instead, the series should start by splitting out parts of sched.h, but leaving things externally the same by including the split-out files back into sched.h. These changes could thus be made without changing code elsewhere in the kernel. After that, the back-includes could be removed, one by one, with the necessary fixes being applied elsewhere. The patches in this part of the series would consist of only #include changes and would, thus, be quick to review and verify. Molnar agreed to rework the patches along these lines, though he warned that this work "will increase the patch count by at least 50%". Making the patch set easier to review (and to bisect) will, hopefully, more than make up for the increased patch count.

If this work can be completed in a convincing way before the close of the merge window, it may well make sense to apply it right away, even though the combination of big, intrusive, and new normally suggests that it may be better to wait. Causing this work to sit out for another development cycle would force much of it to be redone, and the end result may not be any more "ready" in 4.12 than it would be for 4.11. Of course, once this patch set is merged and the final loose ends tied down, the work is not yet done; there are a number of other large and messy header files in the kernel tree. The next target for a split-up may be another huge header file present since the 0.01 release: <linux/mm.h>.

Comments (none posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.10-rc7 Feb 05
Greg KH Linux 4.9.8 Feb 04
Greg KH Linux 4.9.7 Feb 02
Greg KH Linux 4.4.47 Feb 04
Steven Rostedt 4.4.47-rt58 Feb 08
Greg KH Linux 4.4.46 Feb 02
Steven Rostedt 4.1.38-rt44 Feb 08
Greg KH Linux 3.18.48 Feb 08
Steven Rostedt 3.18.47-rt51 Feb 08
Steven Rostedt 3.12.70-rt93 Feb 08

Architecture-specific

Core kernel code

Device drivers

Anup Patel Broadcom SBA RAID support Feb 02
Mylène Josserand Add sun8i A33 audio driver Feb 02
Agustin Vega-Frias irqchip: qcom: Add IRQ combiner driver Feb 02
Chris Zhong Rockchip dw-mipi-dsi driver Feb 08
Steve Longerbeam i.MX Media Driver Feb 03
Andrey Smirnov i.MX7 PCI support Feb 07
Ramiro Oliveira Add support for Omnivision OV5647 Feb 03
Jan Glauber Cavium MMC driver Feb 06
Geert Uytterhoeven Add HD44780 Character LCD support Feb 06
Stanimir Varbanov Qualcomm video decoder/encoder driver Feb 07
Ramesh Shanmugasundaram Add V4L2 SDR (DRIF & MAX2175) driver Feb 07
lis8215@gmail.com Add the Allwinner A31/A31s PWM driver Feb 07
sean.wang@mediatek.com leds: add leds-mt6323 support on MT7623 SoC Feb 08
Vishwanathapura, Niranjana HFI Virtual Network Interface Controller (VNIC) Feb 07

Device driver infrastructure

Sakari Ailus ACPI graph support Feb 02
Christoph Hellwig automatic IRQ affinity for virtio V3 Feb 05
Jarkko Sakkinen in-kernel resource manager Feb 08

Documentation

Filesystems and block I/O

Memory management

Security-related

Djalal Harouni introduce Timgad LSM Feb 02
Tyler Hicks Improved seccomp logging Feb 03

Virtualization and containers

Marcelo Tosatti KVM CPU frequency change hypercalls Feb 02
Boris Ostrovsky PVH v2 support (domU) Feb 06

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Type-driven configuration management with Propellor

By Jonathan Corbet
February 6, 2017

linux.conf.au 2017
One often hears the "infrastructure as code" refrain when configuration-management systems are discussed. Normally, though, that phrase doesn't bring to mind an image of infrastructure as Haskell code. In his 2017 linux.conf.au talk, Joey Hess described his Propellor system and the interesting features that a Haskell implementation makes possible, with a special focus on how Haskell's type-checking system can be pressed into service to detect configuration errors.

What are, Hess asked, the best practices for configuration management these days? Configuration files should have a simple format, to begin with. A declarative approach is better than an imperative approach; one of the good things that systemd brought to the table was declarative configuration. It should be compositional, since we tend to configure systems by composing various components together. He suggested that these points should be kept in mind during the talk.

Take two well-known systems for comparison: Ansible and Puppet. They both use a simple file format, though he put "simple" in quotes when describing Ansible. This format is extended, in either case, with features like variables. Ansible adds conditionals and loops, while Puppet uses a separate language for control structures. The Ansible configuration file format is Turing-complete, while the Puppet language may not be.

In general, Turing-complete configuration files are not seen as being a good idea. That leads to the classic Turing tarpit situation where everything is possible but nothing of interest is easy. The sendmail.cf format and others have taught us that we don't want Turing-complete configuration languages; if you can write the Towers of Hanoi in your configuration language, he said, you're doing something wrong. A Turing-complete language is also not declarative, violating another one of the best practices listed above.

That said, there are some advantages that can come from a Turing-complete language. You can create embedded, domain-specific languages that make common tasks easy. And, central to this talk, you can use the language's type-checking system to help avoid the creation of bad configurations.

Introducing Propellor

Propellor is a system that he wrote, similar to Ansible or Puppet. It goes out to hosts and does things to them to make those hosts look the way they are supposed to be. But it's all done in Haskell. To drive that point home, Hess jumped right in by putting up a slide containing this code:

    main :: IO ()
    main = defaultMain hosts

    hosts :: [Host]
    hosts = [foo, bar]

    foo :: Host
    foo = host "foo.example.com" $ props
        & osDebian (Stable "jessie") X86_64
        & Apt.stdSourcesList
        & Apt.installed ["openssh-server"]

That is, he acknowledged, a lot of code, "but don't worry, there will be lots more later". The first two lines just indicate that this is a main program and can be ignored. The next two indicate that there are two hosts to be managed, foo and bar. The last group describes the host foo in more detail, giving its host name and a number of properties. It's an X86_64 server running Debian jessie, with the standard list of package sources set up and the OpenSSH server installed. In the above example, the double-colon (::) is a declaration with a type. So hosts is a list of Host, while foo is a singleton Host.

Properties are the basic building block of the configuration system; they are something you can say about a system. There are a number of other data types built into Propellor to describe host architectures, packages, users, groups, port numbers, and around 150 other attributes. Using so many types brings a number of advantages, starting with the fact that the compiler can catch and flag typos.

The real reason for using types, though, is that Haskell types let you prove things about programs. In this setting, it lets you prove things about configured systems and avoid a lot of problems.

Composition and types

Systems are built through the composition of multiple components and configurations, so system descriptions consist of composed properties. Properties can be composed in four different ways, each of which is expressed as a function that takes two properties and returns yet another property. The composition functions in Propellor are requires, before, fallback, and onChange. He showed the definition of a securefoo property that looked like:

    securefoo :: Property
    securefoo =
        Apt.installed ["foo"]
           `requires` File.containsLines "/etc/foo" ["secure=1"]
           `onChange` Service.restarted "foo"

Here, the Apt.installed property ensures that the foo package is installed. It is then composed, using the infix requires function, with a property requiring that the secure=1 line appears in /etc/foo. That is then composed with another property causing foo to be restarted when a change is made.

This kind of composition is powerful, he said, but all compositions are the same; there's nothing here yet that is helping to prevent problems. It would be nice to do better, and make bad combinations of properties be a type error. To that end, he started adding types, the first of which was RevertableProperty, indicating a property that can be reverted. The installation of a package is revertable, while the architecture of the system is not, for example. Consider the following:

    bar :: RevertableProperty
    bar = Apt.installed ["bar"] <!> Apt.removed ["bar"]

This defines bar as a revertable property. Normally, it directs that the bar package should be installed:

    foo :: Host
    foo = host "foo.example.com" $ props
        & osDebian (Stable "jessie") X86_64
        & Apt.stdSourcesList
        & bar

If, however, the desire is to ensure that the bar package is absent from the system known as foo, one could write instead:

    foo :: Host
    foo = host "foo.example.com" $ props
        & osDebian (Stable "jessie") X86_64
        & Apt.stdSourcesList
        ! bar

That will cause bar to be removed from the target system, should it be present. It would be a mistake, though, to try to revert the osDebian property:

    foo :: Host
    foo = host "foo.example.com" $ props
        ! osDebian (Stable "jessie") X86_64
        & Apt.stdSourcesList
        ! bar

The above code would cause the compiler to complain since that property is not revertable. Composition can also be type-checked in this way; if a revertable property is created with a requires composition, and one of the component properties is not revertable, a type error will result. This mechanism isn't perfect, he said, but it's good enough to model the system without trying to pin down every detail.

Containers

Propellor supports four different container types: Docker, systemd, chroot, and FreeBSD jails; it can create images for any of those types. Creating a container would be done with code like this:

    webserver :: Systemd.Container
    webserver = Systemd.debContainer "webserver" $ props
        & osDebian Testing X86_64
        & Apt.installedRunning "apache2"
        & Systemd.bind "/var/www"

Here, webserver is defined as being a systemd container and given various properties consistent with running a web server. A Systemd.Container is, in essence, another way of composing properties describing a desired container. This container could then be built and deployed on host foo with:

    foo :: Host
    foo = host "foo.example.com" $ props
        & Systemd.nspawned webserver

"Just like that" you have a web-serving container running on foo. If the configuration file is later edited, Propellor will reach inside this container and make any needed changes; it doesn't need to rebuild the container from scratch.

More types

An early addition to Propellor was domain name system (DNS) configuration. The natural thing to do was to make an IP address be a property; each host has its IP address attached to it as one of its many properties. The configuration for the DNS server can then just look through the list of hosts using type introspection, find all the IP-address properties, and construct the DNS zone file accordingly. This works by adding an Info type to properties that need to provide information about a host.

That leads to a composition problem when containers are being used. Properties can add Info to both hosts and containers. When the two are composed, the Info must be propagated accordingly. Making that work required splitting the Property type into two variants, one that carries Info and one that does not. That was "the gateway drug" that led to further refinements of the Property type.

Once FreeBSD support was added, it became clear that some refinements were indeed needed. One wouldn't want to compose Debian-specific properties into a FreeBSD system, after all. So he added a "DebianLike" property type, along with "FreeBSD", "UnixLike", and more. Even more complexity can happen; for example:

    Property (HasInfo + DebianLike)

describes a property that carries Info and is applicable to Debian systems. The plus sign above indicates the addition of types — which is the sort of thing that Haskell developers apparently like to do. With this structure in place, the type checker can catch a new class of errors, such as using a FreeBSD-specific property on a Debian host.

Of course, it is nicer to make things just work than it is to flag errors. The system as described so far can catch an attempt to use the wrong package manager for a given host, but cannot yet express concepts like "just install this package" in a host-independent way. That sort of abstraction can be created with code like:

    Apt.upgraded :: Property DebianLike
    Pkg.upgraded :: Property FreeBSD

    upgraded :: Property (DebianLike + FreeBSD)
    upgraded = Apt.upgraded `pickOS` Pkg.upgraded

The result is a property (upgraded) that is both DebianLike and FreeBSD. The use of the pickOS composition function here allows that property to ensure that a package is current regardless of the target operating system.

Ongoing work

For a final example, involving code that isn't yet in the stable Propellor release, Hess delved into the detection of port-number conflicts. He often runs Tor bridges on his hosts, if he has the bandwidth available and port 443 (HTTPS) is not being used. If he later decides he needs a web server running on one of those hosts, he could end up with a runtime conflict over that port. Avoiding such conflicts is part of why Propellor exists in the first place, so some sort of solution is needed.

That solution looks like this:

    webserver :: Property (UsingPort 80 + UsingPort 443 + DebianLike)
    torBridge :: Property (UsingPort 443 + DebianLike)

A bit of programming at the Haskell type level ensures that an attempt to combine two properties using the same port will fail (while combining two DebianLike properties is fine). It works, but it has led to a situation where the type and the configuration need to be kept in sync. It could maybe be fixed by automatically generating the web-server configuration from the type information, but he hasn't gotten that far. There could also be problems with virtual hosts; that seems like it could get "really hairy" and he hasn't gotten there yet.

He concluded by putting up a pie chart displaying the number of errors avoided by each of the techniques described above — before admitting that it was all made up. But the type checking has helped him to avoid a lot of mistakes; it is "a big win". (Readers wanting all the details, including a fair amount of discussion and bonus material in the question-and-answer period, may want to watch the video of the talk).

[Your editor would like to thank linux.conf.au and the Linux Foundation for assisting with his travel to the event.]

Comments (11 posted)

Brief items

Distribution quotes of the week

This means a small topic like medicine and live science which makes a small fraction of Debian usage and is honestly speaking in the end irrelevant for the overall importance of Debian in general was able to gather more than 1% of the active Debian developers.
Andreas Tille (Thanks to Paul Wise)

On 02/01/2017 08:53 AM, Ian Stakenvicius wrote:
>
> Pre-emptive strike is all. We all know systemd gets blamed for
> everything. :D

My bathtub drains too fast, making it hard to take a bath instead of a shower.

Freaking systemd.. ;)

Austin English

Comments (none posted)

NethServer 7 Final released

NethServer, a CentOS-based distribution for system administrators, has released version 7. NethServer is now able to act as a Samba Active Directory Controller, Nextcloud 10 is included, the new firewall features deep packet inspection, and much more.

Comments (none posted)

Distribution News

Debian GNU/Linux

Debian contributors survey - preliminary analysis available

A preliminary analysis of the results of the Debian contributors survey is now available. The preliminary analysis "gives a statistical overview of the entire response set, as well as drilling down into the specific sub-group of (uploading) Debian Developers."

Full Story (comments: none)

Bits from the Release Team: stretch is frozen

Debian 9.0 "stretch" has entered the final phase of development and is now frozen. "Britney will no longer migrate packages automatically. All migrations will require an explicit unblock from the Release Team."

Full Story (comments: none)

BSP (Bug Squashing Party) in Japan

There will be bug squashing parties in Japan: in Tokyo on February 11 and in Kyoto on February 12.

Full Story (comments: none)

openSUSE

Statement Regarding the openSUSE Board Election

The openSUSE Board election remains on hold due to technical difficulties. "We have wanted to replace connect.opensuse.org for some while, but despite this desire no one has stepped up with a viable alternative. As we need to elect a new board with some urgency, we do not have the luxury of time to continue waiting. Therefore Martin Pluskal will be urgently investigating the use of an alternative service for this particular election, such as SurveyMonkey. As only openSUSE Members can vote, this will likely require using the email addresses in connect.opensuse.org to email members and direct them with an individual, one-time-use, link to the poll."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Catanzaro: An Update on WebKit Security Updates

Michael Catanzaro looks at how distributors have improved (or not) their security support for the WebKit browser engine in the last year. "So results are clearly mixed. Some distros are clearly doing well, and others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates."

Comments (18 posted)

Arch Linux: The Simple Linux (Linux.com)

Carla Schroder reviews Arch Linux over at Linux.com. "Arch's being simpler means more work for you. Installation is a lengthy manual process, and you'll have a lot of post-installation chores such as creating a non-root user, setting up networking, configuring software repositories, configuring mountpoints, and installing whatever software you want. The main reason I see for using Arch is to have more control over your Linux system than other distros give you. You can use Arch for anything you want, just like any other Linux: server, laptop, desktop. I had Arch on an ancient Thinkpad that was left behind by modern Linux distributions."

Comments (none posted)

Try Raspberry Pi's PIXEL OS on your PC (Opensource.com)

Raspberry Pi Community Manager Ben Nuttall introduces PIXEL (Pi Improved Xwindows Environment, Lightweight), a desktop environment for Raspbian, now also available for x86 PCs. "We released Raspberry Pi's OS for PCs to remove the barrier to entry for people looking to learn computing. This release is even cheaper than buying a Raspberry Pi because it is free and you can use it on your existing computer. PIXEL is the Linux desktop we've always wanted, and we want it to be available to everyone."

Comments (none posted)

Kali Linux on the Raspberry Pi: 3, 2, 1, and Zero (ZDNet)

Over at ZDNet, Jamie Watson installs Kali Linux on a variety of Raspberry Pi devices. "The installation images are available on the Offensive Security ARM Images Downloads area, where you will find custom images not only for the Raspberry Pi, but for a variety of other ARM SBC systems (Beaglebone, BananaPi, etc.) and even ARM-powered Chromebooks from HP, Samsung and Acer. The really exciting news for me, though, is that there are images not only for the Pi 2/3, but also for the original Pi."

Comments (none posted)

Page editor: Rebecca Sobol

Development

User-space networking with Snabb

By Jonathan Corbet
February 8, 2017

linux.conf.au 2017
High-speed networking was once, according to Andy Wingo in his 2017 linux.conf.au presentation, the domain of "the silicon people". But that situation is changing, and now any hacker can work with networking at the highest speeds. There is one little catch: one must dispense with the kernel's network stack and do the work in user space. Happily, not all of the solutions in this area are proprietary; he was there to talk about the Snabb networking toolkit and what can be done with it.

Imagine, he said, that you are running an Internet service provider (ISP) in the year 2000 — the distant past. To set up your business you need to arrange for bandwidth, core routers, and some DSL hardware, and the job is done. This is an idealized picture, he acknowledged, but the fact remains that the core business in that era was simply providing access to the Internet.

Fast forward to 2005, and an aspiring ISP must do all of the above, plus it must buy some boxes to provide voice-over-IP service. By 2010, that ISP must also handle television, video on demand, and protection against denial-of-service attacks, and cope with an increasingly constrained IPv4 address space. But, over this period of time where the job of running an ISP has gotten harder, the basic subscriber fee has remained about the same.

That is the trend he has been seeing in the ISP area: providers have to do more with the same budget. "Doing more" means putting a bunch of expensive boxes in their racks; each function requires a specialized box with a high price. This problem doesn't just apply to ISPs; it pops up in many other networking environments. It seems like there should be a better way to handle this problem.

While all this was happening, he said, there was another trend: commodity hardware caught up with the fancy networking boxes. It's possible to buy a dual-socket, Xeon-based server with twelve cores or more per socket; this machine can then be equipped with many high-speed PCIe network interfaces. The result is hardware that can handle data rates of up to 200Gb/second — if each core/interface pair can handle up to 15 million packets per second. That gives a processing-time budget of about 70ns per packet.
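That budget follows directly from the numbers above; a quick back-of-the-envelope sketch in C (the function name is ours, for illustration, not anything from Snabb):

```c
#include <assert.h>

/* The per-packet time budget is just the inverse of the packet rate:
 * at 15 million packets per second per core/interface pair, each
 * packet gets roughly 67ns of processing time. */
static double per_packet_budget_ns(double packets_per_second)
{
    return 1e9 / packets_per_second;
}
```

At 15 million packets per second this yields about 66.7ns, hence the "about 70ns" figure.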

What is the software on such a system going to look like? The conventional wisdom is that Linux is taking over, so one would expect an ISP's racks to be full of Linux servers. That turns out to be true, but not in the way that one might expect. The Linux kernel is not ready to handle rates of 10-15 million packets per second; the networking stack is too heavy. Despite its weight, the networking stack does not normally do everything that is needed, so there must be a user-space application running as well. The split between kernel and user space adds another barrier and slows things down further.

User-space networking

The way to actually reach the desired level of performance, he said, is to remove the kernel from the picture and do the entire networking job in user space. A simple user-space program can map the interface's control registers into its address space, set up a ring buffer for transmission and reception of packets, and do whatever simple processing is required. At the end you have a user-space network driver. There are a number of toolkits to help with writing this kind of driver. One of them is Snabb, which was started in 2012. Others include DPDK, also started in 2012, and VPP, which got going in 2016.

Network operators have been trying to regain some control over the systems they have to buy to provide services. For example, Deutsche Telekom's TeraStream architecture is intended to move network functions into software rather than keeping those functions in separate physical machines. Instead of buying a box to provide a certain function, they want to buy a virtual machine that can be installed on a commodity server.

These functions can be implemented with a system like Snabb. The idea behind Snabb, he said, is "rewritable software" — as in "I could rewrite that in a weekend". The hard part is finding elegant hacks; the implementation should then be easy.

A Snabb program consists of a graph of apps, connected by directional links. The basic processing cycle is called a "breath"; during each breath, a batch of packets will be processed. The whole thing is written in the Lua language. To illustrate how it works, he put up a slide with a simple program:

    local Intel82599 = require("apps.intel.intel_app").Intel82599
    local PcapFilter = require("apps.packet_filter.pcap_filter").PcapFilter

    local c = config.new()
    config.app(c, "nic", Intel82599, {pciaddr="82:00.0"})
    config.app(c, "filter", PcapFilter, {filter="tcp port 80"})

    config.link(c, "nic.tx -> filter.input")
    config.link(c, "filter.output -> nic.rx")

    engine.configure(c)
    while true do engine.breathe() end

This program starts by importing two modules that will implement the apps; the Intel82599 module drives the network interface, while PcapFilter allows the expression of packet filters using the tcpdump language. Two apps are instantiated with configurations telling them what to do; the nic app is given the PCI address of the interface, and filter is told to accept packets to or from TCP port 80. The two links route packets from the interface, through the filter, and back out the interface again.

The final line actually runs this program. Each breath consists of two phases. In the first phase, each app "inhales" the packets that are available to it, driven by a call to the app's pull() function; a pull() function will typically bring in a maximum of 100 packets in a single invocation. In the second phase, each app is directed, via a call to its push() function, to process those packets and push them onto its outbound links.
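The two-phase cycle can be sketched structurally in C (Snabb's real engine is written in Lua; the struct and function names here are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* An app exposes two optional callbacks: pull() inhales a batch of
 * packets (typically up to 100), push() processes them and emits them
 * onto the app's outbound links. */
struct app {
    void (*pull)(struct app *self);
    void (*push)(struct app *self);
};

/* One breath: every app inhales, then every app processes and exhales. */
static void breathe(struct app **apps, int napps)
{
    for (int i = 0; i < napps; i++)     /* phase 1: inhale */
        if (apps[i]->pull)
            apps[i]->pull(apps[i]);
    for (int i = 0; i < napps; i++)     /* phase 2: process and push */
        if (apps[i]->push)
            apps[i]->push(apps[i]);
}

/* Tiny demo apps that just count how often each phase runs. */
static int pulls, pushes;
static void demo_pull(struct app *self) { (void)self; pulls++; }
static void demo_push(struct app *self) { (void)self; pushes++; }
```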

The definition of a packet in Snabb might be surprising to people who have worked on networking in the kernel, he said; it looks like this:

    struct packet {
        uint16_t length;
        unsigned char data[10*1024];
    };

There is none of the overhead that accompanies the kernel's SKB structure. "This must be a relief", he said. No attempt is made to keep packets on the device; Snabb relies on the device transferring the packet and getting it into the L3 cache. That lets it avoid a lot of complexity around tracking packets in different locations. It does put some headroom at the beginning of the packet so that headers can be prepended without copying if needed. A link is not much more complicated; it is a simple circular buffer. And that, he said, is all there is.

Design principles

Snabb was built around a set of three simple design principles:

  • Simple > Complex
  • Small > Large
  • Commodity > Proprietary

With regard to "simple", Snabb is built around the ability to compose network functions from small parts; the apps can be independently developed, and they all connect together easily with links. Snabb can be thought of as an implementation of the Unix pipeline metaphor. The simple packet and link data structures are also an expression of this design goal. One could make these structures more complicated in an attempt to optimize things, and it might even lead to better results on some benchmarks, but there would be a cost to pay in the ability to understand and change the system as a whole.

For small: the original Snabb implementation had a code budget. Snabb as a whole was meant to be less than 10,000 lines of code and build in less than one minute. These constraints, it was hoped, would lead to problems being solved in a creative way. They got a lot of help from their use of LuaJIT, which makes it easy to write code at a high level of abstraction that still performs well. The Snabb project also worked to minimize its dependencies, and those that are needed (such as LuaJIT itself) are included with the source and must fit within the build-time constraint.

To stay small, Snabb also avoids depending on big projects. Rather than use the DPDK drivers, Snabb's developers have written their own. The DPDK drivers have some appeal; there are a lot of them, and the project has a great deal of vendor participation. But Snabb wants to own the entire data plane, including the drivers, so that things can be changed at any point.

Snabb's drivers are typically less than 1,000 lines of code, much lighter than a typical, abstraction-heavy vendor driver. Writing the driver for the Intel 82599 was easy, since there is a good data sheet available. They refused to write drivers for the Mellanox ConnectX-4 interfaces until Mellanox provided an open data sheet — which Mellanox eventually did in response to customers wanting Snabb support.

The approach to drivers shows Snabb's adherence to the "commodity" principle. The project seeks simple drivers that are easily interchangeable; it is preferable to do work in the CPU rather than in the interface whenever possible. TCP checksum offloading comes up every couple of years on LWN, he said, but they don't bother with it; they write a simple checksum routine in Lua and move on. When their offload features are unused, network interfaces become commodities.
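Snabb's routine is written in Lua, but the algorithm is the standard ones'-complement Internet checksum (RFC 1071); the same idea sketched in C:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Classic RFC 1071 Internet checksum: sum the data as big-endian
 * 16-bit words, fold the carries back in, complement the result. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)                         /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;

    while (sum >> 16)                /* fold carries into the low word */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The point of the "commodity" principle is exactly that a routine this small, run on the CPU, is preferable to depending on per-vendor offload features.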

Present and future

The project has gotten patches from 27 authors since 2012, and it has been deployed in "a dozen sites or so". Some of the biggest programs so far are an NFV virtual switch, an lwAFTR IPv6 transition router, and a virtual private network at SWITCH.ch. New work includes control-plane integration, support for running as a virtualized guest, better multi-process support, and more.

Igalia (Wingo's employer) developed the lwAFTR router mentioned above. This router is the central component of a "lightweight 4-over-6" transitional system. It can be thought of as a big network-address translation (NAT) box. If an ISP deploys a box like this, it will be carrying all of that ISP's IPv4 traffic, which is "a bit of stress". The goal was to carry 10Gb/second, using two interfaces.

They were able to reach the speed goal, partly because LuaJIT does a good job of making things fast. The "graph of apps" architecture plays well to the LuaJIT optimizer, since it causes a program to be composed of a number of small loops. LuaJIT's trace optimization helps to optimize those loops further. Using LuaJIT's FFI mechanisms to define data structures using C syntax provides exact control over how things are laid out, and the result is easily accessible from within Lua. It is important to avoid data-dependency chains, like those found in linked lists or hash tables; even a single cache miss will eat a sizable chunk of the processing-time budget.

The project's latency goals were met by avoiding memory allocations (another thing that LuaJIT's optimizer helps with) and avoiding system calls whenever possible. Running on reserved CPUs eliminates preemption, which would otherwise be another source of latency.

He concluded by noting that scalability work is ongoing. 2017 is "the year of 100G in production" with Snabb. To get there, Snabb will need to support multiple processes servicing the same interface. The interface cards themselves will have to get a little better to hit that goal. There is work toward supporting horizontal scaling via the BGP and ECMP protocols. Many other projects are underway as well.

The video of this talk is available on YouTube.

[Your editor would like to thank linux.conf.au and the Linux Foundation for assisting with his travel to the event.]

Comments (5 posted)

Brief items

Development quote of the week

I’m a big believer in Conway’s Law, but not in the sense that I’ve heard most people talk about it. I say “most people”, like I’m the lone heretic of some secret cabal that convenes once a month to discuss a jokey fifty year old observation about software architecture, I get that, but for now just play along. Maybe I am? If I am, and I’m not saying one way or another, between you and me we’d have an amazing secret handshake.
Mike Hoye

Comments (none posted)

GNU C Library 2.25 released

Version 2.25 of the GNU C Library has been released. This release contains the long-awaited support for the getrandom() system call and a long list of other features; click below for the full announcement.

Full Story (comments: none)

Git v2.12.0-rc0

An early preview release of Git v2.12.0-rc0 is available for testing. "It is comprised of 441 non-merge commits since v2.11.0, contributed by 63 people, 19 of which are new faces."

Full Story (comments: none)

Kodi 17.0

Kodi 17.0 (Krypton) has been released. Kodi is a software media center for playing videos, music, pictures, games, and more. This release features a new skin, an updated video engine, improvements to the music library, numerous improvements to Live TV and PVR functionality, and more.

Comments (none posted)

Announcing Rust 1.15

The Rust team has released version 1.15 of the Rust programming language, which adds a custom derive feature. "These kinds of libraries are extremely powerful, but rely on custom derive for ergonomics. While these libraries worked on Rust stable previously, they were not as nice to use, so much so that we often heard from users “I only use nightly because of Serde and Diesel.” The use of custom derive is one of the most widely used nightly-only features. As such, RFC 1681 was opened in July of last year to support this use-case. The RFC was merged in August, underwent a lot of development and testing, and now reaches stable today!"

Comments (24 posted)

GNU Wget 1.19 released

GNU wget 1.19 has been released. "It comes with major improvements for Metalink, IDNA2008 for international domain names, an option to call external tools for fetching user/password, several bugfixes and improvements."

Full Story (comments: none)

Newsletters and articles

Development newsletters

Comments (none posted)

Sandstorm is returning to its community roots

Kenton Varda reports that Sandstorm, as a company, is no more, but community development lives on. LWN covered the Sandstorm personal cloud platform in June 2014.

Many people also know that Sandstorm is a for-profit startup, with a business model centered on charging for enterprise-oriented features, such as LDAP and SAML single-sign-on integration, organizational access control policies, and the like. This product was called “Sandstorm for Work”; it was still open source, but official builds hid the features behind a paywall. Additionally, we planned eventually to release a scalable version of Sandstorm for big enterprise users, based on the same tech that powers Sandstorm Oasis, our managed hosting service.

As an open source project, Sandstorm has been successful: We have a thriving community of contributors, many developers building and packaging apps, and thousands of self-hosted servers running in the wild. This will continue.

However, our business has not succeeded. To date, almost no one has purchased Sandstorm for Work, despite hundreds of trials and lots of interest expressed. Only a tiny fraction of Sandstorm Oasis users choose to pay for the service – enough to cover costs, but not much more.

Comments (3 posted)

Page editor: Rebecca Sobol

Announcements

Brief items

The GNOME Foundation gets a new director

The GNOME Foundation's long search for a new executive director has finally come to an end: Neil McGovern has taken the job. "McGovern is an experienced leader in Free Software projects and is best known for his role as Debian Project Leader from 2014-15. He has been on the Boards of numerous organizations, including Software in the Public Interest, Inc. and the Open Rights Group."

Full Story (comments: 1)

RethinkDB source relicensed, donated to the Linux Foundation

The Cloud Native Computing Foundation has announced that it has purchased the rights to the RethinkDB NoSQL database and contributed it to the Linux Foundation. In the process, the code was relicensed from the Affero GPLv3 to the Apache license. "RethinkDB is an open source, NoSQL, distributed document-oriented database that is in production use today by hundreds of technology startups, consulting firms and Fortune 500 companies, including NASA, GM, Jive, Platzi, the U.S. Department of Defense, Distractify and Matters Media. Some of Silicon Valley’s top firms invested $12.2 million over more than eight years in the RethinkDB company to build a state-of-the-art database system, but were unsuccessful in creating a sustainable business, and it shut down in October 2016."

Comments (37 posted)

Articles of interest

Free Software Supporter Issue 106, February 2017

The Free Software Foundation's newsletter for February covers "U.S. Copyright Office should end the broken DMCA anti-circumvention exemptions process", FSF Europe's report on its experience at the 33rd Chaos Communication Congress, a summary report from the Internet Governance Forum (December 2016 in Jalisco, Mexico), Free FPGA, Free Software Directory meeting recaps, and several other topics.

Full Story (comments: none)

What to know before jumping into a career as an open source lawyer (opensource.com)

Luis Villa talks about the open-source lawyer career path on opensource.com. "First, going to law school is a gamble. Recent American law school graduates must fight fiercely for one of the few jobs that can cover their massive debt, and roughly 50% fail the California bar. And, the open source gamble is bigger, because the opportunities are even fewer."

Comments (11 posted)

Calls for Presentations

Postgres Vision 2017 Call for Papers

Postgres Vision will take place June 26-29 in Boston, MA. The call for papers deadline is February 24.

Full Story (comments: none)

Power Management and Scheduling in the Linux Kernel (OSPM-summit)

OSPM-summit will take place April 3-4 in Pisa, Italy. The deadline for submitting topics/presentations is February 26.

Full Story (comments: none)

Swiss PGDay 2017 - Call for Speakers has opened

Swiss PGDay will take place June 30 in Rapperswil, Switzerland. The call for speakers deadline is April 14.

Full Story (comments: none)

DebConf17: Call for Proposals

The DebConf Content team has announced the call for proposals for DebConf17, which will take place August 6-12 in Montreal, Canada. The deadline for proposals is June 4.

Comments (none posted)

CFP Deadlines: February 9, 2017 to April 10, 2017

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline | Event Dates | Event | Location
February 12 | June 9-June 10 | Hong Kong Open Source Conference 2017 | Hong Kong, Hong Kong
February 18 | March 18 | Open Source Days Copenhagen | Copenhagen, Denmark
February 24 | June 26-June 29 | Postgres Vision | Boston, MA, USA
February 26 | April 3-April 4 | Power Management and Scheduling in the Linux Kernel Summit | Pisa, Italy
February 27 | April 6-April 8 | Netdev 2.1 | Montreal, Canada
February 28 | May 18-May 20 | Linux Audio Conference | Saint-Etienne, France
February 28 | May 2-May 4 | samba eXPerience 2017 | Goettingen, Germany
March 1 | May 6-May 7 | LinuxFest Northwest | Bellingham, WA, USA
March 4 | May 31-June 2 | Open Source Summit Japan | Tokyo, Japan
March 6 | June 18-June 23 | The Perl Conference | Washington, DC, USA
March 7 | August 23-August 25 | JupyterCon | New York, NY, USA
March 12 | April 26 | foss-north | Gothenburg, Sweden
March 15 | May 13-May 14 | Open Source Conference Albania 2017 | Tirana, Albania
March 18 | June 19-June 20 | LinuxCon + ContainerCon + CloudOpen China | Beijing, China
March 20 | May 4-May 6 | Linuxwochen Wien 2017 | Wien, Austria
March 27 | July 10-July 16 | SciPy 2017 | Austin, TX, USA
March 28 | October 23-October 24 | All Things Open | Raleigh, NC, USA
March 31 | June 26-June 28 | Deutsche Openstack Tage 2017 | München, Germany
April 1 | April 22 | 16. Augsburger Linux-Infotag 2017 | Augsburg, Germany
April 2 | August 18-August 20 | State of the Map | Aizuwakamatsu, Fukushima, Japan

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

ACLU Massachusetts Technology for Liberty Director Kade Crockford at LibrePlanet 2017

The Free Software Foundation has announced that Kade Crockford will be a keynote speaker at LibrePlanet (March 25-26 in Cambridge, MA). "Kade Crockford is the Director of the Technology for Liberty Program at the ACLU of Massachusetts. Kade works to protect and expand core First and Fourth Amendment rights and civil liberties in the digital 21st century, focusing on how systems of surveillance and control impact not just society in general but their primary targets — people of color, Muslims, immigrants, and dissidents."

Comments (none posted)

Netdev 2.1 Location and Hotel

The Netdev team has announced the conference location, Le Westin Montréal, in Montréal, Quebec, Canada. "Le Westin will also be the conference hotel. We have a special arrangement for all attendees to get special rates at the hotel. We would like to encourage attendees to book at this hotel." Netdev 2.1 takes place April 6-8.

Full Story (comments: none)

Netdev 2.1 seeking netdev conferences reporter(s)

The Netdev team is seeking writers to cover the Netdev 2.1 conference, which will take place April 6-8 in Montreal, Canada.

Full Story (comments: none)

Events: February 9, 2017 to April 10, 2017

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
February 7-February 9 | AnacondaCON | Austin, TX, USA
February 14-February 16 | Open Source Leadership Summit | Lake Tahoe, CA, USA
February 15-February 16 | Prague PostgreSQL Developer Day 2017 | Prague, Czech Republic
February 17 | Swiss Python Summit | Rapperswil, Switzerland
February 18-February 19 | PyCaribbean | Bayamón, Puerto Rico, USA
February 20-February 24 | OpenStack Project Teams Gathering | Atlanta, GA, USA
February 21-February 23 | Embedded Linux Conference | Portland, OR, USA
February 21-February 23 | OpenIoT Summit | Portland, OR, USA
March 2-March 3 | PGConf India 2017 | Bengaluru, India
March 2-March 5 | Southern California Linux Expo | Pasadena, CA, USA
March 6-March 10 | Linaro Connect | Budapest, Hungary
March 7 | Icinga Camp Berlin 2017 | Berlin, Germany
March 10-March 12 | conf.kde.in 2017 | Guwahati, Assam, India
March 11-March 12 | Chemnitzer Linux-Tage | Chemnitz, Germany
March 16-March 17 | IoT Summit | Santa Clara, CA, USA
March 17-March 19 | FOSS Asia | Singapore, Singapore
March 17-March 19 | MiniDebConf Curitiba 2017 | Curitiba, Brazil
March 18 | Open Source Days Copenhagen | Copenhagen, Denmark
March 18-March 19 | curl up - curl meeting 2017 | Nuremberg, Germany
March 20-March 21 | Linux Storage, Filesystem & Memory Management Summit | Cambridge, MA, USA
March 22-March 23 | Vault | Cambridge, MA, USA
March 25-March 26 | LibrePlanet 2017 | Cambridge, MA, USA
March 28-March 31 | PGConf US 2017 | Jersey City, NJ, USA
April 3-April 4 | Power Management and Scheduling in the Linux Kernel Summit | Pisa, Italy
April 3-April 6 | ‹Programming› 2017 | Brussels, Belgium
April 3-April 6 | Open Networking Summit | Santa Clara, CA, USA
April 3-April 7 | DjangoCon Europe | Florence, Italy
April 5-April 6 | Dataworks Summit | Munich, Germany
April 6-April 8 | Netdev 2.1 | Montreal, Canada

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds