
LWN.net Weekly Edition for November 27, 2013

Fish shell 2.1

By Nathan Willis
November 27, 2013

Version 2.1 of the fish shell has been released, complete with a string of new features and fixes sure to hook users.

We first looked at fish all the way back in 2005. In keeping with longstanding tradition for shell-naming, fish is an abbreviation for "friendly interactive shell." In the early days, fish's friendliness was visible through features like automatic syntax highlighting, feature-rich tab-completion behavior, and context-aware help. In the years since, fish has clearly influenced other shells, as older projects like Bash have added and enhanced their own takes on some of those features.

[The fish shell]

Naturally, fish itself has continued to evolve over the same time period, too. The project still advertises its friendlier implementations of well-known shell features—among them syntax highlighting and tab completion, plus command history, prompts, and even common built-in commands like cd. In addition, there are unique features like a web-based configuration tool (which runs on a local Python-powered HTTP server).

The fish source code is hosted at GitHub, while the project's web site offers binary packages for a wide range of Linux distributions, Windows, and Mac OS X. Version 2.1.0, the latest release, arrived on October 28. The release notes page provides an overview of the changes in 2.1 and a handful of previous releases, though not for the entire history of the project.

Less is more

Many shells offer tab-completion for commands and filenames, so that users only need to type enough letters to uniquely identify what follows—and, when there are multiple possibilities, the shell usually writes them to the terminal window to help. Fish takes this concept considerably further. First, its tab-completion feature expands not only commands and filenames, but usernames, environment variables, job IDs, and process names. The filename-completion also supports strings with wildcard characters; so typing "b??t" would correctly bring up "bait" as a tab-completion match.

Second, fish includes several command-specific completions, allowing manpage completion for man and whatis, Makefile target completion for make, mount-point completion for mount, package-name completion for apt-get, rpm, and yum, hostname completion for ssh, and username completion for su. For other programs, fish offers tab-completion for each program's command-line switches and options. Each of the program-specific completions is tailor-made for the specific command; for example, ssh hostname completion draws on the list of hosts in the user's known_hosts file. It is also possible to write custom completion options for any command.

The 2.1 release adds another factor to fish's tab-completion effort: fuzzy matching of filenames and commands. The shell first attempts to find a prefix match (that is, "fo" matches "foo"), but if it does not find one it will look for substring matches in the middle and at the end of filenames, and, as a last resort, it will look for true fuzzy matches. Thus, typing "far" could match "foobar," if there are no better matches.
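The prefix-then-substring-then-fuzzy fallback order lends itself to a short sketch. This is not fish's actual implementation (fish's matcher is written in C++ and is more nuanced); it is a minimal Python illustration of the priority order, with the helper names being my own:

```python
def complete(query, candidates):
    """Return completion matches, preferring prefix matches, then substring
    matches, then a crude subsequence-style "fuzzy" match as a last resort."""
    def is_subsequence(q, s):
        it = iter(s)
        # `ch in it` consumes the iterator, so characters must appear in order
        return all(ch in it for ch in q)

    for matches in ([c for c in candidates if c.startswith(query)],
                    [c for c in candidates if query in c],
                    [c for c in candidates if is_subsequence(query, c)]):
        if matches:
            return matches
    return []

print(complete("fo", ["foo", "befog"]))   # prefix match wins: ['foo']
print(complete("far", ["foobar"]))        # fuzzy fallback: ['foobar']
```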

On the other hand, tab completion requires the user to actually hit the tab key; those who find even that effort to be too strenuous will be relieved that fish also makes a best-guess attempt at predicting what the user will type next, in the form of autosuggestions that appear after the text cursor while the user is typing. To accept an autosuggestion, one only needs to hit the right-arrow key on the keyboard. The upshot is predictive typing not unlike the text-prediction on mobile phones or in web browser location bars, and like those autosuggestions, fish provides suggestions that are weighted to the most-recently-typed input, rather than simple string matching. Furthermore, while tab completion works one token at a time, fish's autosuggestions can match entire commands, and are thus—potentially—faster.

Fish also features a number of other expansions, such as numeric ranges (e.g., [1..9]), sets (e.g., {a,b,d}), and process IDs (using the percent sign %). The PID expansion will match process names and expand them to the appropriate PID, so typing kill %jav will match a running java process, allowing the user to kill it without looking up the PID first. A change in fish 2.1 is that % by itself expands to the last command backgrounded, so fg % will put the last backgrounded process back into the foreground.

Going off script

Fish is scriptable, and in most cases uses familiar syntax, but here too it adds its own twist on the feature. There are some special environment variables, the most notable of which is status, which stores the exit status of the last process. In addition to 0 for success and 1 for an error, fish will attempt to provide more information with certain specific values. A status value of 126, for example, means that the last command exited with an error because the filename provided was not executable; 127 means that no matching command or function was found. A process that exits with a signal will set status to 128 plus N, where N is the signal number. In these cases, of course, the value of the status variable is treated like a string, which could trip up shell scripts if not handled properly.
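The 126 and 127 conventions are shared with POSIX shells, so they are easy to observe outside of fish as well. A quick sketch using /bin/sh and a throwaway non-executable file (the file and the nonexistent command name are invented for the demonstration):

```python
import os
import subprocess
import tempfile

# Create a regular file with no executable bit set
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("just text, not a program\n")
    not_executable = f.name

# File exists but cannot be executed -> 126
code_126 = subprocess.run(["sh", "-c", not_executable],
                          capture_output=True).returncode
# No such command anywhere on PATH -> 127
code_127 = subprocess.run(["sh", "-c", "definitely_not_a_command_xyz"],
                          capture_output=True).returncode
os.unlink(not_executable)

print(code_126, code_127)   # -> 126 127
```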

Fish also treats the shell history differently than many of its competitors. First, there is a history variable that is an array of the previous commands; among other possibilities, that makes the history available to scripts. But the built-in history command is different, too. Simply hitting the up arrow will step back through previous commands (as it does in other shells), but typing a few letters and then hitting the up arrow performs a history search for the typed characters.

For example, typing make followed by the up arrow will let the user step back through all the previous matches against make. Holding down the Alt key while using the up and down arrows allows the user to search only for the word beneath the text cursor; that could be useful for narrowing down a search with a lot of matches or particularly long commands. Users can also delete items from the history, and can prevent any command from being recorded in the history (by preceding the command with a space character).

The fish project also prides itself on its extensive built-in help, which certainly could come in handy for those who find all of the differences with Bash and other shells daunting. The documentation is provided as both manpages and HTML, and all of fish's built-in commands support the -h switch.

Looking good

Fish's syntax highlighting is another area where the project attempts to provide more functionality than other shells. Although many standard Unix commands are available with their own color-coding (ls perhaps being the most popular), fish color-codes all commands as the user types. Commands, filenames, autosuggestions, tab-completion matches, quoted strings, parameters, comments, and error messages are all colored differently. Color overload is perhaps an understandable fear, but the color values are user-selectable.

[fish web configuration]

The best way to modify the color values is to use fish's web-based configuration tool. The tool is launched with fish_config, and starts up a web server running on localhost:8000, which it opens using the user's default browser. The configuration page lets users graphically select colors (foreground and background) for all of the syntax options, as well as whether each option is underlined. Most of the syntax options are used in the command line and output, but some (such as the current working directory) are also used in the shell prompt.

The shell prompt itself is also configurable within the web configuration tool; version 2.1 includes 14 presets, some of which are tailored toward specific use cases (there are two Git-specific prompts, for example). A right-side prompt (i.e., a prompt-like informational area that sits in the right-hand side of the command line), which is a feature also found in Zsh, is also supported. The configuration tool also lets the user browse through any functions and aliases defined in the current session, view the history array, and look at the environment variables.

Fish also supports many of the same configuration options expected by shell users, such as setting the terminal window title or setting a greeting message. There are a few differences, of course; for example, the fish equivalent to .bashrc is ~/.config/fish/config.fish, so it may take a few minutes of looking through the documentation to get up to speed. But on the whole, fish does a good job of adding its layer of friendliness onto the standard shell experience without introducing major disruptions to how users work. True, people with a significant standing investment in Bash scripts may not find migrating them to fish (or any other shell) an appealing prospect, but for more casual shell usage, it could be quite a good catch.

Comments (20 posted)

Creative Commons licenses reach 4.0

By Nathan Willis
November 27, 2013

Creative Commons (CC) has released the long-awaited 4.0 version of its suite of licenses for artistic and creative works. The revisions are intended to fix specific issues that users had encountered over time with the current crop of licenses. While, as with the previous efforts, the process for drafting the 4.0 set was conducted in the open, this time there was a more formal structure in place that established a clear set of goals for the new licenses. The result is, hopefully, a set of licenses that better address releasing works internationally, are easier to understand and comply with, and that mesh better with legal requirements outside of copyright itself.

There are currently six CC licenses: Attribution (aka CC BY), Attribution-ShareAlike (CC BY-SA), Attribution-NoDerivs (CC BY-ND), Attribution-NonCommercial (CC BY-NC), Attribution-NonCommercial-ShareAlike (CC BY-NC-SA), and Attribution-NonCommercial-NoDerivs (CC BY-NC-ND). At the moment, the CC web site's licenses page still points to the old versions of each, but its license chooser includes the updated links. The wiki page for the 4.0 licenses outlines the changes and the rationale behind them.

The previous revision of the CC license set was the 3.0 suite released in 2007. Among other changes, the 3.0 set separated "US" and "generic" versions of the licenses, altered some language to more uniformly address the concepts of "moral rights" and "collecting societies," and introduced clarifications specifically to improve compatibility with the Debian Free Software Guidelines (DFSG) and compatibility with MIT's OpenCourseWare project.

According to the CC blog, the process to update the licenses began at the organization's 2011 Global Summit. As with the 3.0 revision, the key factors included addressing concerns over interoperability with other licenses and with legal rights that are distinct from copyright but that CC license users were encountering in the real world. A public process for crafting the revisions was rolled out in December 2011. A requirements-gathering period was open through February 2012, during which stakeholders could voice their concerns on a public discussion list.

Subsequently, the proposed requirements were organized, vetted, and discussed by CC and collected on a wiki page. The first draft of the revisions was posted in April 2012, with subsequent drafts following in August 2012, February 2013, and September 2013. Public comments were taken on each draft, leading to the eventual publication of the final versions on November 25.

Data, morals, and other rights

Of the changes in the 4.0 set, perhaps the biggest is the treatment of non-copyright rights (sometimes referred to as "neighboring rights") that the law in various countries grants to creators and publishers. Chief among these neighboring rights is the notion of sui generis database rights, a legal concept which is important in the EU and several other jurisdictions. Sui generis database rights are intellectual property rights granted to those who compile and edit a database or a data set. The investment made to create the data set is given legal protection, so that others cannot harvest or reuse the data without the compiler's consent. That means that extracting a "substantial" subset of the facts from the data set requires the creator's consent—even if the form or expression (which would be covered by copyright) is changed.

The CC organization itself is not in favor of sui generis database rights, because the concept is at odds with CC's free culture principles, but the earlier CC licenses did not properly address the issue. The EU "ports" of the 3.0 licenses specified that when derivative works were used in such a way that only database rights were triggered, the copyright requirements and prohibitions of the license (e.g., requiring attribution or disallowing derivatives) were waived. The non-EU versions of the licenses did not address database rights at all.

That approach reflected CC's stance against sui generis database rights, but it had the effect of restricting CC license adoption in many big data and scientific projects. Furthermore, it opened the door to alternative licenses like the Open Database License, which CC views as insufficiently free, difficult to comply with, and burdened with extraneous clauses (such as contractual obligations).

The 4.0 CC licenses have been rewritten to specifically cover database rights, performance rights, broadcast rights, and recording rights under the same terms as copyrights. Thus, a creator can place a work (including a data set) under one of the 4.0 licenses and provide a uniform set of conditions about how derivatives must be attributed, shared, or used for commercial purposes.

On the other hand, the 4.0 licenses specifically limit the licensor's "moral rights." Moral rights are another non-copyright right that some jurisdictions grant to the creators of works; they allow the creator to restrict a use of the work that the creator considers to have a harmful or objectionable effect on the creator's relationship to the work. The revised licenses do not allow a creator to offer a work under a CC license and simultaneously restrict licensees on moral rights grounds. This is a policy decision on CC's part, to maximize the rights granted to licensees. Similarly, the 4.0 licenses do not permit licensors to impose restrictions based on privacy rights, publicity rights, or personality rights.

In addition, all of the neighboring rights language has been unified across the licenses and edited to be uniform worldwide, so that there are no longer different "ports" for different jurisdictions. CC is working hard on providing translations in many languages, but the separate "EU" versus "non-EU" versions of the 3.0 series are gone.

Compatibility and compliance

The 4.0 licenses also make changes to improve compatibility with outside licenses. The first change specifically addresses the CC licenses that have a ShareAlike clause. It might not otherwise be clear whether a work under a ShareAlike license could be combined with another work under other terms (say, the GNU Free Documentation License, to pick one at random). In the 3.0 set, therefore, the CC BY-SA license had a compatibility clause pointing to an official list of other licenses that could be used to create a combined derivative work (meaning, specifically, that the combined work could be licensed under either license; otherwise, CC BY-SA must be the license for the derivative). The 4.0 set extends this feature to the CC BY-NC-SA license as well. The fact that so far CC has not put any outside licenses on the official compatibility list might make the issue seem moot, but at least the mechanism is in place.

On a more practical front, the 4.0 set also adds "or later" language to the ShareAlike licenses, so it is clear that, for example, a CC BY-SA 4.0 work can be combined with a CC BY-SA 5.0 work without triggering any impossible legal puzzles.

There are several changes that are designed to simplify the job of determining whether a particular usage of a CC-licensed work complies with the license or not. First, if a licensee's rights under a CC license are terminated by not complying with the terms, the 4.0 licenses state that those rights will be automatically reinstated if the violation is corrected within 30 days of the licensee being made aware of the violation. Previous versions of the licenses required violators to expressly seek reinstatement from the licensor.

Second, the new licenses explicitly allow licensees to make "private adaptations" of a work—meaning that users can do whatever they want to the work if they are not republishing it or redistributing it. That is a common-sense interpretation in most cases, but in earlier versions of the licenses it was a big problem for the NoDerivs licenses. Without the clarification, some lawyers might decide that a CC BY-ND data set could not be used internally by a company at all if there was any adaptation of the data involved (as there most certainly would be). Now such uses are expressly permitted, and the permission includes copyrights as well as all of the neighboring rights.

Finally, the 4.0 licenses take an explicit stance on the question of DRM restrictions. Now, if a work is released under a CC license and someone creates a derivative work with it that is wrapped in a DRM scheme, downstream users are expressly given permission to circumvent the DRM. As the CC wiki page puts it:

we see no reason at a policy level to differentiate between prohibiting licensees from imposing legal restrictions that restrict exercise of the licensed rights – an essential and long-standing prohibition and tradition in our licenses – and prohibiting the application of technologies that have the effect of imposing legal restrictions on reuse.

This permission to circumvent only applies to works derived from the CC-licensed original, though. Transmission services are still able to wrap their distribution channels in their own DRM schemes.

License away

To be sure, many of the changes in the CC 4.0 license suite are details that only apply in certain legal jurisdictions or in specific sets of circumstances. But then again, that specificity is what legalese is for. On the whole the changes strengthen the rights and permissions that are granted to users of CC-licensed works, which is the whole point of the Creative Commons organization. No longer can someone release a database under a CC license and still restrict others from using it; no longer can someone take a CC BY movie created elsewhere and wrap it in DRM to prevent users from repurposing it.

Some of the effects of the 4.0 changes may not be felt for quite some time; there is no telling when or if "CC compatible licenses" will actually be approved, and the "or later" clauses in the 4.0 suite do not retroactively affect 3.0-and-earlier licenses. Furthermore, the NoDerivs and NonCommercial licenses remain incompatible with most open source and free software licenses (after all, they do impose restrictions on the licensee); there were proposals put forward to deprecate NonCommercial or significantly revise it for the 4.0 suite, but CC decided none of the proposals justified the significant disruptions to existing CC usage that might result. On the whole, however, this latest revision makes it clearer what is allowed and what is not, which ought to make CC licensing a more appealing choice for a great many content creators.

Comments (9 posted)

2013 Linux and free software timeline - Q1

By Nathan Willis
November 27, 2013

Here is LWN's sixteenth annual timeline of significant events in the Linux and free software world for the year. As per tradition, we will divide the timeline up into quarters; this is our account of January–March 2013. Timelines for the remaining quarters of the year will appear in the coming weeks.

There are almost certainly some errors or omissions; if you find any, please send them to timeline@lwn.net.

LWN subscribers have paid for the development of this timeline, along with previous timelines and the weekly editions. If you like what you see here, or elsewhere on the site, please consider subscribing to LWN.

For those readers in a truly reflective mood, our timeline index page includes links to the previous timelines and other retrospective articles that date all the way back to 1998.


January

Canonical announces its plan to deliver Ubuntu for phones and tablets (announcement; LWN article).

A programmer had a problem. He thought to himself, "I know, I'll solve it with threads!". has Now problems. two he

-- Davidlohr Bueso

FreeBSD 9.1 is released (announcement).

BlueZ 5.0 is released (LWN article). [Debian logo]

Debian's m68k port is revived (LWN blurb).

LLVM 3.2 released (announcement).

DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

-- Ed Felten

Red Hat Enterprise Linux (RHEL) 5.9 is released (announcement).

Firefox 18 is released, introducing WebRTC support (announcement).
[Fedora logo]

Fedora 18 is released (announcement; LWN article). The ARM release follows a few weeks later (announcement).

Kolab 3.0 is released after 7 years of development (announcement).

People really ought to be forced to read their code aloud over the phone - that would rapidly improve the choice of identifiers

-- Al Viro

Well-known free software developer and co-author of the RSS specification Aaron Swartz takes his own life on January 11 while the target of a protracted federal court case over the public release of articles from the JSTOR digital archive. Among many others, Cory Doctorow reflects on the events (LWN blurb); shortly afterward, the government formally drops its case (LWN blurb).

Long Term Support Initiative kernel 3.4 is released (LWN blurb).

LWN turns 15 years old; there is much rejoicing (LWN article).

I use "political" and "ideological" without criticism. Debian's chief goal - freedom - is a matter of ideology. And because freedom always means escaping from someone's control, it's also a matter of politics.

-- Ian Jackson


[FirefoxOS logo]

Mozilla announces that developer phones will be released for its upcoming Firefox OS (LWN blurb).

Version 1.1 of the Trinity fuzz tester is released (announcement; LWN article).

linux.conf.au (LCA) is held in Canberra, Jan 28 to Feb 2 (LWN coverage).

February

FOSDEM 2013 is held in Brussels, February 2 to 3 (LWN coverage).

Want to visit an incomplete version of our website where you can't zoom? Download our app!

-- Randall Munroe

KDE 4.10 is released (announcement). [KDE logo]

digiKam 3.0 is released (announcement; LWN article).

Krita 2.6 is released (announcement).

I'm waking up in the middle of the night and have to try a few more passwords just so I can get back to sleep. For those who don't know, dreaming of password combinations sucks.

-- Jeremiah Grossman

The Open Invention Network (OIN) passes 500 licensees, setting a new milestone (LWN blurb).

Linaro forms the Linaro Networking Group (LWN blurb).

GNOME selects JavaScript as its official development platform language (LWN blurb).

[WebRTC logo]

The WebRTC audio/video standard is demonstrated in a call between Google Chrome and Mozilla Firefox (LWN blurb; LWN article).

LibreOffice 4.0 is released (announcement).

The Python trademark is put at risk in the European Union by a competing application (LWN blurb); the situation is later resolved by the Python Software Foundation (LWN blurb).

The 2013 Android Builders' Summit is held in San Francisco, February 18 to 19 (LWN coverage).

One person's bug is another person's fascinating invertebrate.

-- Neil Brown

Embedded Linux Conference (ELC) is held in San Francisco, February 20 to 22 (LWN coverage).

Linux 3.8 is released (announcement; merge window summaries 1, 2; statistics; KernelNewbies summary).

Tizen 2.0 is released (LWN article).
[Tizen logo]

The Opera browser drops its own web rendering engine to adopt WebKit instead (LWN blurb).

Firefox 19 is released, marking the debut of the browser's built-in PDF renderer (announcement).

Southern California Linux Expo (SCALE) 11x is held in Los Angeles, February 22 to 24 (LWN coverage).

RHEL 6.4 is released (LWN blurb).

Yeah, a plan, I know it goes against normal kernel development procedures, but hey, we're in our early 20's now, it's about time we started getting responsible.

-- Greg Kroah-Hartman

The first preview images of Ubuntu for phones and tablets are released (LWN blurb; LWN article).

[Ruby logo]
Ruby 2.0 is released (LWN blurb).

Subsurface 3.0 is released (announcement), thus hitting the 3.0 mark approximately 14 times faster than the co-authors' previous project to reach the milestone.

BIND 10 is released (announcement).

LG acquires webOS from Hewlett Packard (LWN blurb).

Debian releases its first arm64 image (LWN blurb); openSUSE follows with its own Aarch64 preview a week later (LWN blurb).

March

Linaro Connect Asia is held in Hong Kong, March 4 to 8 (LWN coverage).

More importantly, does a vintage kernel sound better than a more recent one? I've been doing some testing and the results are pretty clear, not that they should surprise anyone who knows anything about recording:

1) Older kernels sound much warmer than newer ones.

2) Kernels compiled by hand on the machine they run on sound less sterile than upstream distro provided ones which also tend to have flabby low end response and bad stereo imaging.

3) As if it needed saying, gcc4 is a disaster for sound quality. I mean, seriously if you want decent audio and you use gcc4 you may as well be recording with a tin can microphone.

-- Ben Bell

Ubuntu announces the Mir display server, a move which is met with controversy over the decision to not adopt Wayland (LWN blurb; LWN article).

Google and the MPEG-LA announce an agreement that prevents the formation of a patent pool around WebM and VP8 (LWN blurb).

Google announces the Zopfli compression algorithm (LWN blurb).

[0install logo] 0install 2.0 is released (announcement).

openSUSE 12.3 is released (announcement; LWN article).

Google announces the shutdown of its Google Reader feed-reading product to considerable weeping and gnashing of teeth (announcement).

Be careful, you've already submitted some kernel patches; keep on this path and you might just wake up one morning and find yourself a kernel developer.

-- Paul Moore

The candidates for the Debian Project Leader (DPL) election are announced (announcement; LWN article).

Following the acquisition of the embedded Linux news site LinuxDevices, founder Rick Lehrbaum launches LinuxGizmos (LWN blurb).

Plasma Media Center 1.0 is released (LWN blurb).

[Ardour logo]

Ardour 3.0 is released (LWN blurb).

Emacs 24.3 is released (LWN blurb).

PyCon 2013 is held in Santa Clara, March 17 to 21 (LWN coverage).

The Django community mourns the passing of longtime contributor Malcolm Tredinnick (LWN blurb).

MongoDB 2.4 is released (announcement).

Personally, I prefer the approach where we figure out what kind of tires we need on the next car and plan for them when we buy the car over an approach where we try to change the tires while the car is in motion.

-- Scott Kitterman

The microblogging service Identi.ca, popular with the free-software community, closes its doors to be replaced by a new, decentralized platform called pump.io (LWN article).

[GCC logo]
GCC 4.8.0 is released (announcement; LWN article). The release is the first to be written in C++ (LWN article).

OpenSSH 6.2 is released (announcement).

GNOME 3.8 is released (announcement).

Comments (1 posted)

Page editor: Jonathan Corbet

Security

Python adopts SipHash

By Jake Edge
November 27, 2013

Hash collisions are a fact of life, but one that can have serious consequences when attackers can control the values being hashed. We looked at this problem, which can allow denial-of-service attacks, back in January 2012 for Python, PHP, Java, Ruby, JavaScript, and other dynamic languages. While fixes were made at that time, the hash function used by Python was still vulnerable to certain kinds of attacks. That situation is now changing, with Python Enhancement Proposal (PEP) 456 having been accepted for inclusion in the upcoming Python 3.4 release. That means Python will be using SipHash for its hash function going forward.

Hash functions are used to reproducibly turn an arbitrary string (or series of bytes) into a single value (of a length determined by the type of hash) that can be used for various purposes. For example, cryptographic hash functions are used to derive digest values for digital files that can then be used with signature algorithms to digitally "sign" documents or other data (e.g. distribution software packages). Hash functions are also used for data structures like dictionaries (aka hashes or associative arrays) where the function maps the key to a value that can be used to find the data associated with the key in the dictionary.
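Both uses of hashing mentioned above can be seen in a few lines of Python; the data and key here are placeholders, and sha256 stands in for whatever hash a real dictionary would use:

```python
import hashlib

# Cryptographic use: a digest that fingerprints a blob of data, suitable
# for later signing or integrity checking.
digest = hashlib.sha256(b"contents of some release tarball").hexdigest()

# Data-structure use: map a dictionary key to one of a fixed number of
# buckets (sha256 is overkill here; it is just a convenient stand-in).
nbuckets = 16
bucket = int.from_bytes(hashlib.sha256(b"some-key").digest()[:8], "big") % nbuckets

print(len(digest), bucket)   # 64 hex characters; a bucket index in 0..15
```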

But, with any hash function, multiple keys will hash to the same value. In a data structure, that is often handled by making each hash "bucket" actually be a linked list of the colliding entries—normally just a few. Operations, such as lookup, insert, or delete, on keys that hash to a particular bucket then have to traverse the list. If the number of collisions is low, the effect of a short list traversal is minimal, but if that number is high, it can significantly impact the performance of those operations.
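The bucket-and-linked-list arrangement can be sketched as a minimal chained hash table. This is an illustration of the general technique, not how CPython's dict is actually laid out (CPython uses open addressing rather than chaining):

```python
class ChainedTable:
    """Minimal hash table where each bucket is a list of colliding entries."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for entry in bucket:            # traverse the collision chain
            if entry[0] == key:
                entry[1] = value        # key already present: update in place
                return
        bucket.append([key, value])

    def lookup(self, key):
        for k, v in self._bucket(key):  # again, a linear walk of the chain
            if k == key:
                return v
        raise KeyError(key)

t = ChainedTable()
t.insert("spam", 1)
t.insert("eggs", 2)
print(t.lookup("spam"), t.lookup("eggs"))   # -> 1 2
```

When the chains stay short, the linear walk costs almost nothing; the trouble described below starts when many keys land in one bucket.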

Normally, languages try to choose hash functions that will fairly evenly spread the expected key values into the hash space. But if the hash function is known to an attacker, and they can arrange to provide the key values to be hashed, denial-of-service attacks are possible. One way attackers can do that is with HTTP POST (form submission, essentially) requests. Many web application frameworks helpfully collect up all of the POSTed variables into a dictionary for delivery to the application. Just supplying a list of variables that all hash to the same bucket (and possibly submitting that POST multiple times) may be enough to bring a web server to its knees.
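The attack is easy to simulate with a deliberately weak, fully predictable hash function; the toy hash below stands in for a real language hash whose collisions an attacker has worked out, and the variable names are invented:

```python
def weak_hash(key: str) -> int:
    return len(key)          # trivially predictable: equal-length keys collide

NBUCKETS = 64
buckets = [[] for _ in range(NBUCKETS)]

# An "attacker" submits many distinct POST variable names, all crafted
# (here, simply all the same length) to hash to the same bucket.
for i in range(1000):
    name = "v%04d" % i       # 1000 distinct five-character keys
    buckets[weak_hash(name) % NBUCKETS].append(name)

chain_lengths = sorted((len(b) for b in buckets), reverse=True)
# Every key collides: one bucket holds all 1000 entries, so each lookup or
# insert degrades to a linear scan of the whole chain.
print(chain_lengths[0], chain_lengths[1])   # -> 1000 0
```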

This was all discovered in Perl in 2003, then rediscovered for other languages in late 2011. A bug was opened for Python, then closed a few months later after the hash function used by the interpreter was randomized based on a flag (-R) given at run time. That didn't fully solve the problem, however, since effectively only 256 separate functions were used—an attacker could use various techniques to determine which hash function was in use, and thus could still cause a denial of service. In fact, Jean-Philippe Aumasson and Daniel J. Bernstein developed a proof-of-concept attack to recover the seed used to randomize the hash function for Python 2.7.3 and 3.2.3.
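The randomization introduced by that fix can still be poked at from the command line; in Python 3, the seed is controlled with the PYTHONHASHSEED environment variable (the counterpart of the -R switch mentioned above). A small sketch that compares string hashes across fresh interpreters:

```python
import os
import subprocess
import sys

def hash_in_fresh_interpreter(s: str, seed: str) -> int:
    """Hash string `s` in a new Python process under a given PYTHONHASHSEED."""
    out = subprocess.run(
        [sys.executable, "-c", f"print(hash({s!r}))"],
        env={**os.environ, "PYTHONHASHSEED": seed},
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# A fixed seed is reproducible across runs; different seeds select different
# hash functions, which is what invalidates a precomputed set of collisions.
same = hash_in_fresh_interpreter("spam", "1") == hash_in_fresh_interpreter("spam", "1")
diff = hash_in_fresh_interpreter("spam", "1") != hash_in_fresh_interpreter("spam", "2")
print(same, diff)   # -> True True
```

The weakness, as the proof-of-concept attack showed, is that randomizing the seed is not enough if the underlying function lets the seed be recovered.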

Shortly after the original bug was closed, a new bug was filed that recognized the inadequacy of the solution, but it took 18 months or so before PEP 456 was formally accepted for inclusion into Python 3.4. PEP author Christian Heimes looked at several different hash functions before settling on the SipHash24 variant. It "provides the best combination of speed and security" and several other high-profile projects (e.g. Ruby, Perl, Rust, FreeBSD, Redis, ...) have chosen SipHash.
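For reference, SipHash-2-4 is compact enough to sketch in pure Python directly from its specification; the snippet below follows the published round structure and is checked against the first test vector from the SipHash paper (a production implementation would of course be in C, as CPython's is):

```python
def siphash24(key: bytes, data: bytes) -> int:
    """SipHash-2-4: a 16-byte key and arbitrary bytes in, a 64-bit hash out."""
    assert len(key) == 16
    MASK = (1 << 64) - 1

    def rotl(x, b):
        return ((x << b) | (x >> (64 - b))) & MASK

    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    # Initialization constants ("somepseudorandomlygeneratedbytes")
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & MASK; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32)
        v2 = (v2 + v3) & MASK; v3 = rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & MASK; v3 = rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & MASK; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32)

    # Pad with zeros to a multiple of eight bytes; final byte = length mod 256
    padded = (data + b"\x00" * ((8 - (len(data) + 1) % 8) % 8)
              + bytes([len(data) & 0xFF]))
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        v3 ^= m
        sipround(); sipround()        # c = 2 compression rounds per word
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):                # d = 4 finalization rounds
        sipround()
    return v0 ^ v1 ^ v2 ^ v3

# First test vector from the SipHash paper: key bytes 00..0f, empty message
print(hex(siphash24(bytes(range(16)), b"")))   # -> 0x726fdb47dd0e0e31
```

The keyed design is the point: without the 16-byte secret, an attacker cannot predict which inputs collide.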

Unlike earlier hash functions used by Python and others, SipHash is a cryptographic hash. Python's current implementation uses a modified Fowler-Noll-Vo (FNV) hash function, which was changed to add a random prefix and suffix to the bytes being hashed. But Heimes is convinced that "the nature of a non-cryptographic hash function makes it impossible to conceal the secrets".
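The pre-SipHash scheme can be sketched as follows. This is a simplified Python rendering of the idea (a random prefix and suffix around an FNV-style multiply-and-XOR loop), not a byte-for-byte copy of CPython's C implementation:

```python
import os

# Per-process random secrets, chosen once at interpreter startup
_PREFIX = int.from_bytes(os.urandom(8), "little")
_SUFFIX = int.from_bytes(os.urandom(8), "little")
_MASK = (1 << 64) - 1

def randomized_fnv(data: bytes) -> int:
    """FNV-style string hash with a random prefix and suffix mixed in.

    The multiply/XOR core processes input bytes one at a time; because that
    structure is not cryptographic, the secrets can leak to an attacker who
    can observe enough hash-dependent behavior -- the weakness described above.
    """
    if not data:
        return 0
    x = (_PREFIX ^ (data[0] << 7)) & _MASK
    for byte in data:
        x = ((1000003 * x) ^ byte) & _MASK
    x ^= len(data)
    x ^= _SUFFIX
    return x

# Stable within one process, but different across interpreter runs
print(randomized_fnv(b"spam") == randomized_fnv(b"spam"))   # -> True
```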

At the time that PEP 456 was being written, there was another discussion of the issue on the python-dev mailing list. Heimes started the conversation by soliciting opinions on whether the hash function should be a build-time option or be switchable at run time. Most felt that choosing the algorithm at compile time was sufficient, but some, including Python benevolent dictator for life (BDFL) Guido van Rossum, were not at all convinced that any change was needed.

Van Rossum was concerned that the problem is largely manufactured by "some security researchers drumming up business"—that it is only of theoretical interest. But Armin Rigo disagreed:

It should be IMHO either ignored (which is fine for a huge fraction of users), or seriously fixed by people with the correctly pessimistic approach. The current hash randomization is simply not preventing anything; someone posted long ago a way to recover bit-by-bit the hash randomized used by a remote web program in Python running on a server. The only benefit of this hash randomization option (-R) was to say to the press that Python fixed very quickly the problem when it was mediatized :-/

In the end, Van Rossum did not put his foot down; he said that he was "fine with a new hash function as long as it's either faster, or safer and not slower". SipHash24 has been within a few percent of the performance of the existing hash function on several different benchmarks. There are concerns that it will impact performance for short keys (less than, say, seven bytes) because it has some setup and teardown costs, so switching to a faster but less secure hash for short keys is being investigated.

According to the PEP, there are multiple places in the CPython code that use their own version of the hash algorithm: "the current hash algorithm is hard-coded and implemented multiple times for bytes and three different Unicode representations". That would make it harder for someone trying to put in their own replacement, so the PEP also proposes reworking the internals of Python such that the hash function can be replaced in a single location. That will appear in Python 3.4 as well.
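Once 3.4 arrives, the selected algorithm is visible at run time through sys.hash_info (shown here assuming a 3.4-or-later interpreter; later releases may report a different SipHash variant than a stock 3.4 build would):

```python
import sys

# PEP 456 puts the string/bytes hash behind a single pluggable
# function; sys.hash_info reports which algorithm was selected.
info = sys.hash_info
print(info.algorithm)                  # "siphash24" on a stock 3.4 build
print(info.hash_bits, info.seed_bits)  # width of the hash and of its seed
```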

It seems a bit strange that it took this long for Python to fix the problem. As Rigo said, ignoring the problem might be a reasonable approach for a substantial fraction of Python users, but that was true before the previous fix was applied as well. Given that Python developers presumably don't just want to apply a cosmetic fix, it is a little surprising that the "proper" solution was so long in coming. But Larry Hastings may be right with his suggestion that "there was enough bike shedding that people ran out of steam" to immediately address the problem.

Given how widespread SipHash is now for dictionary hash functions, we are potentially vulnerable to some kind of breakthrough in finding collisions in that algorithm. But, at least there are a full 64 bits of entropy being used by SipHash (rather than the eight bits for the modified FNV function). That should at least make brute force attacks infeasible—we will just need to keep our eye out for cryptographic breakthroughs down the road.
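A back-of-the-envelope calculation shows the difference: even granting an attacker a billion seed guesses per second, going from eight seed bits to 64 changes brute force from instantaneous to a matter of centuries.

```python
# Brute-force search space for the hash seed, before and after:
old_space = 2 ** 8      # modified FNV: only 256 distinct hash functions
new_space = 2 ** 64     # SipHash seeded with 64 bits of entropy

# At a billion seed guesses per second, exhausting the new space
# takes centuries rather than microseconds:
seconds = new_space / 1e9
years = seconds / (3600 * 24 * 365)
print(old_space, new_space, round(years))   # roughly 585 years
```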

Comments (10 posted)

Brief items

Security quotes of the week

The National Security Agency has been gathering records of online sexual activity and evidence of visits to pornographic websites as part of a proposed plan to harm the reputations of those whom the agency believes are radicalizing others through incendiary speeches, according to a top-secret NSA document. The document, provided by NSA whistleblower Edward Snowden, identifies six targets, all Muslims, as “exemplars” of how “personal vulnerabilities” can be learned through electronic surveillance, and then exploited to undermine a target's credibility, reputation and authority.
The Huffington Post (Thanks to Michael Kerrisk.)

Either way, it says quite a lot (none of it good) about our "intelligence" professionals when they offer up a document with a redacted date (makes no sense in the first place), which is easily revealed by the very URL (wtf?) that the intelligence officials chose, and which is further undermined by the fact that the same document had already been declassified with totally different redactions (and which reveals the date). And we're supposed to believe these folks are smart enough to not screw up with all the data they're collecting on everyone?
Mike Masnick

Comments (2 posted)

New vulnerabilities

389-ds-base: denial of service

Package(s):389-ds-base CVE #(s):CVE-2013-4485
Created:November 21, 2013 Updated:January 14, 2014
Description:

From the Red Hat advisory:

It was discovered that the 389 Directory Server did not properly handle certain Get Effective Rights (GER) search queries when the attribute list, which is a part of the query, included several names using the '@' character. An attacker able to submit search queries to the 389 Directory Server could cause it to crash. (CVE-2013-4485)

Alerts:
Fedora FEDORA-2013-21875 389-ds-base 2014-01-14
Scientific Linux SLSA-2013:1752-1 389-ds-base 2013-12-03
Fedora FEDORA-2013-22012 389-ds-base 2013-12-03
Mageia MGASA-2013-0357 389-ds-base 2013-11-30
Red Hat RHSA-2013:1752-01 389-ds-base 2013-11-21
Oracle ELSA-2013-1752 389-ds-base 2013-11-26

Comments (none posted)

augeas: file overwrite and information leak

Package(s):augeas CVE #(s):CVE-2012-0786 CVE-2012-0787
Created:November 21, 2013 Updated:December 4, 2013
Description:

From the Red Hat advisory:

Multiple flaws were found in the way Augeas handled configuration files when updating them. An application using Augeas to update configuration files in a directory that is writable to by a different user (for example, an application running as root that is updating files in a directory owned by a non-root service user) could have been tricked into overwriting arbitrary files or leaking information via a symbolic link or mount point attack. (CVE-2012-0786, CVE-2012-0787)

Alerts:
Mandriva MDVSA-2014:022 augeas 2014-01-24
Mageia MGASA-2014-0058 augeas 2014-02-12
Scientific Linux SLSA-2013:1537-2 augeas 2013-12-03
Oracle ELSA-2013-1537 augeas 2013-11-26
Red Hat RHSA-2013:1537-02 augeas 2013-11-21

Comments (none posted)

bip: denial of service

Package(s):bip CVE #(s):CVE-2013-4550
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat bugzilla entry:

bip 0.8.8 and earlier contains an issue where failed SSL handshakes result in a resource leak. A remote attacker can use this flaw to cause bip to run out of resources, resulting in a denial of service.

Alerts:
Mageia MGASA-2013-0351 bip 2013-11-22
Fedora FEDORA-2013-21018 bip 2013-11-21
Fedora FEDORA-2013-21060 bip 2013-11-21

Comments (none posted)

bugzilla: cross-site request forgery

Package(s):bugzilla CVE #(s):CVE-2013-1733
Created:November 26, 2013 Updated:November 27, 2013
Description: From the CVE entry:

Cross-site request forgery (CSRF) vulnerability in process_bug.cgi in Bugzilla 4.4.x before 4.4.1 allows remote attackers to hijack the authentication of arbitrary users for requests that modify bugs via vectors involving a midair-collision token.

Alerts:
Mageia MGASA-2014-0199 bugzilla 2014-05-02
Mandriva MDVSA-2013:285 bugzilla 2013-11-26

Comments (none posted)

busybox: privilege escalation

Package(s):busybox CVE #(s):CVE-2013-1813
Created:November 21, 2013 Updated:December 9, 2013
Description:

From the Red Hat advisory:

It was found that the mdev BusyBox utility could create certain directories within /dev with world-writable permissions. A local unprivileged user could use this flaw to manipulate portions of the /dev directory tree. (CVE-2013-1813)

Alerts:
Scientific Linux SLSA-2013:1732-2 busybox 2013-12-09
Gentoo 201312-02 busybox 2013-12-02
Mageia MGASA-2013-0358 busybox 2013-11-30
Red Hat RHSA-2013:1732-02 busybox 2013-11-21
Oracle ELSA-2013-1732 busybox 2013-11-26

Comments (none posted)

drupal7: multiple vulnerabilities

Package(s):drupal7 CVE #(s):CVE-2013-6385 CVE-2013-6386 CVE-2013-6387 CVE-2013-6388 CVE-2013-6389
Created:November 26, 2013 Updated:December 30, 2013
Description: From the Drupal advisory:

Drupal's form API has built-in cross-site request forgery (CSRF) validation, and also allows any module to perform its own validation on the form. In certain common cases, form validation functions may execute unsafe operations. Given that the CSRF protection is an especially important validation, the Drupal core form API has been changed in this release so that it now skips subsequent validation if the CSRF validation fails.

This vulnerability is mitigated by the fact that a form validation callback with potentially unsafe side effects must be active on the site, and none exist in core. However, issues were discovered in several popular contributed modules which allowed remote code execution that made it worthwhile to fix this issue in core. Other similar issues with varying impacts are likely to have existed in other contributed modules and custom modules and therefore will also be fixed by this Drupal core release.
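The change amounts to short-circuiting the validation pipeline. In outline (a Python sketch of the pattern, not Drupal's actual PHP code; the names are hypothetical):

```python
def validate_form(form, validators, csrf_ok):
    """Run form validators, but only after the CSRF token checks out.
    Validators may have side effects, so none of them may run for a
    request that failed CSRF validation."""
    if not csrf_ok(form):
        return ["invalid form token"]   # skip all subsequent validation
    errors = []
    for validator in validators:
        errors.extend(validator(form))
    return errors

# A validator with a side effect that must not fire on a forged request:
log = []
def dangerous_validator(form):
    log.append("ran")                   # stands in for an unsafe operation
    return []

assert validate_form({}, [dangerous_validator], lambda f: False) == ["invalid form token"]
assert log == []                        # the unsafe validator never ran
assert validate_form({}, [dangerous_validator], lambda f: True) == []
assert log == ["ran"]
```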

Alerts:
Debian DSA-2828-1 drupal6 2013-12-28
Mandriva MDVSA-2013:287-1 drupal 2013-12-17
Fedora FEDORA-2013-22507 drupal6 2013-12-12
Fedora FEDORA-2013-21844 drupal7 2013-12-02
Mageia MGASA-2013-0359 drupal 2013-11-30
Mandriva MDVSA-2013:287 drupal 2013-11-26
Debian DSA-2804-1 drupal7 2013-11-26

Comments (none posted)

drupal7-context: multiple vulnerabilities

Package(s):drupal7-context CVE #(s):CVE-2013-4445 CVE-2013-4446
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat bugzilla entry:

First issue is that the module allows execution of PHP code via manipulation of a URL argument in a path used for AJAX operations when running in a configuration without a json_decode function provided by PHP or the PECL JSON library.

This vulnerability is only exploitable on a server running a PHP version prior to 5.2 that does not have the json library installed.

Second issue is that the module uses Drupal's token scheme to restrict access to the json rendering of a block. This control mechanism is insufficient as Drupal's token scheme is designed to provide security between two different sessions (or a session and a non authenticated user) and is not designed to provide security within a session. The vulnerability is mitigated by needing blocks that have sensitive information.

The suggested fix is to update Drupal6-context to 6.x-3.2 and Drupal7-context to 7.x-3.0.

Alerts:
Fedora FEDORA-2013-21231 drupal6-context 2013-11-23
Fedora FEDORA-2013-20965 drupal7-context 2013-11-21
Fedora FEDORA-2013-21298 drupal6-context 2013-11-23
Fedora FEDORA-2013-20976 drupal7-context 2013-11-21

Comments (none posted)

glibc: denial of service

Package(s):glibc CVE #(s):CVE-2013-4458
Created:November 25, 2013 Updated:November 27, 2013
Description: From the Mageia advisory:

A stack (frame) overflow flaw, which led to a denial of service (application crash), was found in the way glibc's getaddrinfo() function processed certain requests when called with AF_INET6. A similar flaw to CVE-2013-1914, this affects AF_INET6 rather than AF_UNSPEC.

Alerts:
Debian-LTS DLA-165-1 eglibc 2015-03-06
Gentoo 201503-04 glibc 2015-03-08
Scientific Linux SLSA-2014:1391-2 glibc 2014-11-03
Red Hat RHSA-2014:1391-02 glibc 2014-10-14
Oracle ELSA-2014-1391 glibc 2014-10-16
Ubuntu USN-2306-3 eglibc 2014-09-08
Ubuntu USN-2306-2 eglibc 2014-08-05
Ubuntu USN-2306-1 eglibc 2014-08-04
Mandriva MDVSA-2013:284 glibc 2013-11-25
Mandriva MDVSA-2013:283 glibc 2013-11-25
Mageia MGASA-2013-0340 glibc 2013-11-22
SUSE SUSE-SU-2016:0470-1 glibc 2016-02-16
Fedora FEDORA-2016-b0e67c88b5 glibc 2016-05-12
Mageia MGASA-2016-0206 glibc 2016-05-24

Comments (none posted)

ibutils: insecure tmp files

Package(s):RDMA stack CVE #(s):CVE-2013-2561
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat advisory:

A flaw was found in the way ibutils handled temporary files. A local attacker could use this flaw to cause arbitrary files to be overwritten as the root user via a symbolic link attack. (CVE-2013-2561)

Alerts:
Scientific Linux SLSA-2013:1661-2 RDMA stack 2013-12-09
Oracle ELSA-2013-1661 RDMA stack 2013-11-26
Red Hat RHSA-2013:1661-02 RDMA stack 2013-11-21

Comments (none posted)

kernel: two vulnerabilities

Package(s):kernel CVE #(s):CVE-2013-4591 CVE-2013-4592
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat advisory:

* It was found that the fix for CVE-2012-2375 released via RHSA-2012:1580 accidentally removed a check for small-sized result buffers. A local, unprivileged user with access to an NFSv4 mount with ACL support could use this flaw to crash the system or, potentially, escalate their privileges on the system. (CVE-2013-4591, Moderate)

* A flaw was found in the way IOMMU memory mappings were handled when moving memory slots. A malicious user on a KVM host who has the ability to assign a device to a guest could use this flaw to crash the host. (CVE-2013-4592, Moderate)

Alerts:
Red Hat RHSA-2014:0284-01 kernel 2014-03-11
Ubuntu USN-2116-1 linux-ti-omap4 2014-02-18
Ubuntu USN-2115-1 linux-ti-omap4 2014-02-18
Ubuntu USN-2112-1 linux-lts-raring 2014-02-18
Ubuntu USN-2111-1 linux-lts-quantal 2014-02-18
Ubuntu USN-2114-1 kernel 2014-02-18
Ubuntu USN-2067-1 linux-ti-omap4 2014-01-03
Ubuntu USN-2066-1 kernel 2014-01-03
openSUSE openSUSE-SU-2014:0247-1 kernel 2014-02-18
Oracle ELSA-2014-3002 kernel 2014-02-12
Mandriva MDVSA-2013:291 kernel 2013-12-18
Scientific Linux SLSA-2013:1645-2 kernel 2013-12-16
Oracle ELSA-2013-2584 kernel 2013-11-28
Oracle ELSA-2013-2585 kernel 2013-11-28
Oracle ELSA-2013-2583 kernel 2013-11-28
Red Hat RHSA-2013:1645-02 kernel 2013-11-21
Oracle ELSA-2013-1645 kernel 2013-11-26

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2013-4563
Created:November 25, 2013 Updated:December 1, 2013
Description: From the CVE entry:

The udp6_ufo_fragment function in net/ipv6/udp_offload.c in the Linux kernel through 3.12, when UDP Fragmentation Offload (UFO) is enabled, does not properly perform a certain size comparison before inserting a fragment header, which allows remote attackers to cause a denial of service (panic) via a large IPv6 UDP packet, as demonstrated by use of the Token Bucket Filter (TBF) queueing discipline.

Alerts:
Ubuntu USN-2113-1 linux-lts-saucy 2014-02-18
Ubuntu USN-2117-1 kernel 2014-02-18
openSUSE openSUSE-SU-2014:0205-1 kernel 2014-02-06
Fedora FEDORA-2013-21822 kernel 2013-11-29
Fedora FEDORA-2013-21807 kernel 2013-11-24

Comments (none posted)

krb5: two denial of service flaws

Package(s):krb5 CVE #(s):CVE-2013-1417 CVE-2013-1418
Created:November 21, 2013 Updated:December 9, 2013
Description:

From the Mageia advisory:

An authenticated remote client can cause a KDC to crash by making a valid TGS-REQ to a KDC serving a realm with a single-component name. The process_tgs_req() function dereferences a null pointer because an unusual failure condition causes a helper function to return success (CVE-2013-1417).

If a KDC serves multiple realms, certain requests can cause setup_server_realm() to dereference a null pointer, crashing the KDC. This can be triggered by an unauthenticated user (CVE-2013-1418).

Alerts:
Oracle ELSA-2015-0439 krb5 2015-03-12
Scientific Linux SLSA-2014:1389-2 krb5 2014-11-03
Scientific Linux SLSA-2014:1245-1 krb5 2014-10-13
CentOS CESA-2014:1245 krb5 2014-09-30
Oracle ELSA-2014-1389 krb5 2014-10-16
Oracle ELSA-2014-1245 krb5 2014-09-17
Red Hat RHSA-2014:1389-02 krb5 2014-10-14
Red Hat RHSA-2014:1245-01 krb5 2014-09-16
Ubuntu USN-2310-1 krb5 2014-08-11
Gentoo 201312-12 mit-krb5 2013-12-16
openSUSE openSUSE-SU-2013:1833-1 krb5 2013-12-07
Fedora FEDORA-2013-21786 krb5 2013-12-03
Mageia MGASA-2013-0335 krb5 2013-11-20
Mageia MGASA-2013-0336 krb5 2013-11-20
openSUSE openSUSE-SU-2013:1738-1 krb5 2013-11-21
Mandriva MDVSA-2013:275 krb5 2013-11-21
openSUSE openSUSE-SU-2013:1751-1 krb5 2013-11-24

Comments (none posted)

libhttp-body-perl: code execution

Package(s):libhttp-body-perl CVE #(s):CVE-2013-4407
Created:November 22, 2013 Updated:March 25, 2014
Description:

From the Debian advisory:

Jonathan Dolle reported a design error in HTTP::Body, a Perl module for processing data from HTTP POST requests. The HTTP body multipart parser creates temporary files which preserve the suffix of the uploaded file. An attacker able to upload files to a service that uses HTTP::Body::Multipart could potentially execute commands on the server if these temporary filenames are used in subsequent commands without further checks.

Alerts:
openSUSE openSUSE-SU-2014:0433-1 perl-HTTP-Body 2014-03-25
Mandriva MDVSA-2013:282 perl-HTTP-Body 2013-11-25
Mageia MGASA-2013-0352 perl-HTTP-Body 2013-11-22
Debian DSA-2801-1 libhttp-body-perl 2013-11-21

Comments (none posted)

luci: two vulnerabilities

Package(s):luci CVE #(s):CVE-2013-4481 CVE-2013-4482
Created:November 21, 2013 Updated:December 4, 2013
Description:

From the Red Hat advisory:

A flaw was found in the way the luci service was initialized. If a system administrator started the luci service from a directory that was writable to by a local user, that user could use this flaw to execute arbitrary code as the root or luci user. (CVE-2013-4482)

A flaw was found in the way luci generated its configuration file. The file was created as world readable for a short period of time, allowing a local user to gain access to the authentication secrets stored in the configuration file. (CVE-2013-4481)

Alerts:
Scientific Linux SLSA-2013:1603-2 luci 2013-12-03
Red Hat RHSA-2013:1603-02 luci 2013-11-21

Comments (none posted)

mantis: cross-site scripting

Package(s):mantis CVE #(s):CVE-2013-4460
Created:November 25, 2013 Updated:November 27, 2013
Description: From the Red Hat bugzilla:

It was reported [1],[2] that an XSS flaw in Mantis' account_sponsor_page.php existed, where project names were not properly sanitized. This could lead to the execution of malicious javascript when visiting that page if a malicious user had project manager permissions.

This exists in all versions of Mantis from 1.0.0 to 1.2.15 and is corrected in 1.2.16.

Alerts:
Fedora FEDORA-2014-15079 mantis 2014-12-12
Fedora FEDORA-2013-20176 mantis 2013-11-24
Fedora FEDORA-2013-20202 mantis 2013-11-24

Comments (none posted)

memcached: denial of service

Package(s):memcached CVE #(s):CVE-2011-4971
Created:November 25, 2013 Updated:February 3, 2014
Description: From the Mageia advisory:

Memcached is vulnerable to a denial of service as it can be made to crash when it receives a specially crafted packet over the network.

Alerts:
openSUSE openSUSE-SU-2014:0951-1 memcached 2014-07-30
openSUSE openSUSE-SU-2014:0867-1 memcached 2014-07-03
Gentoo 201406-13 memcached 2014-06-14
Ubuntu USN-2080-1 memcached 2014-01-13
Debian DSA-2832-1 memcached 2014-01-01
Fedora FEDORA-2014-0934 memcached 2014-02-03
Fedora FEDORA-2014-0926 memcached 2014-02-03
Mandriva MDVSA-2013:280 memcached 2013-11-22
Mageia MGASA-2013-0339 memcached 2013-11-22

Comments (none posted)

monitorix: unspecified vulnerability

Package(s):monitorix CVE #(s):
Created:November 25, 2013 Updated:December 4, 2013
Description: From the Fedora advisory:

Urgent update for security bug fix of BUILTIN HTTP SERVER.

Alerts:
Fedora FEDORA-2013-22011 monitorix 2013-12-04
Fedora FEDORA-2013-21998 monitorix 2013-11-24

Comments (none posted)

moodle: multiple vulnerabilities

Package(s):moodle CVE #(s):CVE-2013-6780 CVE-2013-3630
Created:November 25, 2013 Updated:November 27, 2013
Description: From the CVE entries:

Moodle through 2.5.2 allows remote authenticated administrators to execute arbitrary programs by configuring the aspell pathname and then triggering a spell-check operation within the TinyMCE editor. (CVE-2013-3630)

Cross-site scripting (XSS) vulnerability in uploader.swf in the Uploader component in Yahoo! YUI 2.5.0 through 2.9.0 allows remote attackers to inject arbitrary web script or HTML via the allowedDomain parameter. (CVE-2013-6780)

Alerts:
Fedora FEDORA-2013-21354 moodle 2013-11-23
Fedora FEDORA-2013-21397 moodle 2013-11-23

Comments (none posted)

nginx: security restriction bypass

Package(s):nginx CVE #(s):CVE-2013-4547
Created:November 22, 2013 Updated:December 17, 2013
Description:

From the Debian advisory:

Ivan Fratric of the Google Security Team discovered a bug in nginx, a web server, which might allow an attacker to bypass security restrictions by using a specially crafted request.

Alerts:
SUSE SUSE-SU-2013:1895-1 nginx 2013-12-16
openSUSE openSUSE-SU-2013:1791-1 nginx-1.0 2013-11-30
Fedora FEDORA-2013-21826 nginx 2013-12-02
openSUSE openSUSE-SU-2013:1792-1 nginx 2013-11-30
Mandriva MDVSA-2013:281 nginx 2013-11-24
Debian DSA-2802-1 nginx 2013-11-21
Mageia MGASA-2013-0349 nginx 2013-11-22
openSUSE openSUSE-SU-2013:1745-1 nginx 2013-11-22

Comments (none posted)

openstack-glance: information leak

Package(s):openstack-glance CVE #(s):
Created:November 21, 2013 Updated:December 30, 2013
Description:

From the Red Hat bugzilla entry:

The directory /var/log/glance is world readable and contains log files that are readable which can result in exposure of sensitive information.

Alerts:
Fedora FEDORA-2013-23680 openstack-glance 2013-12-28
Fedora FEDORA-2013-19997 openstack-glance 2013-11-21

Comments (none posted)

pacemaker: denial of service

Package(s):pacemaker CVE #(s):CVE-2013-0281
Created:November 21, 2013 Updated:February 17, 2014
Description:

From the Red Hat advisory:

A denial of service flaw was found in the way Pacemaker performed authentication and processing of remote connections in certain circumstances. When Pacemaker was configured to allow remote Cluster Information Base (CIB) configuration or resource management, a remote attacker could use this flaw to cause Pacemaker to block indefinitely (preventing it from serving other requests). (CVE-2013-0281)

Alerts:
Mageia MGASA-2014-0069 pacemaker 2014-02-14
Scientific Linux SLSA-2013:1635-2 pacemaker 2013-12-03
Red Hat RHSA-2013:1635-02 pacemaker 2013-11-21

Comments (none posted)

python: sub string wildcard matching flaw

Package(s):python3 CVE #(s):CVE-2013-7440
Created:November 26, 2013 Updated:August 18, 2015
Description: From the Python bug report:

Ryan Sleevi of the Google Chrome Security Team has informed us about another issue that is caused by our failure to implement RFC 6125 wildcard matching rules. RFC 6125 allows only one wildcard in the left-most fragment of a hostname. For security reasons matching rules like *.*.com should be not supported.

For wildcards in internationalized domain names I have followed the piece of advice "In the face of ambiguity, refuse the temptation to guess.". A substring wildcard does no longer match an IDN A-label fragment. '*' still matches a full punycode fragment but 'x*' no longer matches 'xn--foo'.
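The rules described in the report can be sketched as a small validity check. This is a hypothetical helper, a simplified sketch of the policy rather than the actual logic in Python's ssl certificate-matching code:

```python
def wildcard_ok(pattern: str) -> bool:
    """Return True if a certificate hostname pattern satisfies the
    stricter rules sketched above: at most one '*', only in the
    left-most label, and a partial wildcard never applies to an
    internationalized (IDN) A-label."""
    labels = pattern.split(".")
    wildcards = pattern.count("*")
    if wildcards == 0:
        return True
    if wildcards > 1 or "*" not in labels[0]:
        return False
    # 'x*' must not match 'xn--foo'; only a bare '*' may stand for
    # a full A-label fragment.
    if labels[0] != "*" and labels[0].startswith("xn--"):
        return False
    return True

assert wildcard_ok("*.example.com")
assert not wildcard_ok("*.*.com")       # two wildcards: refused
assert not wildcard_ok("www.*.com")     # wildcard not left-most: refused
assert not wildcard_ok("xn--f*.org")    # partial wildcard in an A-label: refused
```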

Alerts:
Fedora FEDORA-2015-11995 bzr 2015-08-15
Fedora FEDORA-2015-12001 bzr 2015-08-15
openSUSE openSUSE-SU-2015:1052-1 python-setuptools 2015-06-11
Mageia MGASA-2013-0376 python3 2013-12-18
Fedora FEDORA-2013-21415 python3 2013-11-26
Fedora FEDORA-2013-21418 python3 2013-11-26
Fedora FEDORA-2016-52b294538d python-pymongo 2016-02-12
Fedora FEDORA-2016-50abc3e885 python-pymongo 2016-02-12
Red Hat RHSA-2016:1166-01 python27 2016-05-31

Comments (none posted)

quagga: denial of service

Package(s):quagga CVE #(s):CVE-2013-6051
Created:November 26, 2013 Updated:December 27, 2013
Description: From the Debian advisory:

bgpd could be crashed through BGP updates.

Alerts:
Fedora FEDORA-2013-23504 quagga 2013-12-27
Debian DSA-2803-1 quagga 2013-11-26

Comments (none posted)

ruby: code execution

Package(s):ruby CVE #(s):CVE-2013-4164
Created:November 26, 2013 Updated:January 8, 2014
Description: From the CVE entry:

Heap-based buffer overflow in Ruby 1.8, 1.9 before 1.9.3-p484, 2.0 before 2.0.0-p353, 2.1 before 2.1.0 preview2, and trunk before revision 43780 allows context-dependent attackers to cause a denial of service (segmentation fault) and possibly execute arbitrary code via a string that is converted to a floating point value, as demonstrated using (1) the to_f method or (2) JSON.parse.

Alerts:
Gentoo 201412-27 ruby 2014-12-13
Red Hat RHSA-2014:0011-01 ruby193-ruby 2014-01-07
Mageia MGASA-2014-0003 ruby 2014-01-06
SUSE SUSE-SU-2013:1897-1 ruby19 2013-12-17
Slackware SSA:2013-350-06 ruby 2013-12-16
Fedora FEDORA-2013-22315 ruby 2013-12-11
openSUSE openSUSE-SU-2013:1834-1 ruby20 2013-12-07
openSUSE openSUSE-SU-2013:1835-1 ruby19 2013-12-07
SUSE SUSE-SU-2013:1828-1 ruby 2013-12-05
Debian DSA-2810-1 ruby1.9.1 2013-12-04
Debian DSA-2809-1 ruby1.8 2013-12-04
Scientific Linux SLSA-2013:1764-1 ruby 2013-12-03
Fedora FEDORA-2013-22423 ruby 2013-12-04
Ubuntu USN-2035-1 ruby1.8, ruby1.9.1 2013-11-27
Oracle ELSA-2013-1764 ruby 2013-11-27
Red Hat RHSA-2013:1767-01 ruby 2013-11-26
Mandriva MDVSA-2013:286 ruby 2013-11-26
Red Hat RHSA-2013:1764-01 ruby 2013-11-25

Comments (none posted)

sudo: privilege escalation

Package(s):sudo CVE #(s):CVE-2013-2777
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat advisory:

It was found that sudo did not properly validate the controlling terminal device when the tty_tickets option was enabled in the /etc/sudoers file. An attacker able to run code as a local user could possibly gain additional privileges by running commands that the victim user was allowed to run via sudo, without knowing the victim's password. (CVE-2013-2776, CVE-2013-2777)

Alerts:
Gentoo 201401-23 sudo 2014-01-21
Scientific Linux SLSA-2013:1701-2 sudo 2013-12-09
Oracle ELSA-2013-1701 sudo 2013-11-26
Red Hat RHSA-2013:1701-02 sudo 2013-11-21

Comments (none posted)

xen: denial of service

Package(s):xen CVE #(s):CVE-2013-4551
Created:November 21, 2013 Updated:November 27, 2013
Description:

From the Red Hat bugzilla entry:

Permission checks on the emulation paths (intended for guests using nested virtualization) for VMLAUNCH and VMRESUME were deferred too much. The hypervisor would try to use internal state which is not set up unless nested virtualization is actually enabled for a guest.

A malicious or misbehaved HVM guest, including malicious or misbehaved user mode code run in the guest, might be able to crash the host.

Alerts:
Gentoo 201407-03 xen 2014-07-16
openSUSE openSUSE-SU-2014:0483-1 xen 2014-04-04
openSUSE openSUSE-SU-2013:1876-1 xen 2013-12-16
CentOS CESA-2013:X013 xen 2013-11-25
Fedora FEDORA-2013-21057 xen 2013-11-21
Fedora FEDORA-2013-21041 xen 2013-11-21

Comments (none posted)

xen: denial of service/privilege escalation

Package(s):xen CVE #(s):CVE-2013-6375 CVE-2013-4356
Created:November 25, 2013 Updated:December 23, 2013
Description: From the CVE entries:

Xen 4.2.x and 4.3.x, when using Intel VT-d for PCI passthrough, does not properly flush the TLB after clearing a present translation table entry, which allows local guest administrators to cause a denial of service or gain privileges via unspecified vectors related to an "inverted boolean parameter." (CVE-2013-6375)

Xen 4.3.x writes hypervisor mappings to certain shadow pagetables when live migration is performed on hosts with more than 5TB of RAM, which allows local 64-bit PV guests to read or write to invalid memory and cause a denial of service (crash). (CVE-2013-4356)

Alerts:
Gentoo 201407-03 xen 2014-07-16
Fedora FEDORA-2013-23251 xen 2013-12-21
Fedora FEDORA-2013-22312 xen 2013-12-07
Fedora FEDORA-2013-22325 xen 2013-12-07
CentOS CESA-2013:X013 xen 2013-11-25

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.13-rc1, released on November 22. In the end, 10,518 non-merge changesets were pulled into the mainline during this merge window. Now the stabilization period starts, with the final 3.13 release due around the end of the year.

Stable updates: no stable updates have been released in the last week. 3.12.2, 3.11.10, 3.10.21, and 3.4.71 are in the review process as of this writing; they can be expected sometime on or after November 28. Note that 3.11.10 is expected to be the final update to the 3.11 kernel.

Comments (none posted)

Quotes of the week

futexes are no place for believe. Either you understand them completely or you just leave them alone.
Thomas Gleixner

From your description, it sounds like SPECTRE is actually trying to make the job easier for the operating system to some degree by defining a standard hardware platform. If this actually works out and the hardware people don't screw up too much, supporting that platform should be a no-brainer, and I see no fundamental problem with adding ACPI support for that. [...]

Unfortunately it is impossible to know at this point what work is actually relevant for SPECTRE and what is not, so we can't really merge anything specific to ARM64+ACPI until we have access to an actual spec, or we get a video message by someone with a monocle and a lap cat to shed some more light on the actual requirements.

Arnd Bergmann

Comments (2 posted)

Checkpoint/restore tool v1.0

After years of work, version 1.0 of the checkpoint/restore tool is available. This is a mostly user-space-based tool that is able to capture the state of a set of processes to persistent storage and restore it at some future time, possibly on a different system. See this 2013 Kernel Summit article for details on the current state of this functionality.

Comments (13 posted)

Facebook likes Btrfs

Two Btrfs developers — Chris Mason and Josef Bacik — have simultaneously announced their departure from Fusion IO to work for Facebook instead. Chris says: "From a Btrfs point of view, very little will change. All of my Btrfs contributions will remain open and I'll continue to do all of my development upstream." Josef adds "Facebook is committed to the success of Btrfs so not much will change as far as my involvement with the project, I will still be maintaining btrfs-next and working on stability."

Comments (none posted)

Kernel development news

The conclusion of the 3.13 merge window

By Jonathan Corbet
November 26, 2013
Linus released 3.13-rc1 and closed the 3.13 merge window on November 22, perhaps a couple of days earlier than some developers might have expected. Counting a couple of post-rc1 straggler pulls, some 10,600 non-merge changesets were pulled into the mainline for this development cycle; that is about 700 more than had been pulled at the time of last week's summary.

As might be expected, the list of user-visible features included in that relatively small set of patches is short; it includes:

  • The squashfs filesystem now has multi-threaded decompression; it can also decompress directly into the page cache, eliminating the temporary buffer used previously.

  • There have been several changes to the kernel's key-storage subsystem. The maximum number of keys has increased to an essentially unlimited value, allowing, for example, the NFS code to store vast numbers of user ID mapping values as keys. There is a new concept of a "trusted" key, being one obtained from the hardware or otherwise validated, and keyrings can be marked as allowing only trusted keys. Finally, a mechanism for persistent keys not attached to a given user ID has been added, and key data can be quite large; both of these changes were needed to enable Kerberos to use the key subsystem.

  • New hardware support includes:

    • Input: Samsung SUR40 touchscreens.

    • Security: Nuvoton NPCT501 I2C trusted platform modules, Atmel AT97SC3204T I2C trusted platform modules, OMAP34xx random number generators, Qualcomm MSM random number generators, and Freescale cryptographic accelerators (job ring support).

Changes visible to kernel developers include:

  • There is a new associative array data structure in the kernel. It was added to support the keyring work, but could be applicable in other situations as well. See Documentation/assoc_array.txt for details.

  • The information in struct page is now even more dense with the addition of Joonsoo Kim's patch set to have the slab allocator store more information there. See this article for details.

Now the final stabilization phase for all of this work begins. Your editor predicts that the final 3.13 kernel will be released sometime between the New Year and the beginning of linux.conf.au 2014 on January 6.

Comments (1 posted)

The tick broadcast framework

November 26, 2013

This article was contributed by Preeti U Murthy.

Power management is an increasingly important responsibility of almost every subsystem in the Linux kernel. One of the most established power management mechanisms in the kernel is the cpuidle framework, which puts idle CPUs into sleeping states until they have work to do. These sleeping states are called the "C-states" or CPU operating states. The deeper a C-state, the more power is conserved.

However, an interesting problem surfaces when CPUs enter certain deep C-states. Idle CPUs are typically woken up by their respective local timers when there is work to be done, but what happens if these CPUs enter deep C-states in which these timers stop working? Who will wake up the CPUs in time to handle the work scheduled on them? This is where the "tick broadcast framework" steps in. It assigns a clock device that is not affected by the C-states of the CPUs as the timer responsible for handling the wakeup of all those CPUs that enter deep C-states.

Overview of the tick broadcast framework

In the case of an idle or a semi-idle system, there could be more than one CPU entering a deep idle state where the local timer stops. These CPUs may have different wakeup times. How is it possible to keep track of when to wake up the CPUs, considering a timer is merely a clock device that cannot keep track of more information than the time at which it is supposed to interrupt? The tick broadcast framework in the kernel provides the necessary infrastructure to handle the wakeup of such CPUs at the right time.

Before looking into the tick broadcast framework, it is important to understand how the CPUs themselves keep track locally of when their respective pending events need to be run.

The kernel keeps track of the time at which a deferred task needs to be run based on the concept of timeouts. The timeouts are implemented using clock devices called timers which have the capacity to raise an interrupt at a specified time. In the kernel, such devices are called the "clock event" devices. Each CPU is equipped with a local clock event device that is programmed to interrupt at the time of the next-to-run deferred task on that CPU, so that said task can be scheduled on the CPU. These local clock event devices can also be programmed to fire periodically to do regular housekeeping jobs like updating the jiffies value, checking if a task has to be scheduled out, etc. These timers are therefore called the "tick devices" in the kernel and are represented by struct tick_device.

A per-CPU tick_device representing the local timer is declared using the variable tick_cpu_device. Every CPU keeps track of when its local timer needs to interrupt it next in its copy of tick_cpu_device as next_event and programs the local timer with this value. To be more precise, the value can be found in tick_cpu_device->evtdev->next_event, where evtdev is an instance of the clock event device mentioned above.
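The relationship between pending timeouts, the clock event device, and next_event can be sketched with a toy model. This is plain Python, not kernel code; the names mirror the kernel structures described above, but the logic is deliberately simplified:

```python
# Toy model of a per-CPU tick device: the CPU programs its local clock
# event device to fire at the earliest pending timeout on that CPU,
# mirroring tick_cpu_device->evtdev->next_event in spirit only.

class ClockEventDevice:
    def __init__(self):
        self.next_event = None   # absolute time of the next interrupt

    def program(self, expires):
        self.next_event = expires

class TickDevice:
    def __init__(self):
        self.evtdev = ClockEventDevice()

def program_local_timer(tick_dev, pending_timeouts):
    """Program the local timer for the earliest pending deferred task."""
    tick_dev.evtdev.program(min(pending_timeouts))

cpu0 = TickDevice()
program_local_timer(cpu0, [250, 120, 400])
print(cpu0.evtdev.next_event)   # earliest timeout wins: 120
```

The real kernel path involves hrtimers and clockevent mode handling, but the essential invariant is the same: next_event always holds the earliest time at which this CPU has work pending.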

The external clock device that is required to stand in for the local timers in some deep idle states is just another tick device, but is not normally required to keep track of events for specific CPUs. This device is represented by tick_broadcast_device (defined in kernel/time/tick-broadcast.c), in contrast to tick_cpu_device.

Registering a timer as the tick_broadcast_device

During the initialization of the kernel, every timer in the system registers itself as a tick_device. These timers are associated with flags that describe their properties; the property of special interest here is represented by the flag CLOCK_EVT_FEAT_C3STOP, which indicates that the timer stops in the C3 idle state. Although the C3 idle state is specific to the x86 architecture, this feature flag is generally used to convey that the timer stops in one of the deep idle states.

Any timers which do not have the flag CLOCK_EVT_FEAT_C3STOP set are potential candidates for tick_broadcast_device. Since all local timers have this flag set on architectures where they stop in deep idle states, all of them become ineligible for this role. On architectures like x86, there is an external device called the HPET — High Precision Event Timer — which becomes a suitable candidate. Since the HPET is placed external to the processor, the idle power management for a CPU does not affect it. Naturally it does not have the CLOCK_EVT_FEAT_C3STOP flag set among its properties and becomes the choice for tick_broadcast_device.
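The selection rule can be illustrated with a small sketch: scan the registered clock event devices and pick one that lacks the C3STOP feature. The flag value and device names below are illustrative assumptions, not the kernel's actual registration logic:

```python
# Toy selection of a broadcast tick device: any registered clock event
# device *without* CLOCK_EVT_FEAT_C3STOP keeps ticking in deep idle
# states and is therefore a candidate. Flag value is illustrative.

CLOCK_EVT_FEAT_C3STOP = 0x8

def pick_broadcast_device(devices):
    """Return the first device whose timer survives deep idle states."""
    for name, features in devices:
        if not (features & CLOCK_EVT_FEAT_C3STOP):
            return name
    return None

devices = [
    ("local-timer-cpu0", CLOCK_EVT_FEAT_C3STOP),  # stops in deep idle
    ("local-timer-cpu1", CLOCK_EVT_FEAT_C3STOP),
    ("hpet", 0),                                   # external, keeps running
]
print(pick_broadcast_device(devices))  # -> hpet
```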

Tracking the CPUs in deep idle states

Now we'll return to the way the tick broadcast framework keeps track of when to wake up the CPUs that enter idle states when their local timers stop. Just before a CPU enters such an idle state, it calls into the tick broadcast framework. This CPU is then added to a list of CPUs to be woken up; specifically, a bit is set for this CPU in a "broadcast mask".

Then a check is made to see if the time at which this CPU has to be woken up is prior to the time at which the tick_broadcast_device has been currently programmed. If so, the time at which the tick_broadcast_device should interrupt is updated to reflect the new value and this value is programmed into the external timer. The tick_cpu_device of the CPU that is going to deep idle state is now put in CLOCK_EVT_MODE_SHUTDOWN mode, meaning that it is no longer functional.

Each time a CPU goes to deep idle state, the above steps are repeated and the tick_broadcast_device is programmed to fire at the earliest of the wakeup times of the CPUs in deep idle states.
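The idle-entry steps above can be sketched as follows. Again this is a toy model in Python, not the kernel's implementation; the broadcast mask is a plain bitmask and "reprogramming" is just updating a variable:

```python
# Toy model of a CPU entering a deep idle state: set the CPU's bit in
# the broadcast mask, record its wakeup time, and reprogram the
# broadcast device if this CPU must wake earlier than the currently
# programmed expiry.

broadcast_mask = 0
broadcast_next_event = None
cpu_wakeup = {}          # cpu -> absolute wakeup time

def enter_deep_idle(cpu, wakeup_time):
    global broadcast_mask, broadcast_next_event
    broadcast_mask |= (1 << cpu)
    cpu_wakeup[cpu] = wakeup_time
    if broadcast_next_event is None or wakeup_time < broadcast_next_event:
        broadcast_next_event = wakeup_time   # "reprogram" external timer

enter_deep_idle(0, 300)
enter_deep_idle(1, 150)   # earlier wakeup: broadcast device reprogrammed
enter_deep_idle(2, 500)
print(bin(broadcast_mask), broadcast_next_event)  # 0b111 150
```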

Waking up the CPUs in deep idle states

When the external timer expires, it interrupts one of the online CPUs, which scans the list of CPUs that have asked to be woken up to check if any of their wakeup times have been reached. That means the current time is compared to the tick_cpu_device->evtdev->next_event of each CPU. All those CPUs for which this is true are added to a temporary mask (different from the broadcast mask) and the appropriate next expiry time of the tick_broadcast_device is set to the earliest wakeup time of those CPUs. What remains to be seen is how the CPUs in the temporary mask are woken up.

Every tick device has a "broadcast method" associated with it. This method is an architecture-specific function encapsulating the way inter-processor interrupts (IPIs) are sent to a group of CPUs. Similarly, each local timer is also associated with this method. The broadcast method of the local timer of one of the CPUs in the temporary mask is invoked by passing it the same mask. IPIs are then sent to all the CPUs that are present in this mask. Since wakeup interrupts are sent to a group of CPUs, this framework is called the "broadcast" framework. The broadcast is done in tick_do_broadcast() in kernel/time/tick-broadcast.c.
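Continuing the toy model, the expiry path scans the broadcast mask, collects the CPUs whose wakeup time has arrived into a temporary mask, and computes the next expiry from the CPUs that remain asleep. Sending the IPI is reduced here to returning the temporary mask:

```python
# Toy version of the broadcast-expiry path: find CPUs whose wakeup time
# has been reached, "IPI" them (here: collect them into a temporary
# mask), and reprogram the broadcast device for the earliest remaining
# wakeup among the CPUs still in deep idle.

def handle_broadcast_expiry(now, broadcast_mask, cpu_wakeup):
    temp_mask = 0
    for cpu, wake in cpu_wakeup.items():
        if (broadcast_mask >> cpu) & 1 and wake <= now:
            temp_mask |= 1 << cpu            # this CPU gets the wakeup IPI
    remaining = [w for c, w in cpu_wakeup.items()
                 if (broadcast_mask >> c) & 1 and w > now]
    next_event = min(remaining) if remaining else None
    broadcast_mask &= ~temp_mask             # IPI handlers clear these bits
    return temp_mask, broadcast_mask, next_event

mask = 0b111
wakeups = {0: 300, 1: 150, 2: 500}
ipi_mask, mask, nxt = handle_broadcast_expiry(150, mask, wakeups)
print(bin(ipi_mask), bin(mask), nxt)   # 0b10 0b101 300
```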

The IPI handler for this specific interrupt needs to be that of the local timer interrupt itself, so that the CPUs in deep idle states wake up as if they were interrupted by the local timers themselves. The effects of their local timers stopping on entering an idle state are hidden from them; they should see the same state before and after wakeup and continue running as if nothing had happened.

While handling the IPI, the CPUs call into the tick broadcast framework so that they can be removed from the broadcast mask, since it is known that they have received the IPI and have woken up. Their respective tick devices are brought out of the CLOCK_EVT_MODE_SHUTDOWN mode, indicating that they are back to being functional.

Conclusion

As can be seen from the above discussion, enabling deep idle states causes the kernel to do additional work. One might therefore wonder whether it is worth going through this trouble, since it could hamper performance in the process of saving power.

Idle CPUs enter deep C-states only if they are predicted to remain idle for a long time, on the order of milliseconds. Broadcast IPIs should therefore be well spaced in time and not frequent enough to affect the performance of the system. The tick broadcast framework could be further optimized by aligning the wakeup times of idle CPUs to a periodic tick boundary with an interval of a few milliseconds, so that CPUs going idle at almost the same time choose the same wakeup time. By looking at more ways to minimize the number of broadcast IPIs sent, we could ensure that the overhead involved is insignificant compared to the large power savings that deep idle states yield. If this can be achieved, there is good reason to enable and optimize this infrastructure for the use of deep idle states.

Acknowledgments

I would like to thank my technical mentor Vaidyanathan Srinivasan for having patiently reviewed the initial drafts, my manager Tarundeep Singh, and my teammates Srivatsa S. Bhat and Deepthi Dharwar for their guidance and encouragement during the drafting of this article.

Many thanks to the IBM Linux Technology Center and LWN for providing this opportunity.

Comments (4 posted)

ACPI for ARM?

By Jonathan Corbet
November 22, 2013
The "Advanced Configuration and Power Interface" (ACPI) was not an obvious win when support for it was first merged into the mainline kernel. The standard was new, actual implementations were unreliable, and supporting it involved bringing a large virtual machine into the kernel. For years, booting with ACPI disabled was the first response to a wide range of problems; one can still find web sites advising readers to do that. But, for the most part, ACPI has settled in as a mandatory part of the PC platform standard. Now, however, it appears that a similar story may be about to play out in the ARM world.

Arguments for and against ACPI

There have been rumblings for a few years that ACPI would start to appear in ARM-based systems, and in server systems in particular. Recently, some code to support such systems has started to make the rounds; Olof Johansson, a co-maintainer of the arm-soc tree, looked at this code and didn't like what he saw:

The more I start to see early UEFI/ACPI code, the more I am certain that we want none of that crap in the kernel. It's making things considerably messier, while we're already very busy trying to convert everything over and enable DT -- we'll be preempting that effort just to add even more boilerplate everywhere and total progress will be hurt.

In this message and several followups Olof clarified what he was trying to get across. The ARM world already has a mechanism to describe the hardware — device trees — that is only now coming into focus. Adding device tree support has required making changes to a large amount of platform and driver code; supporting ACPI threatens to bring just as much work and add a second code path for system configuration that will need to be maintained forever. Even worse is the fact that there are no established standards for ACPI in the ARM setting; nobody really knows how things are supposed to work, and what is coming out in the early stages is not encouraging. Bringing in ARM ACPI support now would be committing the kernel community to supporting a moving target indefinitely.

Olof went on to suggest that it might be best to wait for others to figure out how ACPI on ARM is supposed to work:

Oh wait, there's people who have been doing this for years. Microsoft. They should be the ones driving this and taking the pain for it. Once the platform is enabled for their needs, we'll sort it out at our end. After all, that has worked reasonably well for x86 platforms.

He added that, until there are ACPI systems shipping with Windows and working well, the Linux community should stay far away from ACPI on ARM. If ACPI-based systems actually hit the market, he said, they can be supported with a pre-boot layer that translates the system's ACPI tables into the device tree format.

Disagreement with this position came in a couple of forms. Several people pointed out that standards developed by Microsoft may not suit the Linux community as well as we might like. As Mark Rutland (a device tree bindings maintainer) put it:

I'm not sure it's entirely reasonable to assume that Microsoft will swoop in and develop standards that are useful to us or even applicable to the vast majority of the systems that are likely to exist. If they do, then we will (by the expectation that Linux should be able to run wherever another OS can) have to support whatever standards they may create.

Russell King added another point echoed by many: refusing to support ACPI could cost the community its chance to influence (or even control) how the standard evolves. In his words:

We have a possibility here to define how we'd like ACPI to look: we have the chance to have ACPI properties using the same naming that we already have for DT.

Shutting the door on ACPI, Russell asserted, would be a move that the community would regret in the long term.

Jon Masters joined the conversation to make the claim that ARM-based servers were committed to the ACPI path, saying "all of the big boys are going to be using ACPI whether it's liked much or not". He said that the server space requires a mechanism that has been standardized and set in stone, and that, in his opinion, the device tree abstraction is far too unstable to be usable (a claim that Grant Likely strongly disagreed with). Red Hat, Jon said, is fully behind ACPI on ARM servers for all of the products that it has absolutely not said it will ever offer. Jon's wording, along with his suggestion that everything has already been decided in NDA-protected conference rooms, won him few friends in this discussion, but his point remains: there will be systems using ACPI on the market, and Linux has to deal with them somehow.

What to do

But that still doesn't answer the question of how to deal with them. Arnd Bergmann suggested that ACPI might not be a long-term issue for the ARM community:

I think we can still treat ACPI on ARM64 as a beginner's mistake and provide hand-written DT blobs for the few systems that start shipping with that. The main reason for doing it in the first place was the expected number of Windows RT servers, but WinRT isn't doing well at the moment, so it's not unreasonable to assume it's going the same way as WinRT tablets.

Most people, though, seemed to think that ACPI could be here to stay, so the community will have to figure out some way of dealing with it.

One possibility might be Olof's idea of translating the ACPI tables into a device tree, but that approach was somewhat unpopular. It looks to many like a partial answer to the problem that would run into no end of problems; there is also the matter of running the ACPI Machine Language (AML) code found in the ACPI firmware. AML can be necessary for hardware initialization and power management tasks, but it has no analog in the device tree world. Generally, there was a sentiment that, if ACPI is to be supported on ARM systems, it should be supported properly and not behind some sort of translation layer.

In the short term, some sort of translation to device trees — either at boot-time or done by hand — seems likely to be the outcome, though. Putting code into the kernel to support any ACPI-based systems that might appear in the near future just seems to many like a way to take on a long-term support burden for short-lived systems. What might start to tip the balance could be systems which, as Arnd described them, are "PCs with their x86 CPU removed and an ARM chip put in there"; adding ACPI support for those would be "harmless enough," he said. But Arnd seems to be strongly against adding ACPI support for complicated ARM-style systems.

Longer-term, the community is likely to watch and wait. Efforts will be made to direct the evolution of ACPI for ARM systems; Linaro, in particular, has developers engaged with that process now. And even Olof is open to bringing in ACPI support at some point in the future, once its supporters "seem to have their act together, have worked out their kinks and reached a usable stable platform". But that, he says, could be a couple of years from now.

Microsoft, through its dominance of the market for software on PC-class systems, was able to push hardware standards in directions it liked. In the ARM world, Linux dominates just as strongly, so it seems a bit surprising to be playing catch-up with shifts in the ARM platform in this way. Part of the problem, of course, is that there is no single Linux voice at the standards table; companies like Linaro and Red Hat are working on the problem, but they do not represent, or seemingly even talk to, the rest of the community on this topic. The fact that much of this work is done under non-disclosure agreements does not help; NDAs do not fit well with how community development is done.

In the end, it will certainly work out; it is hard to imagine any significant class of ARM-based hardware being successful without solid Linux support. It's mostly a matter of how much short- and long-term pain will have to be endured to make that support happen. For all the early complaining, ACPI has mostly worked out in the x86 world; it may well find a useful role in the ARM market as well.

Comments (32 posted)

Patches and updates

Kernel trees

  • Sebastian Andrzej Siewior: 3.12.1-rt4. (November 22, 2013)

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Security-related

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Acrobat Reader for openSUSE 13.1

By Jake Edge
November 27, 2013

One might think that removing an unmaintained package with known security bugs from a distribution's repository would be largely uncontroversial. If the package in question is proprietary and closed-source, one would expect the outcry to be smaller still. But, for some openSUSE users, removing Adobe's Acrobat Reader, even though it suffers from all of those ills, is unreasonable. Because the free software PDF readers do not have certain capabilities needed by those users, relying on insecure, unmaintained code is still better than the alternatives.

The issue came to a head with release candidates for the recently released openSUSE 13.1. Viljo Mustonen tried out RC2, but found that Acrobat Reader (acroread) was not available. That short note set off a sizable thread, which eventually moved into a second, even longer, thread. It turns out that some users have a real need for a PDF reader that has features that acroread has and the free alternatives lack. But Adobe dropped support for Linux for its Reader product after the 9.x series.

In a series of posts, openSUSE user Carlos E. R. complained that his government, bank, and utility companies all provide various forms and receipts in PDF format that use digital signatures and buttons, neither of which is supported by Evince or Okular—both of which are based on the Poppler PDF rendering library. Others also noted that they sometimes had to resort to acroread, though plenty of participants in the thread were satisfied with the open source PDF viewers.

Multiple suggestions were offered to help deal with the difficult PDF files. Guido Berhoerster mentioned another PDF reader, MuPDF, which is not based on Poppler as an alternative, but Carlos E. R. found no joy there. Wine was mentioned multiple times, but recent versions of Acrobat Reader for Windows do not run under Wine. Installing Windows in a virtual machine evidently works but suffers from a number of problems: it's a heavy-handed approach and requires a Windows license. One of the more workable solutions would be to keep the existing acroread as distributed by Adobe for Linux, but to wrap it in an AppArmor profile to try to prevent any PDF exploits from wreaking havoc, as suggested by Christian Boltz (and others).
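The confinement approach Boltz and others suggested can be illustrated with a minimal AppArmor profile sketch. The install path, abstraction names, and rules below are assumptions for illustration, not an actual shipped profile; a real one would be tuned to the package's layout:

```
# Hypothetical AppArmor profile sketch for Adobe's acroread binary.
# Paths and abstractions are assumptions, not a tested profile.
#include <tunables/global>

/opt/Adobe/Reader9/bin/acroread {
  #include <abstractions/base>
  #include <abstractions/X>
  #include <abstractions/fonts>

  # Read-only access to the application's own files
  /opt/Adobe/Reader9/** r,

  # Let the user open documents and keep per-user Reader state
  owner @{HOME}/** r,
  owner @{HOME}/.adobe/** rw,

  # A compromised Reader should not be able to read SSH keys
  deny @{HOME}/.ssh/** rw,
}
```

The idea is that a PDF exploit running inside the confined process would be limited to the files the profile allows, rather than everything the user can touch.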

But there are advocates for just adding the package back into the distribution, though with some scary warnings so that users are aware of the dangers. For example, Ruediger Meier suggested that it simply return:

Hm, personally I don't use acroread. Nevertheless I think it shouldn't have been removed because we know it's still needed for many cases.

Just add it again, don't install per default, warn the user that it's broken and unmaintained. Maybe remove it from DVD but keep it in non-oss repo. For me it makes no sense to remove it completely just to tell everybody that they should download the same manually...

But shipping a package is something of a commitment by a distribution. Since Reader is unmaintained (and has had multiple security updates on other platforms—something that might well point the finger at holes in the existing version), it is a bit dangerous for openSUSE to commit to the package. In fact, as Stephan Kulow noted, there are rules about unmaintained software in openSUSE; dropping acroread was simply following them.

While some bemoaned Adobe's dropping of support, Ludwig Nussel had a different view:

They are doing us a favor. In the short term there might be a period where there is no good solution for some PDFs indeed. In the long term however I am sure the lack of the proprietary fallback will help the free PDF viewers and other PDF tools to catch up faster. More users of the free viewers will draw more attention to them which also means more developers.

Furthering that idea, John Layt noted the problem was more widespread than just openSUSE and suggested that "all the distros get together and fund a project to improve poppler to the point where it meets the required needs". In addition, there is another, similar, looming problem that Wolfgang Rosenauer pointed out: Adobe has dropped support for Flash for Linux too. That particular problem will be even more painful than the loss of acroread, he said.

Both acroread and Flash have long been available for Linux, but they were never free (as in freedom). PDF and Flash on the web were adopted widely, however, and free alternatives never quite caught up to the gratis versions that Adobe supplied. Now that Adobe has moved on, the free software world has been a bit caught out. It is, once again, an object lesson in the perils of proprietary software, though unfortunately it is a lesson that probably won't be learned outside of free software circles.

With any luck, the alternatives to acroread and the Adobe Flash plugin will accelerate their development in the near future. In the meantime, the existing versions of the Adobe code can still be run—though some kind of confinement to mitigate exploits seems warranted.

Comments (13 posted)

Brief items

Distribution quote of the week

It must be lots of fun to live in a world where there are Apps and there is an OS and a nice simple line we can draw and ne'er the twain shall meet, but I don't think Fedora is ever going to be such a world ...
-- Adam Williamson

Comments (none posted)

Black Lab Linux 4.1.8 released

The Black Lab Linux Project has announced the release of Black Lab Linux 4.1.8. "This is the inaugural release of Black Lab Linux since we had to rename our distribution from OS/4 OpenLinux to Black Lab Linux." This release features a complete rebranding and artwork change, an updated kernel, access to the ElementaryOS PPA, and lots of updated packages and bug fixes. (Thanks to Roberto J Dohnert)

Comments (3 posted)

DragonFly Release 3.6

DragonFly BSD 3.6.0 has been released. "Dports, which uses the FreeBSD ports system as a base, and the 'pkg' tools for installation, is now default on DragonFly. Over 20,000 packages are available in binary or source form." There are also SMP scaling improvements and experimental support for newer Intel and ATI chipsets.

Comments (none posted)

OpenMandriva Lx final release

The OpenMandriva Community has announced the release of OpenMandriva Lx 2013.0, the first stable release since the OpenMandriva Association was formed. This release features new artwork from the community, KDE 4.11.2, and other updated applications. "This release, OpenMandriva Lx 2013.0 is dedicated to the memory of Ron (Ronald van Pomeren), aka Arvi Pingus. Ron was one of the first contributors to OpenMandriva, dedicated, professional – and just a very good person and friend to many of us. We miss him." The release notes are here.

Comments (none posted)

Red Hat Enterprise Linux 6.5 released

Red Hat has announced the release of Enterprise Linux 6.5 (RHEL 6.5). The release has new features in multiple areas, including security, networking, virtualization, and more. "As application deployment options grow, portability becomes increasingly important. Red Hat Enterprise Linux 6.5 enables customers to deploy application images in containers created using Docker in their environment of choice: physical, virtual, or cloud. Docker is an open source project to package and run lightweight, self-sufficient containers; containers save developers time by eliminating integration and infrastructure design tasks."

Comments (6 posted)

Distribution News

Debian GNU/Linux

Call for DebConf volunteers and DebConf15/16 bids

The Debian DebConf team is looking for volunteers and bids for DebConf 15 in 2015. They are also open to suggestions about venues for DebConf 16. "Please consider joining the DebConf team, whether or not you are associated with a future bid! There are many possible ways to help in the months before DebConf, even if you live far away from where the event will happen."

Full Story (comments: none)

Mageia Linux

Mageia 2 End-of-Life (EOL)

Mageia 2 has now reached End-of-Life (EOL). There will be no further updates. "Just make sure you have a fully updated Mageia 2, including mageia-prepare-upgrade-2-3.mga2 released as MGAA-2013-0124 (http://advisories.mageia.org/MGAA-2013-0124.html) to ensure better stability and support for online upgrades."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Page editor: Rebecca Sobol

Development

OpenMusic for Linux

November 26, 2013

This article was contributed by Dave Phillips

On October 28 of this year composer/developer Anders Vinjar posted an interesting message to a mailing list for users of Lisp-based music and sound software, saying: "I've been working on a Linux port of IRCAM's OpenMusic lately and think it's approaching a useful state now." It was a quiet announcement for exciting news. I've installed and tested two previous Linux incarnations of OpenMusic. With all due respect to the previous attempts, this port is the first truly usable version for Linux. To be clear, the current release is a beta version with some stability and performance issues, but in my tests so far it appears that most of OpenMusic's major features are in working order.

Why is this news exciting? To answer that question I have to clarify a few points regarding software for music and sound production. When we consider the variety of available music software we find a cornucopia of programs for recording, editing, and processing your audio/MIDI productions. In contrast, we find fewer applications devoted to the art and craft of music composition. Yes, the modern DAW (digital audio workstation) blurs the distinction, but it's fair to note that a DAW is a generalized application. Notation software can be considered composition software, but its utility is limited to only those with the relevant knowledge of standard notation practice. OpenMusic includes features from DAWs, MIDI sequencers, and music notation software, binding them together into something unique.

What is OpenMusic?

OpenMusic is a Lisp-based visual programming environment designed originally for music composition. Its evolution has added features with significance for graphic arts, musicology, and even mathematics. OpenMusic was created at IRCAM, the famous Parisian institute founded by Pierre Boulez for advanced research into music and acoustics. IRCAM brings together composers and programmers to produce software designed specifically for the techniques of contemporary composition, e.g. algorithmic methods, composition by sonification and mapping, 12-tone and serial methods, spectral analysis and application, etc. This collaboration has produced a rich environment of software for audio synthesis, music composition, and digital signal processing. Composers have produced notable works with this software, and in particular with OpenMusic.

OpenMusic is a descendant of PatchWork, also created at IRCAM. PatchWork presented the model of the patching canvas, a workspace upon which the user places and connects graphic units representing functions and other operations to create data sets in forms useful for music composition. The user connects the units with virtual cables, a process called "patching." By patching together various generative and processing units the user creates a data-flow graph that typically ends by producing one of the program's output target formats such as a WAV or MIDI file. The completed graph is called a patch. OpenMusic has expanded PatchWork's capabilities and is considered its replacement, but the process of patching remains a basic activity of an OpenMusic session.

OpenMusic is open-source software available under the terms of the GPLv3.

Working with OpenMusic

OpenMusic 6.x is available from this page. At the time this article was written, its version stood at 6.7, but the Linux release is in a beta state. The software is currently available only as an RPM package (source for the port has not yet been released), but you can use the alien conversion utility to change it to your preferred package format. The default installation locates the openmusic binary in /usr/bin and places its supporting files in the /usr/share directory.

[OpenMusic workspace] After installing the package you can start OpenMusic by clicking on its desktop icon or by entering "openmusic" in an xterm window. An opening dialog will appear, asking whether you want to invoke a new session workspace or return to an old one. For your first experience, ask for a new workspace. When the program starts you'll see the workspace shown to the right, at which time you can right-click in that space to start a new patch or load a previously created item.

Before going further you should set your preferences in the workspace window. Open the "OpenMusic 6.7" menu and click on the Preferences item. The Preferences dialog customizes various aspects of OpenMusic to your liking, including important path settings, colors, fonts, audio/MIDI features, notation settings, and libraries to load on start-up. I'll have more to say about the Preferences dialog, but at this point you can open a new patch for editing and start rolling with OpenMusic.

I've already described OpenMusic's basic working design and the central importance of the patch. An OpenMusic patch can also include subpatches called "abstractions" that conceal functional units within a patch. Of course these functions could be exposed in the patch, but abstractions reduce screen clutter and help clarify the organization and purpose of the host patch. OpenMusic further extends the original PatchWork model with the maquette (a container for multiple patches) and the sheet, containers for higher-level organization of material.

The image below displays a simple OpenMusic patch; I've added descriptive comments, but to be clear the patch merely adds two numbers then multiplies them by ten.

[OpenMusic patch]

The patch demonstrates the deployment of the generative functions, their connection with virtual cables, and the production of an output value. The comments are presented in different fonts and colors, a nice amenity when clarifying complex patches. Note that the Listener (the application's Lisp console) prints the results first for the addition and then for the multiplication.
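For comparison, the two-step computation the patch performs (add two numbers, then multiply the sum by ten) can be sketched in one line of shell arithmetic; the inputs 2 and 3 here are arbitrary:

```shell
# (2 + 3) * 10, evaluated in the same order as the patch:
# the addition first, then the multiplication by ten
echo $(( (2 + 3) * 10 ))   # prints 50
```
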

So much for using OpenMusic as a desktop calculator. By far the more common employment of OpenMusic is in the domain of music composition; the figure below shows OpenMusic in a more musically productive mode with output in multiple formats:

[OpenMusic at work]
A single core process generates data destined to become a soundfile, a MIDI file, and score files in standard notation, LilyPond code, and Csound's sco format. Though it appears to be rather complicated, this patch is actually straightforward. I borrowed and combined parts from various tutorials to create a sequence of actions from the generation of random number series to the realization of those numbers in the various output formats. In OpenMusic cut-and-paste is your friend, and you should feel free to reuse existing code to practice building patches.

I made liberal use of the "d" key to summon OpenMusic's online documentation for its classes and functions. The information there is sparse, but I was able to learn which data types were generated and accepted by an object's outlets and inlets, necessary knowledge when making and troubleshooting connections between objects in your processing network.

By the way, despite OpenMusic's reliance on Lisp, no knowledge of the language is required beyond the proper formation of data sets, i.e. the correct placement of parentheses. Even that amount of Lisp can be minimized by using objects intended to generate data sets in the correct format. Of course, like any good Lisp-based application, OpenMusic includes a Listener, a direct interface to the Lisp language. From the top down, OpenMusic's Listener window contains the Lisp prompt, a running status area, and an error reporter. If you know Lisp you can add the power of the Listener to your workflow. For example, the OM-SoX library package includes some useful extra code that can be loaded in the Listener — an easy procedure for a Lisp novice — but the process may not be transparent to the Lisp-less. Nevertheless, prior knowledge of the language is not an absolute requirement for productive use of OpenMusic.

Borrowing From The Library

The OpenMusic libraries provide considerable extra power to the basic installation, expanding the environment's capabilities with more exotic processing functions, interfaces to external software synthesis environments, routines for signal analysis and resynthesis, image to sound conversion, and so forth. An OpenMusic external library is similar to a SuperCollider Quark, a loadable extension in the form of a bundle of classes, functions, tutorial examples, source code, documentation, and other resources. Many libraries include tutorial patches listed under the Help menu.

To load an OpenMusic library, open the Library item from the Windows menu, then open the File/Load Library dialog and make your selection. Alternatively, you can double-click on the library's icon in the dialog GUI. When you first run OpenMusic, it will list only the available internal libraries — a useful set — which are unloaded by default. After you've practiced loading a few libraries, you should visit the IRCAM site to see what other libraries are currently available. After installing a new library it will be listed, ready for use. If a library won't load, the system will send a (hopefully helpful) error message to the Listener. If no errors are reported, you're free to check out the newly added features.

Some libraries are oriented to a particular approach to using OpenMusic (the OMTristan library, for example, is based on the practices of composer Tristan Murail), while others are more general in scope and application. OpenMusic's packaged libraries are IRCAM-approved and should work with the latest release. Third-party libraries may require changes that may or may not be trivial to make. If you encounter such a library, be sure to contact the author about updating his or her software.

From the many excellent libraries available for OpenMusic, here are some that I've been using frequently:

  • OM2Csound is a freely available library from IRCAM that provides an interface between OpenMusic and Csound, sweet news for Csound users such as myself. The connection is basic, but it works, and if you want a richer set of Csound-related functions, you can purchase the OMChroma library, also from IRCAM. Thanks to developer Jean Bresson, I was able to test an early release of OMChroma 4.1 while writing this article. The library uses a few dozen Csound .orc files — Csound's instrument definitions — that require some simple fixes to accommodate Csound6, after which the OMChroma tutorial patches will run without complaint. As of November 26, OMChroma 4.2 has been released, with support for Csound6.

  • The OMAlea and OMChaos libraries (distributed with OpenMusic) are collections of functions based on chance (aleatoric) and chaotic processes. They're great for generating random numbers in various distributions to be put to use as pitches, rhythmic units, and whatever other musical purpose you can find for them. They are also perfect for exploring the worlds of constrained random number generation.

  • Chris Bagwell's SoX is a well-known multi-platform utility for audio file format conversion, resampling, effects processing, playback/record, and more. I was pleasantly surprised to find Marlon Schumacher's OM-SoX library, an OpenMusic interface to SoX's many useful features. Alas, the library refused to load until I entered this code in the Listener:
    	(setf *all-players* nil)

    [OpenMusic with sox] After I ran that command, the library loaded without complaint. I added it to my personal preferences.lisp file to load OM-SoX successfully at start-up, though I still need to specify the location of the sox binary in the Preferences/Externals dialog. When the location is applied, soxplayer becomes the default player, replacing jackplayer. In my experience so far, soxplayer is the better player, but I may be missing crucial JACK settings somewhere.

    To the right you can see a SoX-based patch made with a SOUND object, the sox-spectrogram object, and a PICTURE box. As I indicated earlier, some of SoX's more interesting functions — such as sox-compand, sox-denoise, sox-reverb and others — must be loaded in the Listener, but the spectrogram is immediately available for use.

  • David Echevarria and Yannick Chapuis designed the OMXmulti library to replace OpenMusic's MULTI-SEQ and POLY-SEQ classes. The XMULTI class retains all features of the original classes and adds many extra features for composers, e.g. chromatic/harmonic transformations (transposition and inversion), temporal and intervallic compression/expansion, an expanded notation symbol palette, and so forth. Thanks to the utility of those features, I find I use this library frequently as I develop a typical workflow with OpenMusic.
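A side note on OM-SoX: since the library drives Chris Bagwell's sox binary, the spectrogram shown above can also be produced outside OpenMusic, directly on the command line. This is only a sketch; the filenames are placeholders, and it assumes a sox build with the spectrogram effect compiled in (it requires libpng):

```shell
# Render a spectrogram of an audio file to a PNG with SoX directly;
# OM-SoX's sox-spectrogram object wraps this same functionality.
# The -n null output handler discards the audio, since only the
# image is wanted here.
sox input.wav -n spectrogram -o spectrogram.png
```
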

With the exception of OMChroma all these libraries are freely available. OMChroma can be purchased from the IRCAM shop, along with a variety of other proprietary OpenMusic libraries created at and maintained by IRCAM. After my satisfying experience with OMChroma I'm definitely interested in checking out some of those other libraries.

By the way, these and other OpenMusic libraries are fun to investigate, but when working on a real project try to restrain yourself from loading any more libraries than you actually need. As developer Jean Bresson advises, having many libraries loaded at the same time may make the system unstable and difficult to debug if things go wrong.

Documentation

IRCAM's official documentation for OpenMusic includes a user-level manual with tutorial patches, a set of introductory videos, and a reference manual for programmers. More material can be found out on the net. From my searches I discovered the tutorial videos from fiboribo (aka Federico Bonacossa) and the more advanced material on the Algorithmic Composer Web site. These are first-rate resources that will help bring you up to speed with OpenMusic; consider them both highly recommended.

OpenMusic's Help menus differ slightly depending on their context. The workspace Help connects the user to the tutorial patches, the user's manual (displayed in your system's default browser), and the function reference documentation. The Help in a patch window adds items to display the editor command keys, to show selected class and function definitions, and to abort an evaluation.

You can also access the user-level documentation for any object by selecting it and pressing the "d" key. Control-i will call up further information on a selected object.

OpenMusic has been used by many composers, including Tristan Murail, Kaija Saariaho, Gerard Grisey, and Brian Ferneyhough. IRCAM has published three books focused on the use of OpenMusic in real-world scenarios. The OM Composer's Book is a two-volume set of interviews with composers who describe their use of OpenMusic as a central component in their works. Contemporary Composition Techniques And OpenMusic is another book-length study of works and methods based on OpenMusic, with special attention to the thought and practices of Tristan Murail.

In addition to these resources, you can find many pieces of music composed with the assistance of OpenMusic. Tristan Murail, Gerard Grisey, and Kaija Saariaho are well-represented on YouTube and are often described as "spectralist" composers, i.e. they make use of audio spectra to determine aspects of their compositions, often using OpenMusic's tools for analyzing sound and representing the analysis in a composer-friendly format, typically as SDIF files (SDIF being the "Sound Description Interchange Format"). The data sets derived from the analyses can then be further processed by other OpenMusic objects.

Usability Issues & Tips

The version of OpenMusic described here is a beta version with various unresolved issues. Anders has indicated that some solutions will have to wait for the completion of a new audio/MIDI handler. Meanwhile, here are some tips for dealing with annoyances that may not be due to any problem with OpenMusic itself.

Drag-and-drop has been substantially improved since I tested this version's first beta, but if you have problems with it, or just prefer the keyboard, you'll be glad to learn that the arrow keys and Shift-arrow combinations will move your selections quickly and smoothly.

For Ubuntu users: Canonical's overlay scrollbar can be a frustrating annoyance, often crashing OpenMusic with a single move. The problem almost certainly lies within OpenMusic somewhere, but, even so, my advice is to disable the scrollbar completely with Synaptic, apt-get, or whatever removal tool suits your preference. Remove the widget and all associated libraries. Your windows will now have their normal sliders and scrollbars that do not randomly crash OpenMusic.
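On the Ubuntu releases current at the time of writing, the removal might look like the sketch below; the exact package names vary between releases, so list what is actually installed before purging anything:

```shell
# Find the installed overlay-scrollbar packages...
dpkg -l | grep overlay-scrollbar
# ...then purge them, adjusting the names to match the listing above.
sudo apt-get remove --purge overlay-scrollbar \
    liboverlay-scrollbar-0.2-0 liboverlay-scrollbar3-0.2-0
```
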

Study the preferences.lisp file generated for your workspace. Customizing this file can be very helpful. Some features are not represented in the GUI's Preference dialog or don't persist after you've selected them, but you can set them manually.

Anders welcomes error reports, so save those log files and send them to him when something goes wrong. From my correspondence with him and other members of the crew, it appears that IRCAM is solidly behind this project. They want a fully-operational version for Linux and they have committed resources to make it work, so send in those reports.

Outro

OpenMusic is deep software and I'm only starting to navigate its depths. It's not for everyone and it doesn't suit every purpose, but it is designed for general application as a composer's helper. If you're already into contemporary methods of composition, then you might find OpenMusic a powerful addition to your workflow. On the other hand, if you're new to the worlds of algorithmic and experimental music, OpenMusic makes an excellent learning environment. However you deploy it, OpenMusic is a welcome addition to any composer's software collection. It has already become an essential part of mine.

Comments (4 posted)

Brief items

Quotes of the week

Clearly putting the value of integer expressions into strings is a very esoteric corner case that very few C++ programs need to do, if it is this hard to do it and there is no obvious standard idiom that would work in all compilers and language vintages.
Tor Lillqvist (hat tip to Cesar Eduardo Barros)

We are losing collected collective wisdom at an alarming rate in the GNOME project as people (like myself) become less active and old wiki pages get deleted wholesale as we move to new infrastructure and the content gets "refreshed".

I don't think this is a good thing, but if it's a conscious decision that's one thing. If it's collateral damage and is happening unawares, then it's more serious (and consider this to be calling attention to it).

Dave Neary

I think LibreOffice is a pretty good model for WYSIWYG, apart from not being Emacs.
Richard Stallman

So yes, I'll agree that many people think R is hard. (Many people also think data science is hard, but that doesn't seem to be slowing the field.) I'll also agree that Python is an elegant and popular language useful for data work. I've got nothing against Python; if I had the time, I'd be interested in learning it myself. But it's still a big leap from "R is hard and I like Python" to "Python is displacing R." And as any good data scientist knows, the burden is on the researcher who makes a claim to prove it, not on his or her readers to conduct research in order to find it false.
Sharon Machlis

Comments (9 posted)

Docker 0.7 released

Docker is a system for the creation and deployment of applications within containers; it was covered here in October. The 0.7 release is now available, with a long list of new features, including the ability to run on standard Linux kernels, a fancy storage driver subsystem, better control over networking and communications between containers, and more. "It took us a while to find our bearings and adapt to the new, crazy pace of Docker’s development. But we are finally figuring it out. Starting with 0.7, we are placing Quality in all its forms – user interface, test coverage, robustness, ease of deployment, documentation, consistency of APIs – at the center of our development process."

Comments (none posted)

Gnuaccounting 0.8.4 available

Version 0.8.4 of the open source accounting package Gnuaccounting has been released. This version adds support for LibreOffice 4 and Apache OpenOffice 4, support for renaming scanned files by decoding bar codes within the scanned image, and support for the Single Euro Payments Area (SEPA) standard: "Invoice metadata can be added according to the DocTag standard and customer's IBAN and BIC numbers can now be entered in the context of the SEPA-preparations."

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

A Summer Spent on the LLVM Clang Static Analyzer for the Linux Kernel (Linux.com)

Linux.com profiles Eduard Bachmakov, a Google Summer of Code student who worked on static analysis for the Linux kernel. "Much work toward creating a static analyzer for the Linux kernel had already been done as part of the LLVM project. One of the goals of Bachmakov's internship was to demonstrate how the analyzer works through a tool that traces where errors come from and creates a report. (See an example of his checker tool, here.) He also set out to make a selection of checkers that make sense within the kernel. “A lot (of checks) while technically correct, don’t apply. Many checks are just omitted because it’s understood that this would never happen,” Bachmakov said. “These are issues that can’t be read from the code. These are things you have to know, so there were a lot of false positives.”"

Comments (6 posted)

Page editor: Nathan Willis

Announcements

Brief items

Creative Commons 4.0 licenses released

The Creative Commons has announced the availability of version 4.0 of its license suite. "We had ambitious goals in mind when we embarked on the versioning process coming out of the 2011 CC Global Summit in Warsaw. The new licenses achieve all of these goals, and more. The 4.0 licenses are extremely well-suited for use by governments and publishers of public sector information and other data, especially for those in the European Union. This is due to the expansion in license scope, which now covers sui generis database rights that exist there and in a handful of other countries."

Comments (8 posted)

Articles of interest

Seigo: Introducing Improv

On his blog, KDE hacker Aaron Seigo introduces Improv, the first hardware product from the Make•Play•Live community. Improv is a $75 development board, with some fairly beefy specs and running Mer OS, that will be shipping in January. It consists of two separate boards, the CPU card and the feature board, with the latter being an open hardware device. "The hardware of Improv is extremely capable: a dual-core ARM® Cortex™-A7 System on Chip (SoC) running at 1Ghz, 1 GB of RAM, 4 GB of on-board NAND flash and a powerful OpenGL ES GPU. To access all of this hardware goodness there are a variety of ports: 2 USB2 ports (one fullsize host, one micro OTG), SD card reader, HDMI, ethernet (10/100, though the feature card has a Gigabit connector; more on that below), SATA, i2c, VGA/TTL and 8 GPIO pins. The entire device weighs less than 100 grams, is passively cooled and fits in your hand."

Comments (31 posted)

New Books

Functional Programming Patterns in Scala and Clojure--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Functional Programming Patterns in Scala and Clojure" by Michael Bevilacqua-Linn.

Full Story (comments: none)

Calls for Presentations

Open Document Editors Devroom at FOSDEM

The Open Document Editors Devroom at FOSDEM will take place February 1, 2014 in Brussels, Belgium. The devroom is organized by Apache OpenOffice and LibreOffice. The call for talks deadline is December 22.

Full Story (comments: none)

Mini-DebConf in Paris

There will be a Mini-DebConf in Paris, France on January 18-19, 2014. The call for papers is open until January 10.

Full Story (comments: none)

Libre Graphics Meeting 2014

Libre Graphics Meeting 2014 will be held at the University of Leipzig in Leipzig, Germany, from April 2 to 5, 2014. The call for participation is open from now until January 15.

Comments (none posted)

Texas Linux Fest 2014

Texas Linux Fest 2014 will be held at the Austin Convention Center on June 13th and 14th, 2014. The call for papers is open until April 5, 2014.

Comments (none posted)

CFP Deadlines: November 27, 2013 to January 26, 2014

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline | Event Dates | Event | Location
December 1 | February 7–9 | devconf.cz | Brno, Czech Republic
December 1 | March 6–7 | Erlang SF Factory Bay Area 2014 | San Francisco, CA, USA
December 2 | January 17–18 | QtDay Italy | Florence, Italy
December 3 | February 21–23 | conf.kde.in 2014 | Gandhinagar, India
December 15 | February 21–23 | Southern California Linux Expo | Los Angeles, CA, USA
December 31 | April 8–10 | Open Source Data Center Conference | Berlin, Germany
January 7 | March 15–16 | Chemnitz Linux Days 2014 | Chemnitz, Germany
January 10 | January 18–19 | Paris Mini Debconf 2014 | Paris, France
January 15 | February 28–March 2 | FOSSASIA 2014 | Phnom Penh, Cambodia
January 15 | April 2–5 | Libre Graphics Meeting 2014 | Leipzig, Germany
January 17 | March 26–28 | 16. Deutscher Perl-Workshop 2014 | Hannover, Germany
January 19 | May 20–24 | PGCon 2014 | Ottawa, Canada
January 19 | March 22 | Linux Info Tag | Augsburg, Germany
January 22 | May 2–3 | LOPSA-EAST 2014 | New Brunswick, NJ, USA

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: November 27, 2013 to January 26, 2014

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
November 28 | Puppet Camp | Munich, Germany
November 30–December 1 | OpenPhoenux Hardware and Software Workshop | Munich, Germany
December 6 | CentOS Dojo | Austin, TX, USA
December 10–11 | 2013 Workshop on Spacecraft Flight Software | Pasadena, USA
December 13–15 | SciPy India 2013 | Bombay, India
December 27–30 | 30th Chaos Communication Congress | Hamburg, Germany
January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia
January 6–10 | linux.conf.au | Perth, Australia
January 13–15 | Real World Cryptography Workshop | NYC, NY, USA
January 17–18 | QtDay Italy | Florence, Italy
January 18–19 | Paris Mini Debconf 2014 | Paris, France

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds