Weekly Edition for May 8, 2014

Jolla and Mer

By Jake Edge
May 7, 2014
Embedded Linux Conference

As David Greaves held up a phone to begin his Embedded Linux Conference talk, he said that it would be no surprise that the phone was not running Windows Mobile or iOS—the surprise is that it is not running Android either. In fact, he was holding up the Qt/Wayland/systemd/Btrfs phone, better known as the Jolla phone. It is, he said, really a "Linux phone", unlike Android-based phones.

Greaves started out at British Telecom, then worked on Maemo and on its successor, MeeGo. The latter has been shut down, but Nokia and, especially, Intel did an awful lot of work in creating MeeGo. Two guys (Greaves and Carsten Munk) looked at MeeGo and saw much that was useful there, so they founded the Mer project, he said. MeeGo was a bit over-ambitious, though, so Mer slimmed down MeeGo's 1500 packages to around 300.

[David Greaves]

That's the background for the Jolla phone (which Greaves pronounced as "Yah-La", in contrast to our last attempt at capturing the pronunciation). But he stressed that he was not giving a "pitch for Jolla", the company he now works for. He is proud of what the team of 90 people has done in less than a year, in hardware design and in the development of an operating system and user interface, but the fact that Jolla has "proven" the Mer approach is also important.

One thing that Greaves's talk did not do was to dispel the murkiness around the boundaries and capabilities of the various components. Projects like Mer, Nemo Mobile, Sailfish OS, and the Jolla phone have been talked about over the years, but it is (and always has been) a bit unclear where one project stops and the next starts up. Part of that is likely caused by the overlap in participants among those projects, but it does, at times, get rather confusing.

In any case, Sailfish OS can (obviously) run on Jolla devices, but it can also run on the Galaxy S3 and the Nexus 4. That is part of the Sailfish for Android project. Support for more devices is in the works using the Hardware Adaptation Development Kit (HADK, or "haddock", following the nautical theme). The idea is that any device that is rootable and will run CyanogenMod will also be able to run Sailfish OS. That will allow many more folks to try out the phone operating system without buying any new hardware.


The user interface (UI) is important, Greaves said, and "a good place to start" looking at the technology used in Mer/Sailfish OS. The UI is based on Qt 5.2 and Qt Modeling Language (QML, the JavaScript-based UI language for Qt). Qt was chosen for a number of reasons: it performs well, is open source, is not Java-based, and it has a huge developer base. In addition, the existence of QML played a role in that decision because it allows rapid UI development, he said.

QtWayland replaces Android's SurfaceFlinger as the compositor for Sailfish OS. It is not currently using the Android HWcomposer (for 2D graphics) "strenuously", but there are plans to do more of that. The performance of QtWayland is "pretty damn good". Wayland was chosen because it is not difficult to work with and it meshes well with the Android shared-buffers approach, he said. In the end, the developers don't really notice Wayland at all, since QtWayland basically handles that interaction for them.

One area where Qt has problems is its size. It is a "big piece of software", which means it eats up a fair amount of phone resources. But Qt 5.2 has started modularizing the toolkit, so that only the parts that are needed have to be included.

Another technology used in Mer/Sailfish OS is systemd, which has "been quite polarizing" in the open-source world. There is something of a love it or hate it attitude about systemd, but they came down on the "love it" side, Greaves said. There were times where "we swore a little bit" at systemd, but by and large it served their needs well.

Systemd is "really fast", predictable, and well-documented, he said. The Journal is "a good fit for us" as well. There have been some issues with user sessions, but part of that may be that the version of systemd in use was adopted "a long time ago in systemd time". In fact, the biggest problem the project has run into with systemd is its rapid development pace and how closely tied it is to newer kernel versions. He hopes that Debian's adoption of systemd will help alleviate some of those problems in the future.

For Sailfish OS, there is a mix of systemd and Android init code. In fact, the Android init is being run as a service under systemd. The uevent data from Android is being interpreted by udev rules and mount units are created from the Android rc files. Jolla is still exploring how to manage the "mix of two worlds", he said.
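A setup along those lines can be sketched as a systemd service unit that launches the Android init; the unit name and paths below are hypothetical illustrations, not Jolla's actual configuration:

```ini
# /lib/systemd/system/droid-init.service (hypothetical sketch)
[Unit]
Description=Android init, run as a service under systemd
After=local-fs.target

[Service]
Type=simple
ExecStart=/usr/libexec/droid/init
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The Android rc files and uevent data would then be handled separately, as the article describes, by generated mount units and udev rules.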

Mer/Sailfish OS uses Btrfs for storing data. No partitions are used; Btrfs's dynamic subvolumes are used instead. Snapshots are used to restore the factory settings, but those settings are old at this point, so Jolla is looking into updating that snapshot. In addition, it is looking at using snapshots for features like rolling back unwanted changes and for easy backups.

For handling the network, ConnMan is used. They ran into some problems using it, but writing ifup/ifdown scripts would have been far worse, he said. A recent upgrade to a newer version of ConnMan has helped, too. There are some difficult issues around network handling, no matter what solution is used. The rules for choosing when to use various data sources (3G, WiFi, etc.) and when to switch between them are rather complicated. For phone features, oFono was used, while PulseAudio handled the audio path; both worked quite well, with few problems, he said.


System-on-Chip (SoC) makers typically only create board support packages (BSPs) for Android. "Open hardware is great", Greaves said, but there is not much of it around. Instead there are a bunch of user-space blobs for various pieces such as the camera and haptic feedback. All of those blobs use the Bionic C library, but Sailfish OS (and many other Linux systems) use glibc. Hybris (or libhybris) is meant to bridge that gap.

The idea is to allow Bionic-using code to co-exist with glibc-using code in the same process address space. It required wrappers around Bionic functions and some name mangling (prepending "android_" on Bionic functions for example). It was done with relatively few patches (around ten) to Bionic, he said, mostly for functionality like POSIX threads, errno, hardware vs. software floating point, and so on.
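The effect of that mangling can be illustrated with a toy resolver. This is a sketch, in Python, of the naming convention only; libhybris itself is implemented in C with real dynamic-linker machinery:

```python
# Toy illustration of libhybris-style name mangling: Bionic-side symbols
# are registered under an "android_" prefix so they cannot collide with
# the glibc symbols of the same name in one process address space.

bionic_symbols = {}

def bionic_export(fn):
    """Register a 'Bionic' function under its mangled name."""
    bionic_symbols["android_" + fn.__name__] = fn
    return fn

@bionic_export
def dlopen(path):
    # Stands in for Bionic's dlopen(); returns a fake handle.
    return f"bionic handle for {path}"

def hybris_dlsym(name):
    """Resolve a symbol the way a hybris wrapper would: look up the
    mangled Bionic name rather than the plain glibc one."""
    return bionic_symbols["android_" + name]

# A glibc-side wrapper calls through to the Bionic implementation.
handle = hybris_dlsym("dlopen")("/system/lib/libEGL.so")
print(handle)  # bionic handle for /system/lib/libEGL.so
```

The real wrappers, of course, must also paper over ABI differences (threads, errno, floating point) that the prefix alone does not solve.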

Using libhybris means that, for example, the Android EGL and GLES libraries can be used by glibc applications. Then it is just a matter of "rinse and repeat" to get wrappers for other Android libraries, like Gralloc, NFC, OpenCL, Camera, and so on. Greaves said that the project(s) will be trying to make libhybris work for any device that has a CyanogenMod build.

The rest of the phone is a "fairly standard Linux stack". It uses Git for phone backups, though some tweaks were made to handle videos, since "Git-committing videos is not a bright idea". It uses RPM as its package manager and GStreamer for multimedia content. It uses the Linux kernel too, of course, though the project doesn't do much with it: "if it boots and runs the hardware", it gets left alone.

If you build it ...

The Mer project has a Platform SDK that is used to build it. Mer "didn't want to have to worry" about which tool versions were installed on the builder's system, so it includes all of the right versions to be used inside of a chroot(). It uses Scratchbox2 for cross-compilation support. Scratchbox2 is "sane, sensible, and usable", Greaves said, which is quite a departure from the earlier Scratchbox. It is so much different (and better) that he wishes it had been given a new name.

Packages can be created easily. Typically it requires some minor tweaks to an existing package spec file. The result of the build is an RPM file that can be installed on the device. There is some LD_PRELOAD trickery in Scratchbox2 to invoke the cross-compiler (using Lua to do pattern-matching), all of which is "a little ugly", but makes it "seamless to the developer", he said.
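The idea behind that pattern-matched redirection can be sketched in Python; Scratchbox2 itself does this with LD_PRELOAD interception and Lua rules, and the target prefix below is purely illustrative:

```python
import fnmatch

# Illustrative Scratchbox2-style mapping: pattern-match the command being
# exec'd and redirect compiler invocations to the cross toolchain, while
# letting ordinary host tools run unmodified.
CROSS_PREFIX = "armv7hl-meego-linux-gnueabi-"  # hypothetical target triplet

MAPPING_RULES = [
    ("gcc", CROSS_PREFIX + "gcc"),
    ("g++", CROSS_PREFIX + "g++"),
    ("ld",  CROSS_PREFIX + "ld"),
]

def map_command(cmd):
    """Return the command that should actually be executed."""
    for pattern, target in MAPPING_RULES:
        if fnmatch.fnmatch(cmd, pattern):
            return target
    return cmd  # everything else runs natively

print(map_command("gcc"))   # armv7hl-meego-linux-gnueabi-gcc
print(map_command("make"))  # make
```

Because the interception happens at exec time, the package's own build scripts never need to know they are being cross-compiled, which is what makes it "seamless to the developer".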

The SDK can run in a virtual machine, which allows Windows and Mac users to use it. There are a small number of people in the company and project, so porting all of the tools to multiple platforms is not a viable solution. Virtualization solves that problem, he said.

Jolla and Mer use the Open Build Service (OBS), which is an "incredibly powerful build system", he said. Mer focuses on trying to enable vendors to make products, and OBS fits right into that model. Someone can submit a package to OBS and it will automatically build versions for multiple architectures.

It is not just packages that are built that way, though. The latest development trees of Mer and Sailfish OS are built regularly, typically triggered by Git commits. The automated system will invoke mic (originally, the Moblin Image Creator—and the reason Mer had to start with an "M", he said with a chuckle) to create an image that can be flashed onto a device for testing.

Working in the open

Mer and Jolla came out of Maemo and MeeGo, which taught them a lot about working in the open. In the MeeGo days, some of the now-Jolla folks sometimes gave Nokia and Intel a hard time about not working more in the open. But now, those folks appreciate the problem from the other side. It's a hard problem and they are trying to learn from the mistakes that have been made in the past.

For Jolla, there are (at least) two pieces to the problem: working with its upstream (Mer) and collaborating with other vendors on new devices. Jolla's internal policies are geared toward the way it works, and some of them may not mesh well with open-source projects, Mer in particular. For example, all commits must refer to or close a bug number (as the bug tracker is also used as a task tracker). There is an automated process that adds the commit message to the bug tracker, but that isn't particularly useful for Mer, which has a public bugzilla.
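A commit policy like that is commonly enforced with a Git commit-msg hook; here is a minimal sketch, using a hypothetical bug-reference format rather than Jolla's actual convention:

```python
import re

# Hypothetical bug-reference format: "Fixes: JB#1234" or
# "Contributes to: MER#567" somewhere in the commit message.
BUG_REF = re.compile(r"\b(?:Fixes|Contributes to):\s*(?:JB|MER)#\d+",
                     re.IGNORECASE)

def commit_message_ok(message):
    """Accept only commit messages that refer to or close a bug."""
    return bool(BUG_REF.search(message))

print(commit_message_ok("Fix compositor crash\n\nFixes: JB#1234"))  # True
print(commit_message_ok("Fix compositor crash"))                    # False
```

A hook script would run such a check on the proposed message and reject the commit when it fails.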

Jolla has an internal open-source policy as well. To start with, employees should participate in open-source projects as themselves, rather than as representatives of Jolla. But not all employees came from an open-source background, so some amount of education is needed. Policies covering interactions with projects are part of that as well. Helping to create those policies sometimes made Greaves feel like Bill & Ted: "Be excellent to each other", he said with a laugh.

The Mer project consolidates all of the open-source work that Jolla does (much of the UI layer is still closed). The Mer project's vision is "to make it easy to make devices". That explains what the project is doing and why, he said. For example, there is no UI for Mer. Nemo Mobile was a project started to create the middleware and UI for Mer, but most who are using Mer also use the middleware, so the middleware has been moved into Mer. That means that Nemo Mobile is just UI now, which shrinks it down considerably and, he hopes, makes it more accessible as a community project.

From the Mer project's point of view, the Jolla device is a "huge credibility shot" for the project. But Mer works with others too, including Ubuntu and Intel on libhybris. Mer aspires to be more than just code, because "code is not enough". For devices to be easier to make, it requires best-practices documentation in lots of areas: quality assurance, how to do releases, how to manage regressions, building images, handling source repositories, bug tracking, and more.

But, the Jolla phone has shown that the Mer approach works. It took around nine months for 90 people to deliver a working phone with a brand new UI. Others can do the same.

All of the projects would love to have more participants, he said. For example, the camera does not currently work on Sailfish for Android on the Galaxy S3, but Ubuntu got the camera working for Ubuntu Touch, so "come help make it work". In addition, phone platforms are really just 3G data platforms with a touchscreen and some other interesting hardware; phones are not the only things that can be created on such a platform.

While Mer-based devices may not be free-software compliant (because of the binary blobs), Mer will be ready when free drivers come along. It doesn't make sense to wait for those drivers and then to start working on phones and other devices, he said. In that scenario, devices that are free-software-only will have fallen way too far behind.

Comments (19 posted)

A look at LyX 2.1

By Nathan Willis
May 7, 2014

LyX is a graphical document editor that serves as a front end to TeX and various TeX extensions (LaTeX, XeTeX, etc.). In that sense, LyX serves as a bridge between the high-precision world of TeX typesetting and the easier-to-use WYSIWYG world of word processors. Version 2.1.0 was released on April 25, incorporating updates to the handling of special-purpose content like equations or phonetic notation, improvements to LaTeX option support, and several new page-layout features.

The new release is available for download as source code and in binary packages for a variety of Linux distributions. It requires a working LaTeX installation (due to LaTeX's modularity, almost any modern version should be compatible) as well as Qt and Python.

LyX aims at a target that, to some, sounds inherently unattainable: it is designed to make TeX documents as easy to work with as generic letters and memos are in a word processor like LibreOffice Writer. The challenge stems from the fact that TeX was created to enable fine-grained manipulation of typesetting features that word processors gloss right over. TeX succeeds at this goal, which is why it is the de-facto document-preparation system for scientific research—after all, when one's PhD or professional reputation is on the line, having the equations look "more-or-less correct" just does not cut it. But, in practice, TeX achieves its precision by relying on well-honed macro collections and predefined document classes. Projects like LaTeX and BibTeX provide useful macros and shortcuts that authors can employ rather than writing raw TeX markup by hand.
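As a small illustration of that division of labor, a minimal LaTeX document leans on a document class and math macros rather than raw TeX primitives:

```latex
\documentclass{article}  % a generic class; journals supply their own
\begin{document}
\section{Results}
The quadratic formula, typeset with LaTeX math macros:
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
\end{document}
```

The class and macros decide the typography; the author mostly supplies structure and content, which is the layer LyX's interface sits on top of.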

[A letter in LyX 2.1]

Even so, the fact that authors can always drop down into TeX markup to handle the inevitable corner cases or specify mathematical formulas directly in math syntax is a strength—when no LaTeX option generates perfect output, TeX is always there to come to the rescue. Consequently, there is a school of thought which says that LyX, with its word-processor-like interface, does not add to the TeX-writing experience, it simply hides it behind another layer of indirection. That is, LyX users are not freed from the need to understand LaTeX and TeX, since they will eventually confront the same corner cases.

But that viewpoint short-sells much of what LyX has to offer. Yes, LyX provides a word-processor-like graphical user interface (GUI), but the GUI does not hide TeX's features from the user—it provides a way to access them via visual cues in the document and GUI components (e.g., toolbars and menu items). LyX does not reduce TeX to an implementation detail, but it makes it easier to create a valid TeX document—and, perhaps more importantly, it makes it more difficult to create a bad TeX document.

[Classes in LyX 2.1]

Many of the improvements found in LyX 2.1 illustrate this fact. For example, the most basic decision about a LaTeX document is the class to which it belongs, but not only do generic offerings like "letter" and "article" exist; each field, journal, and professional organization can have its own stylistic rules, with a separate class to represent them. In LyX 2.1, all of these classes are organized into categories (as opposed to the flat list of previous generations), which helps make sense out of the plethora of alternatives.

Similarly, version 2.1 adds GUI support for accessing far more LaTeX options, and it does a better job of presenting and explaining those options to the user. LaTeX provides macros that simplify formatting for common document features, such as code listings; LyX exposes the various options in pop-up dialogs. In 2.1, these option panels have been rewritten to standardize terminology, make all options visible at the same time, and to allow presets for common values. Support for several new commands has been added as well, many of which deal with mathematical expressions or with horizontal spacing tweaks.

[LaTeX options in LyX 2.1]

Table support got an update in 2.1. Arguably the most useful new feature is the ability to move or swap table rows and columns (either with keystrokes or menu commands). It is also now possible to rotate tables on the page—to any arbitrary angle, not just 90-degree increments. Perhaps it is difficult to imagine a use for such rotation functionality, but that might be overthinking matters.

The removal of arbitrary restrictions is a recurring theme in TeX; LyX 2.1 also adds support for custom paragraph shapes, but in removing the restriction that limited paragraphs to simple rectangles, the project could have settled for implementing a fixed set of polygons. Instead, any shape is supported, including unusual options like shapes with holes in the middle. It is also now possible to nest multiple columns of text within an existing column (which is useful for citing and quoting other documents where the page layout itself is important).

Several specific use-cases for LyX received their own improvements in the 2.1 release, such as the beamer class for creating presentation slides, phonetic notation using the International Phonetic Alphabet (IPA), and mathematical formulas. The beamer improvements center around a rewrite of the formerly awkward beamer layout module, but also add some new features like the ability to overlay content. Real IPA support is new; the IPA characters are accessible to the user through a special toolbar, providing an editing workflow that resembles the one used to access special math characters. In previous releases, the rudimentary IPA support was a hack of the existing math-editing code, so this represents a step forward.

Improved support for math typesetting is hardly a surprise; TeX was originally created by Donald Knuth to help him typeset The Art of Computer Programming, and TeX is used heavily by math journals. Nevertheless, there is always room for advancement. LyX 2.1 improves on its formula-and-equation support by adding a document-wide "math font" setting (which will not get overwritten if one changes the font of other body text), by adding a unicode-math package that supports math OpenType fonts (such as STIX or XITS), and by adding a new inline "equation editor" mode.
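A unicode-math preamble of the sort this setting corresponds to looks roughly like the following; XITS is one of the OpenType math fonts mentioned above, and this requires the XeTeX (or LuaTeX) engine:

```latex
% Requires XeTeX or LuaTeX; unicode-math loads fontspec for OpenType support.
\usepackage{unicode-math}
\setmathfont{XITS Math}  % document-wide math font, independent of body text
\setmainfont{XITS}       % body text can change without touching the math font
```

Keeping the math font a separate, document-wide setting is what lets LyX change the body font without overwriting the formulas' appearance.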

[Equation editing in LyX 2.1]

Several new languages are supported in LyX 2.1, if one activates the optional XeTeX output engine. XeTeX is best known for its support of OpenType, Apple Advanced Typography (AAT), and Graphite font systems, which enable typesetting many writing systems that are not supported (or are not easily supported) by pdfTeX, the default LyX output engine. XeTeX also supports several microtypographic features like hanging punctuation. It is also important to note that "language support" in LyX has a stronger meaning than it does in some other applications: switching the language setting of a document causes LyX to automatically make adjustments to features like the type of quotation marks used.

Speaking of typography, LyX 2.1 also adds support for several new TeX fonts. Unlike the WYSIWYG word-processor world, in which users can highlight any characters they want and put them in a different font via a drop-down menu, in the TeX world, font settings are often a document-centric decision. If the document class designer specifies one font for subheadings and another for body text, then that is simply how it goes, unless one adds document-specific overrides. LyX now ships with more built-in fonts available, and makes it easier for authors to add their own.

Finally, there are many small additions and enhancements in the new release, such as the ability to write multilingual captions, the ability to insert standard-issue "chemical risk and safety" statements, and a new dependency on libmagic to determine the file type of external resources (as opposed to built-in format detection). Nevertheless, there are also some limitations to make note of. Most importantly, LyX 2.1 is not yet compatible with Python 3; if Python 3 is the default interpreter on the machine where LyX is installed, one should expect some trouble.

In all, LyX 2.1 is an incremental improvement over previous LyX releases—at least where everyday document editing is concerned—but there are still some significant enhancements that make a reexamination worthwhile for those who have found LyX too limiting in the past. There is still a learning curve, since the conceptual model of TeX is different enough from WYSIWYG word processing that some amount of retraining is inevitable. But if you think TeX is too difficult to learn, a few minutes with LyX is a worthwhile investment.

[Those interested in TeX development should also see Knuth's TeX Tune-Up of 2014 [PDF] from TUGboat 35–1]

Comments (7 posted)

Page editor: Jonathan Corbet


Privacy Badger gives teeth to Do Not Track

By Nathan Willis
May 7, 2014

The Electronic Frontier Foundation (EFF) has released a browser add-on called Privacy Badger that repurposes the familiar "ad-blocking extension" concept to filter and block out web-tracking tools, rather than advertisements. Privacy Badger detects a number of behavior-tracking methods, attempting to block those that are either loaded invisibly or otherwise operate without the user's consent. In addition to its emphasis on privacy protection, though, it also offers several controls that distinguish it from ad-centric blockers like AdBlock Plus. Perhaps more interestingly, the extension is accompanied by an EFF policy through which sites can be whitelisted by adhering to privacy-respecting rules.

Privacy Badger was announced on May 1, with builds available for Firefox (although not Firefox for Android) and for Chrome/Chromium. The stated purpose of the extension is to help users combat "intrusive and objectionable practices in the online advertising industry, and many advertisers' outright refusal to meaningfully honor Do Not Track requests."

Do Not Track (DNT), of course, is an HTTP header intended to let users specify that they wish to opt out of web-tracking mechanisms. DNT was designed to be a voluntary mechanism that advertisers and data collectors would use as a means of self-regulation. Those businesses have done their best to undermine DNT, however, as many privacy advocates predicted they would.

Among other tactics, various advertising associations devised their own "interpretations" of DNT that, predictably, still involve tracking DNT users. On April 30, Yahoo's "Privacy Team" publicly announced that the company will start ignoring DNT completely, on the grounds that there is no "single standard" about the meaning of DNT. With the voluntary-self-policing loop now neatly closed, it should probably come as no surprise that the EFF followed up with a technical solution—although the timing of events could still be coincidental.

Privacy Badger does care

Privacy Badger is based on a fork of the AdBlock Plus engine; it blocks certain HTTP requests, but rather than blocking ads, the blocked content is limited to third-party requests (scripts, cookies, images, or other embedded resources) that are believed to be used as a user-tracking mechanism. These third-party resources are what Privacy Badger regards as "trackers"; they tend to be invisible to the user, but they allow the third-party domain to follow the user across multiple sites by logging the HTTP requests (usually setting a cookie containing some form of identifying string). Not requesting these resources in the first place prevents the remote party from tracking the user.

The majority of these trackers emanate from the domains of third-party services, but some come from sites that otherwise contribute functionality to the page. Since blocking all third-party resources would break many sites, the extension attempts to distinguish between necessary resources and unnecessary ones. The EFF collected data prior to the release of the extension and created a whitelist of patterns that Privacy Badger will not block.

For third-party trackers not on the whitelist, however, Privacy Badger starts off by giving each site the benefit of the doubt. It includes the DNT header with each request, and does not block the tracker when it is first encountered. But if the tracker is encountered on another, unrelated site, that is regarded as evidence that it is violating the user's privacy, and it is added to the block list.
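That learning step can be sketched as follows. This is a simplification: the threshold used here is hypothetical, and Privacy Badger's actual bookkeeping differs.

```python
# Simplified sketch of the learning heuristic: a third-party domain that
# shows up across multiple unrelated first-party sites is presumed to be
# tracking the user and gets added to the block list.
from collections import defaultdict

BLOCK_THRESHOLD = 3          # hypothetical; not the extension's exact rule
seen_on = defaultdict(set)   # tracker domain -> first-party sites seen on
blocked = set()

def observe(first_party, tracker):
    """Record a third-party request and block the tracker once it has
    been seen on enough unrelated sites."""
    seen_on[tracker].add(first_party)
    if len(seen_on[tracker]) >= BLOCK_THRESHOLD:
        blocked.add(tracker)

for site in ("news.example", "shop.example", "blog.example"):
    observe(site, "tracker.example")

print("tracker.example" in blocked)  # True
```

A tracker seen on only one site stays unblocked, which is the "benefit of the doubt" the article describes.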

[Privacy Badger menu]

The status of the current page can be examined by opening the Privacy Badger menu (which, on Firefox, is placed in the "Add-on Bar"). All trackers detected in the current page are shown, color-coded to indicate their blocking state. Green means that the tracker is being allowed, yellow means it is a cross-domain tracker on the whitelist (that is, it is being permitted to prevent the site from breaking), and red means it is being blocked. The very first time a user employs Privacy Badger, all of the trackers will be either green or yellow, but after the user visits just a few sites, the privacy-violating ones are quickly recognized and turned red.

For the whitelisted tracker domains, Privacy Badger loads the resources (e.g., scripts or images), but it still blocks user-tracking cookies from the domain, which should provide some measure of privacy protection. It is not always possible to determine whether a given cookie is used for user tracking purposes or not, of course; the heuristic used allows cookies that have some other clear purpose (such as setting the preferred language), but the EFF notes that more work on the problem would be helpful.

Tracker go home

In practice, the Privacy Badger menu is a nice visualization aid. It shows only the domain name of the tracker, whereas AdBlock Plus and similar extensions generally present lengthy URLs and the full regular expressions used to match them. That means skimming through it is a lot easier.

In addition, the green/yellow/red status of each tracker also has a slider (albeit one that has just three discrete positions), so users can easily toggle between the settings for every tracker if they so desire. That is probably most useful for enabling a blocked tracker that is hampering site functionality, but it can be employed for other tasks, too (like seeing how many yellow trackers one can disable and still have a functioning browser session). Here, again, the ad-blocking extensions tend to expose a significantly less usable interface: if a blocked item is breaking page functionality, one must usually hunt through the blocked-items window, enabling and disabling specific expressions in hopes of finding it.

To be perfectly fair, though, ad blockers have a broader scope of content to try to match against, so it is only natural that they have more complicated tools with which to tune the results. The EFF goes to great lengths to explain that Privacy Badger is not, fundamentally speaking, an ad blocker. It will, as a matter of blocking third-party trackers, block third-party-tracker-laden ads, but users interested in reducing their exposure to advertising will need to find another extension to handle the task.

There are two other important categories of tracker that Privacy Badger does not protect against: "first-party" trackers and trackers that rely only on browser fingerprinting techniques. First-party trackers are tracking elements sent by the domain of the main URL itself. As with the whitelisted domains mentioned earlier, the concern with blocking resource requests too aggressively is breaking the site's functionality; nevertheless, the EFF notes that it hopes to implement some level of first-party tracker blocking in a subsequent release.

Browser fingerprinting is a different beast entirely. The technique relies on gathering specific information about the user by recording information from the browser's User-Agent string, installed plugins, local time zone, accepted HTTP headers, and other system data that can be queried remotely. The EFF's Panopticlick demonstrates just how much data is leaked in this manner. As with first-party trackers, the Privacy Badger project says it hopes to add fingerprinting countermeasures in a future release, but those countermeasures will certainly involve techniques beyond tracker blocking.

Getting on the straight and narrow

As mentioned earlier, Privacy Badger includes the DNT header in each HTTP request; consequently, sites that respect the header and do not return user trackers do not get blocked. The EFF is using this approach as a means to promote DNT adoption. Specifically, advertisers (and other tracker-using sites) that specify a DNT-respecting policy will, in future versions of Privacy Badger, automatically be unblocked.

The EFF has written a proposed DNT policy as part of the initiative. The plan is that a site would store the policy document in plain text at a well-known location (specified in the current draft), where Privacy Badger and other programs could locate it automatically and take the appropriate action in response (such as whitelisting the site). The hope is that, if DNT policy statements become as widespread as robots.txt files are for search-engine exclusion, tracker-blocking programs like Privacy Badger can dispense with the built-in whitelist approach currently in use.

But dispensing with the hand-crafted whitelist is only part of the goal. The ultimate point is for sites to respect the DNT header. For that to happen, Privacy Badger and related tools will have to be deployed in significant enough numbers for advertisers to take notice. The EFF notes on the DNT policy page that it is open to having further discussions about the wording of the DNT policy document. If that policy document does take off, it would in essence be the de-facto standard interpretation of DNT's meaning—which would mean, in turn, that there is a consensus around DNT, which would eliminate the "no one agrees on what DNT means" argument espoused recently by Yahoo.

Of course, if that argument is really a spurious claim only tossed out to provide cursory justification for what the company wants to do anyway, then Yahoo and other tracker-using sites will find another argument and continue to track users. It is hard to handicap the chances that Privacy Badger has for making a significant impact on user-tracking behavior. It may remain a useful tool that only a few users employ (as is the case with ad-blocking extensions and other EFF privacy tools like HTTPS Everywhere). On the other hand, browser makers could take the concept to heart and build it into future releases, changing the game significantly.

For now, Privacy Badger is an alpha release, and much more work is still to come. But it is an easy-to-use tool, and it both offers protection against web trackers and sheds light on just how pervasive web-tracker deployment is; both are useful outcomes. The mobile versions of Chrome and Firefox are on the agenda for future releases, as is Opera support; on the project site, the EFF asks for developers interested in working on Safari and Internet Explorer extensions to make contact. There is no telling how well the project will fare as a DNT enforcement tool, but it may be the best option currently available.

Comments (6 posted)

New vulnerabilities

asterisk: denial of service

Package(s):asterisk CVE #(s):CVE-2014-2288 CVE-2014-2289
Created:May 5, 2014 Updated:May 9, 2014
Description: From the CVE entries:

The PJSIP channel driver in Asterisk Open Source 12.x before 12.1.1, when qualify_frequency "is enabled on an AOR and the remote SIP server challenges for authentication of the resulting OPTIONS request," allows remote attackers to cause a denial of service (crash) via a PJSIP endpoint that does not have an associated outgoing request. (CVE-2014-2288)

res/res_pjsip_exten_state.c in the PJSIP channel driver in Asterisk Open Source 12.x before 12.1.0 allows remote authenticated users to cause a denial of service (crash) via a SUBSCRIBE request without any Accept headers, which triggers an invalid pointer dereference. (CVE-2014-2289)

Gentoo 201405-05 asterisk 2014-05-03

Comments (1 posted)

chromium-browser: multiple vulnerabilities

Package(s):chromium-browser CVE #(s):CVE-2014-1730 CVE-2014-1731 CVE-2014-1732 CVE-2014-1733 CVE-2014-1734 CVE-2014-1735 CVE-2014-1736
Created:May 5, 2014 Updated:May 16, 2014
Description: From the CVE entries:

Google V8, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly store internationalization metadata, which allows remote attackers to bypass intended access restrictions by leveraging "type confusion" and reading property values, related to i18n.js. (CVE-2014-1730)

core/html/HTMLSelectElement.cpp in the DOM implementation in Blink, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly check renderer state upon a focus event, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that leverage "type confusion" for SELECT elements. (CVE-2014-1731)

Use-after-free vulnerability in browser/ui/views/ in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux allows remote attackers to cause a denial of service or possibly have unspecified other impact via an INPUT element that triggers the presence of a Speech Recognition Bubble window for an incorrect duration. (CVE-2014-1732)

The PointerCompare function in Seccomp-BPF, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly merge blocks, which might allow remote attackers to bypass intended sandbox restrictions by leveraging renderer access. (CVE-2014-1733)

Multiple unspecified vulnerabilities in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2014-1734)

Multiple unspecified vulnerabilities in Google V8, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2014-1735)

From the Debian advisory:

SkyLined discovered an integer overflow issue in the v8 javascript library. (CVE-2014-1736)

Gentoo 201408-16 chromium 2014-08-30
Ubuntu USN-2298-1 oxide-qt 2014-07-23
openSUSE openSUSE-SU-2014:0669-1 chromium 2014-05-16
openSUSE openSUSE-SU-2014:0668-1 chromium 2014-05-16
Mageia MGASA-2014-0213 chromium-browser-stable 2014-05-10
Debian DSA-2920-1 chromium-browser 2014-05-03

Comments (none posted)

cups-filters: command execution

Package(s):cups-filters CVE #(s):CVE-2014-4336 CVE-2014-4337 CVE-2014-4338
Created:May 6, 2014 Updated:November 4, 2014
Description: From the Red Hat bugzilla:

According to Sebastian Krahmer, the initial fix for CVE-2014-2707 was incomplete: the issue was reported as fixed in cups-filters 1.0.51, but the fix turned out to be partial, with the full fix arriving in 1.0.53.

On June 19, CVE entries CVE-2014-4336, CVE-2014-4337, and CVE-2014-4338 were assigned to this issue. From the Mageia advisory:

The CVE-2014-2707 issue with malicious broadcast packets, which had been fixed in Mageia Bug 13216 (MGASA-2014-0181), had not been completely fixed by that update. A more complete fix was implemented in cups-filters 1.0.53 (CVE-2014-4336).

In cups-filters before 1.0.53, out-of-bounds accesses in the process_browse_data function when reading the packet variable could lead to a crash, thus resulting in a denial of service (CVE-2014-4337).

In cups-filters before 1.0.53, if there was only a single BrowseAllow line in cups-browsed.conf and its host specification was invalid, this was interpreted as if no BrowseAllow line had been specified, which resulted in it accepting browse packets from all hosts (CVE-2014-4338).

Oracle ELSA-2015-2360 cups-filters 2015-11-23
Mandriva MDVSA-2015:100 cups-filters 2015-03-29
Scientific Linux SLSA-2014:1795-1 cups-filters 2014-11-03
Oracle ELSA-2014-1795 cups-filters 2014-11-03
CentOS CESA-2014:1795 cups-filters 2014-11-04
Red Hat RHSA-2014:1795-01 cups-filters 2014-11-03
Mageia MGASA-2014-0267 cups-filter 2014-06-19
Fedora FEDORA-2014-5765 cups-filters 2014-05-06

Comments (none posted)

fish: multiple vulnerabilities

Package(s):fish CVE #(s):CVE-2014-2905 CVE-2014-2914 CVE-2014-2906
Created:May 6, 2014 Updated:October 9, 2014
Description: From the Red Hat bugzilla:

A number of vulnerabilities were reported in fish versions prior to 2.1.1:

CVE-2014-2905: fish universal variable socket vulnerable to permission bypass leading to privilege escalation

fish, from at least version 1.16.0 to version 2.1.0 (inclusive), does not check the credentials of processes communicating over the fishd universal variable server UNIX domain socket. This allows a local attacker to elevate their privileges to those of a target user running fish, including root.

fish version 2.1.1 is not vulnerable.

CVE-2014-2906: fish temporary file creation vulnerable to race condition leading to privilege escalation

fish, from at least version 1.16.0 to version 2.1.0 (inclusive), creates temporary files in an insecure manner.

Versions 1.23.0 to 2.1.0 (inclusive) execute code from these temporary files, allowing an attacker to escalate privileges to those of any user running fish, including root.

Additionally, from at least version 1.16.0 to version 2.1.0 (inclusive), fish will read data using the psub function from these temporary files, meaning that the input of commands used with the psub function is under the control of the attacker.

fish version 2.1.1 is not vulnerable.

CVE-2014-2914: fish web interface does not restrict access leading to remote code execution

fish, from version 2.0.0 to version 2.1.0 (inclusive), fails to restrict connections to the Web-based configuration service (fish_config). This allows remote attackers to execute arbitrary code in the context of the user running fish_config.

The service is generally only running for short periods of time.

fish version 2.1.1 restricts incoming connections to localhost only. At this stage, users should avoid running fish_config on systems where there are untrusted local users, as they are still able to connect to the fish_config service and elevate their privileges to those of the user running fish_config.

Gentoo 201412-49 fish 2014-12-28
Mageia MGASA-2014-0404 fish 2014-10-09
Fedora FEDORA-2014-11850 fish 2014-10-08
Fedora FEDORA-2014-11838 fish 2014-10-08
Fedora FEDORA-2014-9402 fish 2014-08-23
Fedora FEDORA-2014-9407 fish 2014-08-23
Fedora FEDORA-2014-5783 fish 2014-05-08
Fedora FEDORA-2014-5794 fish 2014-05-06

Comments (none posted)

kernel: privilege escalation

Package(s):kernel CVE #(s):CVE-2014-0196
Created:May 6, 2014 Updated:July 24, 2014
Description: From the Ubuntu advisory:

A flaw was discovered in the Linux kernel's pseudo tty (pty) device. An unprivileged user could exploit this flaw to cause a denial of service (system crash) or potentially gain administrator privileges.

Oracle ELSA-2015-0290 kernel 2015-03-12
Oracle ELSA-2014-1392 kernel 2014-10-21
Oracle ELSA-2014-0678 kernel 2014-07-23
Ubuntu USN-2260-1 linux-lts-trusty 2014-06-27
Oracle ELSA-2014-0771 kernel 2014-06-19
SUSE SUSE-SU-2014:0807-1 Linux Kernel 2014-06-18
Red Hat RHSA-2014:0678-02 kernel 2014-06-10
openSUSE openSUSE-SU-2014:0766-1 Evergreen 2014-06-06
Red Hat RHSA-2014:0557-01 kernel-rt 2014-05-27
Ubuntu USN-2227-1 linux-ti-omap4 2014-05-27
Mageia MGASA-2014-0238 kernel-vserver 2014-05-24
Mageia MGASA-2014-0234 kernel-tmb 2014-05-23
Mageia MGASA-2014-0236 kernel-tmb 2014-05-24
Mageia MGASA-2014-0237 kernel-rt 2014-05-24
Mageia MGASA-2014-0235 kernel-linus 2014-05-24
SUSE SUSE-SU-2014:0696-1 Linux kernel 2014-05-22
Fedora FEDORA-2014-6354 kernel 2014-05-21
SUSE SUSE-SU-2014:0683-1 Linux kernel 2014-05-20
Mageia MGASA-2014-0229 kernel-vserver 2014-05-19
Mageia MGASA-2014-0227 kernel-rt 2014-05-19
Mageia MGASA-2014-0226 kernel-linus 2014-05-19
Mageia MGASA-2014-0228 kernel 2014-05-19
Red Hat RHSA-2014:0520-01 kernel 2014-05-20
openSUSE openSUSE-SU-2014:0678-1 kernel 2014-05-19
openSUSE openSUSE-SU-2014:0677-1 kernel 2014-05-19
Mageia MGASA-2014-0225 kernel 2014-05-18
Red Hat RHSA-2014:0512-01 kernel 2014-05-19
SUSE SUSE-SU-2014:0667-1 Linux Kernel 2014-05-16
Debian DSA-2928-1 linux-2.6 2014-05-14
Debian DSA-2926-1 kernel 2014-05-12
Fedora FEDORA-2014-6122 kernel 2014-05-10
Ubuntu USN-2201-1 linux-lts-saucy 2014-05-05
Ubuntu USN-2200-1 linux-lts-raring 2014-05-05
Ubuntu USN-2199-1 linux-lts-quantal 2014-05-05
Ubuntu USN-2204-1 kernel 2014-05-05
Ubuntu USN-2203-1 kernel 2014-05-05
Ubuntu USN-2202-1 kernel 2014-05-05
Ubuntu USN-2198-1 kernel 2014-05-05
Ubuntu USN-2196-1 kernel 2014-05-05
Ubuntu USN-2197-1 EC2 kernel 2014-05-05
CentOS CESA-2014:X009 kernel 2014-06-16
Mandriva MDVSA-2014:124 kernel 2014-06-13

Comments (1 posted)

libpng12: multiple vulnerabilities

Package(s):libpng12 CVE #(s):CVE-2013-7353 CVE-2013-7354
Created:May 2, 2014 Updated:June 10, 2014

Description: From the openSUSE bug reports:

CVE-2013-7353: An integer overflow leading to a heap-based buffer overflow was found in the png_set_sPLT() and png_set_text_2() API functions of libpng. An attacker could create a specially crafted image file that, when rendered by an application written to explicitly call the png_set_sPLT() or png_set_text_2() function, could cause libpng to crash or execute arbitrary code with the permissions of the user running that application.

The vendor mentions that internal calls use safe values. These issues could potentially affect applications that use the libpng API. Apparently no such applications were identified.

CVE-2013-7354: An integer overflow leading to a heap-based buffer overflow was found in the png_set_unknown_chunks() API function of libpng. An attacker could create a specially crafted image file that, when rendered by an application written to explicitly call the png_set_unknown_chunks() function, could cause libpng to crash or execute arbitrary code with the permissions of the user running that application.

The vendor mentions that internal calls use safe values. These issues could potentially affect applications that use the libpng API. Apparently no such applications were identified.

Mandriva MDVSA-2015:071 libpng12 2015-03-27
Gentoo 201408-06 libpng 2014-08-14
Fedora FEDORA-2014-6892 mingw-libpng 2014-06-10
Mandriva MDVSA-2014:084 libpng 2014-05-12
Mageia MGASA-2014-0210 libpng 2014-05-10
Mageia MGASA-2014-0211 libpng 2014-05-10
openSUSE openSUSE-SU-2014:0616-1 libpng15 2014-05-07
openSUSE openSUSE-SU-2014:0618-1 libpng12 2014-05-07
openSUSE openSUSE-SU-2014:0604-1 libpng12 2014-05-02

Comments (none posted)

libvirt: denial of service

Package(s):libvirt CVE #(s):CVE-2013-7336
Created:May 2, 2014 Updated:May 7, 2014
Description: An unprivileged user can, through a specific sequence of calls, cause the libvirtd daemon to crash.
Gentoo 201412-04 libvirt 2014-12-09
Ubuntu USN-2209-1 libvirt 2014-05-07
openSUSE openSUSE-SU-2014:0593-1 libvirt 2014-05-02

Comments (none posted)

mediawiki: cross-site scripting

Package(s):mediawiki CVE #(s):CVE-2014-2853
Created:May 6, 2014 Updated:May 9, 2014
Description: From the CVE entry:

Cross-site scripting (XSS) vulnerability in includes/actions/InfoAction.php in MediaWiki before 1.21.9 and 1.22.x before 1.22.6 allows remote attackers to inject arbitrary web script or HTML via the sort key in an info action.

Gentoo 201502-04 mediawiki 2015-02-07
Mandriva MDVSA-2014:083 mediawiki 2014-05-08
Mageia MGASA-2014-0197 mediawiki 2014-04-28
Fedora FEDORA-2014-5684 mediawiki 2014-05-06
Fedora FEDORA-2014-5691 mediawiki 2014-05-06

Comments (none posted)

nagios-nrpe: code execution

Package(s):nagios-nrpe CVE #(s):CVE-2014-2913
Created:May 2, 2014 Updated:December 8, 2014

Description: From the openSUSE bug report:

A remote command execution flaw was discovered in Nagios NRPE when command arguments are enabled. A remote attacker could use this flaw to execute arbitrary commands. This issue affects versions 2.15 and older.

Fedora FEDORA-2014-5896 nrpe 2014-12-07
Fedora FEDORA-2014-5897 nrpe 2014-11-19
Gentoo 201408-18 nrpe 2014-08-30
SUSE SUSE-SU-2014:0682-1 nagios-nrpe, nagios-nrpe-debuginfo, 2014-05-20
Mageia MGASA-2014-0217 nrpe 2014-05-15
openSUSE openSUSE-SU-2014:0594-1 nrpe 2014-05-02
openSUSE openSUSE-SU-2014:0603-1 nagios-nrpe 2014-05-02

Comments (none posted)

ndjbdns: denial of service

Package(s):ndjbdns CVE #(s):
Created:May 1, 2014 Updated:May 7, 2014
Description: Version 1.06 of N-DJBDNS includes fixes for two denial-of-service vulnerabilities. See the ndjbdns changelog for more information.
Fedora FEDORA-2014-5511 ndjbdns 2014-05-01
Fedora FEDORA-2014-5471 ndjbdns 2014-05-01

Comments (none posted)

neutron: unintended access to other tenant networks

Package(s):neutron CVE #(s):CVE-2014-0056
Created:May 6, 2014 Updated:May 30, 2014
Description: From the Ubuntu advisory:

Aaron Rosen discovered that OpenStack Neutron did not properly perform authorization checks when creating ports when using plugins relying on the l3-agent. A remote authenticated attacker could exploit this to access the network of other tenants.

Red Hat RHSA-2014:0516-01 openstack-neutron 2014-05-29
Ubuntu USN-2194-1 neutron 2014-05-05

Comments (none posted)

openshift-origin-broker-util: privilege escalation

Package(s):openshift-origin-broker-util CVE #(s):CVE-2014-0164
Created:May 2, 2014 Updated:May 7, 2014

Description: From the Red Hat advisory:

It was discovered that the mcollective client.cfg configuration file was world-readable by default. A malicious, local user on a host with the OpenShift Broker installed could read sensitive information regarding the mcollective installation, including mcollective authentication credentials. A malicious user able to obtain said credentials would potentially have full control over all OpenShift nodes managed via mcollective.

Red Hat RHSA-2014:0460-01 openshift-origin-broker-util 2014-05-01
Red Hat RHSA-2014:0461-01 openshift-origin-broker-util 2014-05-01

Comments (none posted)

openssl: denial of service

Package(s):openssl CVE #(s):CVE-2014-0198
Created:May 5, 2014 Updated:July 24, 2014
Description: From the Mageia advisory:

A null pointer dereference bug in OpenSSL 1.0.1g and earlier in so_ssl3_write() could possibly allow an attacker to generate an SSL alert that would cause OpenSSL to crash, resulting in a denial of service.

SUSE SUSE-SU-2015:0743-1 mariadb 2015-04-21
Mandriva MDVSA-2015:062 openssl 2015-03-27
Fedora FEDORA-2014-17576 mingw-openssl 2015-01-02
Fedora FEDORA-2014-17587 mingw-openssl 2015-01-02
Oracle ELSA-2014-1652 openssl 2014-10-16
Gentoo 201407-05 openssl 2014-07-28
Oracle ELSA-2014-0679 openssl 2014-07-23
Red Hat RHSA-2014:0679-01 openssl 2014-06-10
SUSE SUSE-SU-2014:0762-1 OpenSSL 1.0 2014-06-06
Slackware SSA:2014-156-03 openssl 2014-06-05
Scientific Linux SLSA-2014:0625-1 openssl 2014-06-05
Oracle ELSA-2014-0625 openssl 2014-06-05
Fedora FEDORA-2014-7102 openssl 2014-06-05
Fedora FEDORA-2014-7101 openssl 2014-06-05
CentOS CESA-2014:0625 openssl 2014-06-05
Red Hat RHSA-2014:0625-01 openssl 2014-06-05
Debian DSA-2931-1 openssl 2014-05-18
openSUSE openSUSE-SU-2014:0635-1 openssl 2014-05-13
openSUSE openSUSE-SU-2014:0634-1 openssl 2014-05-13
Mandriva MDVSA-2014:080 openssl 2014-05-08
Ubuntu USN-2192-1 openssl 2014-05-05
Mageia MGASA-2014-0204 openssl 2014-05-03

Comments (none posted)

openstack-glance: command execution

Package(s):openstack-glance CVE #(s):CVE-2014-0162
Created:May 1, 2014 Updated:May 13, 2014
Description: From the Red Hat advisory:

It was found that Sheepdog, a distributed object storage system, did not properly validate Sheepdog image URIs. A remote attacker able to insert or modify glance image metadata could use this flaw to execute arbitrary commands with the privileges of the user running the glance service. Note that only OpenStack Image setups using the Sheepdog back end were affected.

Fedora FEDORA-2014-5198 openstack-glance 2014-05-13
Ubuntu USN-2193-1 glance 2014-05-05
Red Hat RHSA-2014:0455-01 openstack-glance 2014-04-30

Comments (none posted)

php: privilege escalation

Package(s):php CVE #(s):CVE-2014-0185
Created:May 6, 2014 Updated:October 6, 2015
Description: From the Red Hat bugzilla:

It was reported that, on some distributions, PHP FPM (a FastCGI Process Manager for PHP) used a UNIX socket with insecure, default permissions. This would allow local users to execute PHP scripts with the privileges of the "apache" user. This is a similar situation to using mod_php where users can place scripts in their "~/public_html/" directory.

openSUSE openSUSE-SU-2015:1685-1 froxlor 2015-10-06
Fedora FEDORA-2015-4216 php 2015-03-31
Mandriva MDVSA-2015:080 php 2015-03-28
Gentoo 201408-11 php 2014-08-29
Ubuntu USN-2254-2 php5 2014-06-25
Ubuntu USN-2254-1 php5 2014-06-23
Slackware SSA:2014-160-01 php 2014-06-09
Debian DSA-2943-1 php5 2014-06-01
Mandriva MDVSA-2014:087 php 2014-05-15
Mageia MGASA-2014-0215 php 2014-05-15
Fedora FEDORA-2014-5984 php 2014-05-12
Fedora FEDORA-2014-5960 php 2014-05-06
openSUSE openSUSE-SU-2014:0786-1 php5 2014-06-12
openSUSE openSUSE-SU-2014:0784-1 php5 2014-06-12

Comments (none posted)

python-fedora: two vulnerabilities

Package(s):python-fedora CVE #(s):
Created:May 7, 2014 Updated:May 22, 2014
Description: From the Fedora advisory:

Fix two security issues for services using python-fedora's TG1 and flask helpers.

The TG1 fix quotes variables that could have been used to launch an XSS attack.

The flask fix addresses OpenID Covert Redirect for web services which use flask_fas_openid to authenticate against the Fedora Account System.

Fedora FEDORA-2014-5948 python-fedora 2014-05-21
Fedora FEDORA-2014-5962 python-fedora 2014-05-06

Comments (none posted)

python-lxml: code injection

Package(s):python-lxml CVE #(s):CVE-2014-3146
Created:May 5, 2014 Updated:March 29, 2015
Description: From the Red Hat bugzilla:

The lxml.html.clean module cleans up HTML by removing embedded or script content, special tags, CSS style annotations and much more. It was found that the clean_html() function, provided by the lxml.html.clean module, did not properly clean HTML input if it included non-printed characters (\x01-\x08). A remote attacker could use this flaw to serve malicious content to an application using the clean_html() function to process HTML, possibly allowing the attacker to inject malicious code into a website generated by this application.

Mandriva MDVSA-2015:112 python-lxml 2015-03-29
Debian DSA-2941-1 lxml 2014-06-01
openSUSE openSUSE-SU-2014:0735-1 python-lxml 2014-05-30
Ubuntu USN-2217-1 lxml 2014-05-21
Mandriva MDVSA-2014:088 python-lxml 2014-05-15
Mageia MGASA-2014-0218 python-lxml 2014-05-15
Fedora FEDORA-2014-5801 python-lxml 2014-05-08
Fedora FEDORA-2014-5773 python-lxml 2014-05-02

Comments (none posted)

python3: privilege escalation

Package(s):python3 CVE #(s):CVE-2014-2667
Created:May 2, 2014 Updated:January 6, 2015

Description: From the openSUSE bug report:

It was reported that a patch added to Python 3.2 caused a race condition where a file could be created with world read/write permissions instead of the permissions dictated by the original umask of the process. This could allow a local attacker who wins the race to view and edit files created by a program using this call. Note that prior versions of Python, including 2.x, do not include the vulnerable _get_masked_mode() function that is used by os.makedirs() when exist_ok is set to True.

Mandriva MDVSA-2015:076 python3 2015-03-27
Gentoo 201503-10 python 2015-03-18
Fedora FEDORA-2014-16479 python3 2015-01-06
Fedora FEDORA-2014-16393 python3 2014-12-12
Mageia MGASA-2014-0216 python3 2014-05-15
openSUSE openSUSE-SU-2014:0596-1 python3 2014-05-02
openSUSE openSUSE-SU-2014:0597-1 python3 2014-05-02

Comments (none posted)

qt: denial of service

Package(s):qt CVE #(s):CVE-2014-0190
Created:May 2, 2014 Updated:December 15, 2014

Description: From the Fedora bug tracker:

A NULL pointer dereference flaw was found in QGIFFormat::fillRect. If an application using the qt-x11 libraries opened a malicious GIF file, it could cause the application to crash.

Ubuntu USN-2626-1 qt4-x11, qtbase-opensource-src 2015-06-03
openSUSE openSUSE-SU-2015:0573-1 kdebase4-runtime, 2015-03-23
Gentoo 201412-25 qtgui 2014-12-13
Mageia MGASA-2014-0263 qt3 2014-06-18
Mageia MGASA-2014-0241 qt4 and qtbase5 2014-05-29
Mageia MGASA-2014-0240 qt4 2014-05-29
Fedora FEDORA-2014-6083 qt 2014-05-23
Fedora FEDORA-2014-5999 mingw-qt5-qtbase 2014-05-13
Fedora FEDORA-2014-5988 mingw-qt5-qtbase 2014-05-13
Fedora FEDORA-2014-6028 mingw-qt 2014-05-13
Fedora FEDORA-2014-6003 mingw-qt 2014-05-13
Fedora FEDORA-2014-5710 qt5-qtbase 2014-05-06
Fedora FEDORA-2014-5680 qt5-qtbase qt 2014-05-06
Fedora FEDORA-2014-5695 qt 2014-05-01
Fedora FEDORA-2014-6896 qt3 2014-06-10
Fedora FEDORA-2014-6922 qt3 2014-06-10

Comments (none posted)

rxvt-unicode: command execution

Package(s):rxvt-unicode CVE #(s):CVE-2014-3121
Created:May 5, 2014 Updated:June 25, 2014
Description: From the Mageia advisory:

rxvt-unicode (aka urxvt) before 9.20 is vulnerable to a user-assisted arbitrary command execution issue. This can be exploited by the unprocessed display of certain escape sequences in a crafted text file or program output. Arbitrary command sequences can be constructed using this, and unintentionally executed if used in conjunction with various other escape sequences.

SUSE SUSE-SU-2014:0838-1 rxvt-unicode 2014-06-24
Gentoo 201406-18 rxvt-unicode 2014-06-19
openSUSE openSUSE-SU-2014:0814-1 rxvt-unicode 2014-06-18
Mandriva MDVSA-2014:094 rxvt-unicode 2014-05-16
Fedora FEDORA-2014-5938 rxvt-unicode 2014-05-12
Fedora FEDORA-2014-5939 rxvt-unicode 2014-05-12
Debian DSA-2925-1 rxvt-unicode 2014-05-08
Mageia MGASA-2014-0202 rxvt-unicode 2014-05-02

Comments (none posted)

strongswan: denial of service

Package(s):strongswan CVE #(s):CVE-2014-2891
Created:May 5, 2014 Updated:May 7, 2014
Description: From the Debian advisory:

A vulnerability has been found in the ASN.1 parser of strongSwan, an IKE/IPsec suite used to establish IPsec protected links.

By sending a crafted ID_DER_ASN1_DN ID payload to a vulnerable pluto or charon daemon, a malicious remote user can provoke a null pointer dereference in the daemon parsing the identity, leading to a crash and a denial of service.

Gentoo 201412-26 strongswan 2014-12-13
openSUSE openSUSE-SU-2014:0700-1 strongswan 2014-05-22
openSUSE openSUSE-SU-2014:0697-1 strongswan 2014-05-22
Debian DSA-2922-1 strongswan 2014-05-05

Comments (none posted)

struts: code execution

Package(s):struts CVE #(s):CVE-2014-0114
Created:May 7, 2014 Updated:July 20, 2016
Description: From the Red Hat advisory:

It was found that the Struts 1 ActionForm object allowed access to the 'class' parameter, which is directly mapped to the getClass() method. A remote attacker could use this flaw to manipulate the ClassLoader used by an application server running Struts 1. This could lead to remote code execution under certain conditions.

Fedora FEDORA-2014-9380 struts 2014-08-23
Debian DSA-2940-1 libstruts1.2-java 2014-08-21
SUSE SUSE-SU-2014:0902-1 struts 2014-07-16
Mandriva MDVSA-2014:095 struts 2014-05-16
Mageia MGASA-2014-0219 struts 2014-05-15
Oracle ELSA-2014-0474 struts 2014-05-07
Scientific Linux SLSA-2014:0474-1 struts 2014-05-07
CentOS CESA-2014:0474 struts 2014-05-07
Red Hat RHSA-2014:0474-01 struts 2014-05-07
Gentoo 201607-09 commons-beanutils 2016-07-20

Comments (none posted)

varnish: world-readable log files

Package(s):varnish CVE #(s):CVE-2013-0345
Created:May 6, 2014 Updated:May 7, 2014
Description: From the Red Hat bugzilla:

Agostino Sarubbo reported on the oss-security mailing list that, on Gentoo, /var/log/varnish is world-accessible and the log files inside the directory are world-readable. This could allow an unprivileged user to read the log files.

Checking on Fedora and EPEL, /var/log/varnish is provided with 0755 permissions. These should be reduced to 0700 permissions, like /var/log/httpd.

Gentoo 201412-30 varnish 2014-12-15
Fedora FEDORA-2013-24018 varnish 2014-05-06
Fedora FEDORA-2013-24023 varnish 2014-05-06

Comments (none posted)

xbuffy: code execution

Package(s):xbuffy CVE #(s):CVE-2014-0469
Created:May 5, 2014 Updated:May 7, 2014
Description: From the Debian advisory:

Michael Niedermayer discovered a vulnerability in xbuffy, a utility for displaying message counts for mailbox and newsgroup accounts.

By sending carefully crafted messages to a mail or news account monitored by xbuffy, an attacker can trigger a stack-based buffer overflow, leading to an xbuffy crash or even remote code execution.

Debian DSA-2921-1 xbuffy 2014-05-04

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.15-rc4, released on May 4. According to Linus: "There's a few known things pending still (pending fix for some interesting dentry list corruption, for example - not that any remotely normal use will likely ever hit it), but on the whole things are fairly calm and nothing horribly scary. We're in the middle of the calming-down period, so that's just how I like it."

Stable updates: 3.14.3, 3.10.39, and 3.4.89 were released on May 6 with the usual set of important fixes.

Comments (none posted)

Quotes of the week

Every new configuration combination is a new situation that competes testing wise with the others.
David Miller

There's a stigma rightfully attached to out-of-tree patches, which roughly amounts to "people ought to submit patches upstream, we shouldn't have to support or care about out-of-tree patches". But that only works if the responses to patch submissions are either "No, because you need to fix X, Y, and Z", or "No, because your use case is better served by this existing mechanism already in the kernel", rather than "No, your use case is not valid".
Josh Triplett

If you are using glibc and GNU tools it isn't going to work, but long ago tools were written which just did the job they were supposed to do at the time and were small and tidy. Programmers were expected to use shell scripts to combine them for harder jobs rather than be the one person a year who invoked gnu-wibble --format-sideways-while-singing --tune=waltzing-matilda
Alan Cox

And as for announcing [long-term stable releases] ahead of time, I'm never going to do that again, the aftermath was horrid of people putting stuff that shouldn't be there. Heck, when people know about what the enterprise kernels are going to be, they throw stuff into upstream "early", so it's a well-known pattern and issue.
Greg Kroah-Hartman

Comments (6 posted)

Kernel Summit 2014 Call for Topics

The 2014 Kernel Summit will be held August 18 to 20 in Chicago, alongside LinuxCon North America. The call for topics (which is also a call for potential invitees) has gone out; there is a soft deadline of May 15 for topic suggestions.

Full Story (comments: none)

GlusterFS 3.5 released

Version 3.5 of the GlusterFS cluster filesystem has been released. New features include better logging, the ability to take snapshots of individual files (full volumes cannot yet be snapshotted), on-the-wire compression, on-disk encryption, and improved geo-replication support.

Comments (none posted)

The possible demise of remap_file_pages()

By Jonathan Corbet
May 7, 2014
The remap_file_pages() system call is a bit of a strange beast; it allows a process to create a complicated, non-linear mapping between its address space and an underlying file. Such mappings can also be created with multiple mmap() calls, but the in-kernel cost is higher: each mmap() call creates a separate virtual memory area (VMA) in the kernel, while remap_file_pages() can get by with just one. If the mapping has a large number of discontinuities, the difference on the kernel side can be significant.

That said, there are few users of remap_file_pages() out there. So few that Kirill Shutemov has posted a patch set to remove it entirely, saying "Nonlinear mappings are pain to support and it seems there's no legitimate use-cases nowadays since 64-bit systems are widely available." The patch is not something he is proposing for merging yet; it's more of a proof of concept at this point.

It is easy to see the appeal of this change; it removes 600+ lines of tricky code from the kernel. But that removal will go nowhere if it constitutes an ABI break. Some kernel developers clearly believe that no users will notice if remap_file_pages() goes away, but going from that belief to potentially breaking applications is a big step. So there is talk of adding a warning to the kernel; Peter Zijlstra suggested going a step further and requiring that a sysctl knob be set to make the system call active. But it would also help if current users of remap_file_pages() would make themselves known; speaking now could save some trouble in the future.

Comments (7 posted)

Kernel development news

The first kpatch submission

By Jonathan Corbet
May 7, 2014
It is spring in the northern hemisphere, so a young kernel developer's thoughts naturally turn to … dynamic kernel patching. Last week saw the posting of SUSE's kGraft live-patching mechanism; shortly thereafter, developers at Red Hat came forward with their competing kpatch mechanism. The approaches taken by the two groups show some interesting similarities, but also some significant differences.

Like kGraft, kpatch replaces entire functions within a running kernel. A kernel patch is processed to determine which functions it changes; the kpatch tools (not included with the patch, but available in this repository) then use that information to create a loadable kernel module containing the new versions of the changed functions. A call to the new kpatch_register() function within the core kpatch code will use the ftrace function tracing mechanism to intercept calls to the old functions, redirecting control to the new versions instead. So far, it sounds a lot like kGraft, but that resemblance fades a bit once one looks at the details.

KGraft goes through a complex dance during which both the old and new versions of a replaced function are active in the kernel; this is done in order to allow each running process to transition to the "new universe" at a (hopefully) safe time. Kpatch is rather less subtle: it starts by calling stop_machine() to bring all other CPUs in the system to a halt. Then, kpatch examines the stack of every process running in kernel mode to ensure that none are running in the affected function(s); should one of the patched functions be active, the patch-application process will fail. If things are OK, instead, kpatch patches out the old functions completely (or, more precisely, it leaves an ftrace handler in place that routes around the old function). There is no tracking of whether processes are in the "old" or "new" universe; instead, everybody is forced to the new universe immediately if it is possible.

There are some downsides to this approach. stop_machine() is a massive sledgehammer of a tool; kernel developers prefer to avoid it if at all possible. If kernel code is running inside one of the target functions, kpatch will simply fail; kGraft, instead, will work to slowly patch the system over to the new function, one process at a time. Some functions (examples would include schedule(), do_wait(), or irq_thread()) are always running somewhere in the kernel, so kpatch cannot be used to apply a patch that modifies them. On a typical system, there will probably be a few dozen functions that can block a live patch in this way — a pretty small subset of the thousands of functions in the kernel.

While kpatch, with its use of stop_machine(), may seem heavy-handed, there are some developers who would like to see it take an even stronger approach initially: Ingo Molnar suggested that it should use the process freezer (normally used when hibernating the system) to be absolutely sure that no processes have any running state within the kernel. That would slow live kernel patching even more, but, as he put it:

Well, if distros are moving towards live patching (and they are!), then it looks rather necessary to me that something scary as flipping out live kernel instructions with substantially different code should be as safe as possible, and only then fast.

The hitch with this approach, as noted by kpatch developer Josh Poimboeuf, is that there are a lot of unfreezable kernel threads. Frederic Weisbecker suggested that the kernel thread parking mechanism could be used instead. Either way, Ingo thought, kernel threads that prevented live patching would be likely to be fixed in short order. There was not a consensus in the end on whether freezing or parking kernel threads was truly necessary, but opinion did appear to be leaning in the direction of being slow and safe early on, then improving performance later.

The other question that has come up has to do with patches that change the format or interpretation of in-kernel data. KGraft tries to handle simple cases with its "universe" mechanism but, in many situations, something more complex will be required. According to kGraft developer Jiri Kosina, there is a mechanism in place to use a "band-aid function" that understands both forms of a changed data structure until all processes have been converted to the new code. After that transition has been made, the code that writes the older version of the changed data structure can be patched out, though it may be necessary to retain code that reads older data structures until the next reboot.

On the kpatch side, instead, there is currently no provision for making changes to data structures at all. The plan for the near future is to add a callback that can be packaged with a live patch; its job would be to search out and convert all affected data structures while the system is stopped and the patch is being applied. This approach has the potential to work without the need for maintaining the ability to cope with older data structures, but only if all of the affected structures can be located at patching time — a tall order, in many cases.

The good news is that few patches (of the type that one would consider for live patching) make changes to kernel data structures. As Jiri put it:

We've done some very light preparatory analysis and went through patches which would make most sense to be shipped as hot/live patches without enough time for proper downtime scheduling (i.e. CVE severity high enough (local root), etc). Most of the time, these turn out to be a one-or-few liners, mostly adding extra check, fixing bounds, etc. There were just one or two in a few years history where some extra care would be needed.

So the question of safely handling data-related changes can likely be deferred for now while the question of how to change the code in a running kernel is answered. There have already been suggestions that this topic should be discussed at the 2014 Kernel Summit in August. It is entirely possible, though, that the developers involved will find a way to combine their approaches and get something merged before then. There is no real disagreement over the end goal, after all; it's just a matter of finding the best approach for the implementation of that goal.

Comments (8 posted)

Porting Linux to a new architecture

By Jake Edge
May 7, 2014
Embedded Linux Conference

While it's certainly not an everyday occurrence, getting Linux running on a new CPU architecture needs to be done at times. To someone faced with that task, it may seem rather daunting—and it is—but, as Marta Rybczyńska described in her Embedded Linux Conference (ELC) talk, there are some fairly straightforward steps to follow. She shared those steps, along with many things that she and her Kalray colleagues learned as they ported Linux to the MPPA 256 processor.

When the word "porting" is used, it can mean one of three different things, she said. It can be a port to a new board with an already-supported processor on it. Or it can be a new processor from an existing, supported processor family. The third alternative is to port to a completely new architecture, as with the MPPA 256 (aka K1).

[Marta Rybczyńska]

With a new architecture comes a new CPU instruction set. If there is a C compiler, as there was for her team, then you can recompile the existing (non-arch) kernel C code (hopefully, anyway). Any assembly pieces need to be rewritten. There will be a different memory map and possibly new peripherals. That requires configuring existing drivers to work in a new way or writing new drivers from scratch. Also, when people make the effort to create a new architecture, they don't do that just for fun, Rybczyńska said. There will be benefits to the new architecture, so there will be opportunities to optimize the existing system to take advantage of it.

There are several elements that are common to any port. First, you need build tools, such as GCC and binutils. Next, there is the kernel, both its core code and drivers. There are important user-space libraries that need to be ported, such as libc, libm, pthreads, etc. User-space applications come last. Most people start with BusyBox as the first application, then port other applications one by one.

Getting started

To get started, you have to learn about the new architecture, she said. The K1 is a massively multi-core processor with both high performance and high energy efficiency. It has 256 cores that are arranged in groups of sixteen cores, which share memory and an MMU. There are Network-on-Chip interfaces to communicate between the groups. Each core has the same very long instruction word (VLIW) instruction set, which can bundle up to five instructions to be executed in one cycle. The cores have advanced bitwise instructions, hardware loops, and a floating point unit (FPU). While the FPU is not particularly important for porting the kernel, it will be needed to port user-space code.

To begin, you create an empty directory (linux/arch/k1 in her case), but then you need to fill it, of course. The initial files needed are less than might be expected, Rybczyńska said. Code is needed first to configure the processor, then to handle the memory map, which includes configuring the zones and initializing the memory allocators. Handling processor mode changes is next up: interrupt and trap handlers, including the clock interrupt, need to be written, as does code to handle context switches. There is some device tree and Kconfig work to be done as well. Lastly, adding a console to get printk() output is quite useful.
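Those initial pieces map onto a fairly small set of files. The layout below is purely illustrative — the exact names vary by architecture and this is not Kalray's actual tree — but it follows the conventions other ports use:

```
arch/k1/
├── Kconfig              # arch options and selects
├── Makefile
├── kernel/
│   ├── head.S           # early entry: set up a stack, clear BSS, call start_kernel()
│   ├── setup.c          # memory map, zones, early allocator initialization
│   ├── irq.c, traps.c   # interrupt and trap handlers, clock interrupt
│   ├── process.c        # context switch support
│   └── early_printk.c   # console so printk() output is visible
├── mm/init.c            # memory-management bring-up
├── include/asm/         # ptrace.h, processor.h, irqflags.h, ...
└── boot/dts/            # device tree sources
```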

To create that code, there are a couple of different routes. There is not that much documentation on this early boot code, so there is a tendency to copy and paste code from existing architectures. Kalray used several as templates along the way, including MicroBlaze, Blackfin, and OpenRISC. If code cannot be found to fit the new architecture, it will have to be written from scratch. That often requires reading other architecture manuals and code—Rybczyńska can read the assembly language for several architectures she has never actually used.

There is a tradeoff between writing assembly code vs. C code for the port. For the K1, the team opted for as much C code as possible because it is difficult to properly bundle multiple instructions into a single VLIW bundle by hand. GCC handles it well, though, so the K1 port uses compiler built-ins in preference to inline assembly. She said that the K1 has less assembly code than any other architecture in the kernel.

Once that is all in place, at some point you will get the (sometimes dreaded) "Failed to execute /init" error message. This is actually a "big success", she said, as it means that the kernel has booted. Next up is porting an init, which requires a libc. For the K1, they ported uClibc, but there are other choices, of course. She suggested that the first versions of init be statically linked, so that no dynamic loader is required.

Porting a libc means that the kernel-to-user-space ABI needs to be nailed down. At program startup, which values will be in what registers? Where will the stack be located? And so on. Basically, it requires work in both the kernel and libc "to make them work together". System calls also need attention: numbers must be assigned to the calls, and it must be decided how the arguments will be passed (in registers? on the stack?). Signals will need some work as well, but if the early applications being ported don't use signals, only basic support needs to be added, which makes things much simpler.

Kalray created an instruction set simulator for the K1, which was helpful in debugging. The simulator can show every single instruction with the value in each register. It is "handy and fast", Rybczyńska said, and was a great help when doing the port.

Eventually, booting into the newly ported init will be possible. At that point, additional user-space executables are on the agenda. Again she suggested starting out with static binaries. Work on the dynamic loader required "lots of work on the compiler and binutils", at least for the K1. Also needed is porting or writing drivers for the main peripherals that will be used.


Rybczyńska stressed that testing is "easily forgotten", but is important to the process. When changes are made, you need to ensure you didn't break things that were already working. Her team started by trying to create unit tests from the kernel code, but determined that was hard to do. Instead, they created a "test init" that contained some basic tests of functionality. It is a "basic validation that all of the tools, libc, and the kernel are working correctly", she said.

Further testing of the kernel is required as well, of course. The "normal idea" is to write your own tests, she said, but it would take months just to create tests for all of the system calls. Instead, the K1 team used existing tests, especially those from the Linux Test Project (LTP). It is a "very active project" with "tests for nearly everything", she said; using LTP was much better than trying to write their own tests.

Continuing on is just a matter of activating new functionality (e.g. a new kernel subsystem, filesystem, or driver), fixing things that don't compile, then fixing any functionality that doesn't work. Test-driven development "worked very well for us".

As an example, she described the process undertaken to port strace, which she called a nice debugging tool that is much less verbose than the instruction set simulator. But strace uses the ptrace() system call and requires support for signals. Up until that point, there had not been a need to support signals. The ptrace() tests in LTP were run first, then strace was tried. It compiled easily, but didn't work as there were architecture-specific pieces of the ptrace() code that still needed to be implemented.

Supporting a new architecture requires new code to enable the special features of the chip. For Kalray, the symmetric multi-processing (SMP) and MMU code required a fair amount of time to design and implement. The K1 also has the Network-on-Chip (NoC) subsystem, which is brand new to the kernel. Supporting that took a lot of internal discussion to create something that worked correctly and performed reasonably. The NoC connects the groups of cores, so its performance is integral to the overall performance of the system.

Once the port matures, building a distribution may be next up. One way is to "do it yourself", which is "fine if you have three packages", Rybczyńska said. But if you have more packages than that, it becomes a lot less fun to do it that way. Kalray is currently using Buildroot, which was "easy to set up". The team is now looking at the Yocto Project as another possibility.

Lessons learned

The team learned a number of valuable lessons in doing the port. To start with, it is important to break the work up into stages. That allows you to see something working along the way, which indicates progress being made, but it also helps with debugging. "Test test test", she said, and do it right from the beginning. There are subtle bugs that can be introduced in the early going and, if you aren't testing, you won't catch them early enough to easily figure out where they were introduced.

Wherever possible, use generic functionality already provided by the kernel or other tools; don't roll your own unless you have to. Adhere to the kernel coding style from the outset. She suggested using panic() and exit() liberally, including in every function that has not yet been implemented; that helps avoid wasting time debugging problems that aren't actually problems. Prefer code that fails to compile when the architecture is unknown: if an application has architecture dependencies, a compile failure is much easier to diagnose than some strange runtime failure.

Spend time developing advanced debugging techniques and tools. For example, they developed a visualization tool that showed kernel threads being activated during the boot process. Reading the documentation is important, as is reading the comments in the code. Her last tip was that reading code for other platforms is quite useful, as well.

With that, she answered a few questions from the audience. The port took about two months to get to the point of booting the first init, she said; the rest "takes much more time". The port is completely self-contained, as there are no changes to the generic kernel. Her hope is to submit the code upstream as soon as possible, noting that being out of the mainline can lead to problems (as they encountered with a pointer type in the tty functions when upgrading to 3.8). While Linux is not shipping yet for the K1, it will be soon. The K1 is currently shipping with RTEMS, which was easier to port, so it filled the operating-system role while the Linux port was being completed, she said.

Slides [PDF] from Rybczyńska's talk are available on the ELC slides page.

Comments (2 posted)

Networking on tiny machines

By Jonathan Corbet
May 7, 2014
Last week's article on "Linux and the Internet of Things" discussed the challenge of shrinking the kernel to fit on to computers that, by contemporary standards, are laughably underprovisioned. Shortly thereafter, the posting of a kernel-shrinking patch set sparked a related discussion: what needs to be done to get the kernel to fit into tiny systems and, more importantly, is that something that the kernel development community wants to even attempt?

Shrinking the network stack

The patch set in question was a 24-part series from Andi Kleen adding an option to build a minimally sized networking subsystem. Andi is looking at running Linux on systems with as little as 2MB of memory installed; on such systems, the Linux kernel's networking stack, which weighs in at about 400KB for basic IPv4 support, is just too big to shoehorn in comfortably. By removing a lot of features, changing some data structures, and relying on the link-time optimization feature to remove the (now) unneeded code, Andi was able to trim things down to about 170KB. That seems like a useful reduction, but, as we will see, these changes have a rough road indeed ahead of them before any potential merge into the mainline.
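Patch sets of this kind typically work by hiding features behind new configuration options, then letting link-time optimization discard the code those options guard. The Kconfig fragment below shows the pattern only; the option names are invented here and are not those from Andi's actual series:

```
# Hypothetical sketch -- illustrative names, not the real patch set.
config NET_SMALL
	bool "Build a minimally sized network stack"
	depends on EXPERT
	help
	  Omit optional networking features to save memory on
	  machines with only a few megabytes of RAM.

config NET_PING_SOCKETS
	bool "Unprivileged ICMP echo sockets"
	default y
	depends on !NET_SMALL
```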

Some of the changes in Andi's patch set include:

  • Removal of the "ping socket" feature that allows a non-setuid ping utility to send ICMP echo packets. It's a useful feature in a general-purpose distribution, but it's possibly less useful in a single-purpose tiny machine that may not even have a ping binary. Nonetheless the change was rejected: "We want to move away from raw sockets, and making this optional is not going to help us move forward down that path."

  • Removal of raw sockets, saving about 5KB of space. Rejected: "Sorry, you can't have half a functioning ipv4 stack."

  • Removal of the TCP fast open feature. That feature takes about 3KB to implement, but it also requires the kernel to have the crypto subsystem and AES code built in. Rejected: "It's for the sake of the remote service not the local client, sorry I'm not applying this, it's a facility we want to be ubiquitous and in widespread use on as many systems as possible."

  • Removal of the BPF packet filtering subsystem. Rejected: "I think you highly underestimate how much 'small systems' use packet capturing and thus BPF."

  • Removal of the MIB statistics collection code (normally accessed via /proc) when /proc is configured out of the kernel. Rejected: "Congratulations, you just broke ipv6 device address netlink dumps amongst other things."

The above list could be made much longer, but the point should be apparent by now: this patch set was not welcomed by the networking community with open arms. This community has been working with a strong focus on performance and features on contemporary hardware; networking developers (some of them, at least) do not want to be bothered with the challenges of trying to accommodate users of tiny systems. As Eric Dumazet put it:

I have started using linux on 386/486 pcs which had more than 2MB of memory, it makes me sad we want linux-3.16 to run on this kind of hardware, and consuming time to save few KB here and here.

The networking developers also do not want to start getting bug reports from users of a highly pared-down networking stack wondering why things don't work anymore. Some of that would certainly happen if a patch set like this one were to be merged. One can try to imagine which features are absolutely necessary and which are optional on tiny systems, but other users solving different problems will come to different conclusions. A single "make it tiny" option has a significant chance of providing a network stack with 99% of what most tiny-system users need — but the missing 1% will be different for each of those users.

Should we even try?

Still, pointing out some difficulties inherent in this task is different from saying that the kernel should not try to support small systems at all, but that appears to be the message coming from the networking community. At one point in the discussion, Andi posed a direct question to networking maintainer David Miller: "What parts would you remove to get the foot print down for a 2MB single purpose machine?" David's answer was simple: "I wouldn't use Linux, end of story. Maybe two decades ago, but not now, those days are over." In other words, from his point of view, Linux should not even try to run on machines of that class; instead, some sort of specialty operating system should be used.

That position may come as a bit of a surprise to many longtime observers of the Linux development community. As a general rule, kernel developers have tried to make the system work on just about any kind of hardware available. The "go away and run something else" answer has, on rare occasion, been heard with regard to severely proprietary and locked-down hardware, but, even in those cases, somebody usually makes it work with Linux. In this case, though, there is a class of hardware that could run Linux, with users who would like to run Linux, but some kernel developers are telling them that there is no interest in adding support for them. This is not a message that is likely to be welcomed in those quarters.

Once upon a time, vendors of mainframes laughed at minicomputers — until many of their customers jumped over to the minicomputer market. Minicomputer manufacturers treated workstations, personal computers, and Unix as toys; few of those companies are with us now. Many of us remember how the proprietary Unix world treated Linux in the early days: they dismissed it as an underpowered toy, not to be taken seriously. Suffice to say that we don't hear much from proprietary Unix now. It's a classic Innovator's Dilemma story of disruptive technologies sneaking up on incumbents and eating their lunch.

It is not entirely clear that microscopic systems represent this type of disruptive technology; the "wait for the hardware to grow up a bit" approach has often worked well for Linux in the past. It is usually safe to bet on computing hardware increasing in capability over time, so effort put into supporting underpowered systems is often not worth it. But we may be dealing with a different class of hardware here, one where "smaller and cheaper" is more important than "more powerful." If these systems can be manufactured in vast numbers and spread like "smart dust," they may well become a significant part of the computing substrate of the future.

So the possibility that tiny systems could be a threat to Linux should certainly be considered. If Linux is not running on those devices, something else will be. Perhaps it will be a Linux kernel with the networking stack replaced entirely by a user-space stack like lwIP, or perhaps it will be some other free operating system whose community is more interested in supporting this hardware. Or, possibly, it could be something proprietary and unpleasant. However things go, it would be sad to look back someday and realize that the developers of Linux could have made the kernel run on an important class of machines, but they chose not to.

Comments (91 posted)


Page editor: Jonathan Corbet


Debian and application-menu policies

By Nathan Willis
May 7, 2014

The Debian project is engaged in a debate over how the Debian menu is presented in various desktop environments and what policy for application packaging should be derived from that decision. The recent trend in desktop environments has been away from a master "applications menu," but Debian's response to that shift has as much to do with internal project-management processes as it does with updating packaging recommendations.

Back in May 2013, the issue of moving away from the Debian menu (which contains a categorical hierarchy of application launchers, as was a common feature of most desktop environments in years past) was first raised in a bug filed by Sune Vuorela. The original goal of the Debian menu was to provide a consistent way to access installed applications, regardless of which of Debian's many available desktop environments was in use. Over the years, however, the style and interface conventions of those various environments have shifted away from the one-master-menu approach.

Vuorela's bug report noted that the Debian menu was now hidden by default in GNOME, that a similar change was under consideration for KDE, and that many application packages had stopped including the menu-entry description files needed by the Debian menu in the first place. The recommended change was to soften the language found in Debian's official policy manual that told application packagers they should create a menu entry; instead, creating a menu entry would be an option, but the more important factor would be creating a .desktop file that could be used by the search-driven interfaces of GNOME Shell and other recent desktop environments (though it could also be used by menu-driven environments).

Of course, the Debian menu itself has had both fans and critics over the years. When shown, it frequently presents duplicates of a number of entries also found in the GNOME or KDE menu structures, which seems superfluous. On the other hand, Debian can control the content of the menu to a much greater extent, which makes it more predictable—it is always possible to find a utility in the Debian menu, the argument goes, even if the same utility gets removed or reclassified into a hard-to-find spot in the desktop environment's menu structure.

But much of the work of making the Debian menu usable fell on application packagers, who were tasked with creating the menu file for each package, and the Debian menu-entry format uses a distribution-specific syntax. Meanwhile, GNOME, KDE, and the vast majority of other desktop environments have agreed on the desktop entry specification for .desktop files, which covers most of the same metadata for each application.
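The difference is easy to see side by side. For a hypothetical package "frobnicate" (all values below are invented for the illustration), the Debian-specific menu entry and the cross-desktop .desktop file carry largely the same metadata:

```
# Debian menu entry (/usr/share/menu/frobnicate), distribution-specific syntax:
?package(frobnicate): needs="X11" \
    section="Applications/Science/Data Analysis" \
    title="Frobnicate" command="/usr/bin/frobnicate" \
    icon="/usr/share/pixmaps/frobnicate.xpm"

# Cross-desktop equivalent (/usr/share/applications/frobnicate.desktop):
[Desktop Entry]
Type=Application
Name=Frobnicate
Comment=Analyze frobnication data
Exec=frobnicate %f
Icon=frobnicate
Categories=Science;DataVisualization;
```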

Eventually, a proposal formed to recommend that packagers migrate away from the Debian menu-entry format to the .desktop format—in essence, deprecating the Debian menu in favor of the .desktop system (however it might be implemented in any particular desktop environment). In early 2014, the proposal picked up steam again (after several months of inactivity) as a possible change to be included in the upcoming Debian "jessie" release, and on February 15, Charles Plessy checked in the change to the policy manual.

Not everyone was satisfied, however. Bill Allombert (who along with Andreas Barth, Jonathan Nieder, and Russ Allbery, is a Policy Editor) reverted the change on February 25, arguing that a number of objections to the proposal had not been addressed, and, thus, "there is no consensus in favor of this change, so committing to policy was premature." Vuorela and others disagreed, insisting that all of the objections listed by Allombert had been answered, and concluding that "there is a consensus. Note that consensus doesn't mean unanimous."

Plessy contended that Allombert had sat out the discussion over the preceding year, and that a majority of Debian Developers and Policy Editors had approved it. Stepping in after the decision and reverting it, he said, amounted to single-handedly vetoing the change. Plessy suggested taking the issue to the Debian Technical Committee (TC) for a resolution and, after another reply from Allombert did not seem to change matters, filed a bug on the subject with the TC on March 14.

While escalating the issue to the TC might seem to focus on the issue of whether one member of a team (in this case, the Policy Editors) can overrule a consensus reached by the others, that is not in fact the direction that the TC discussion took. As Ian Jackson said:

This is hardly the first time that a matter has come to the TC after a dispute has escalated to acts (on one or both sides) whose legitimacy is disputed. I doubt it will be the last. Our approach has always been to look at the underlying dispute and try to resolve it.

So, no. The TC will not make decisions about the content of policy on the basis of an adjudications about the policy process. We will rule on the underlying question(s), on the merits.

Instead, the TC discussion returned to the original question of whether or not the Debian menu remained a useful feature even in light of the increasing usage of standards by desktop environments. As was the case in the original bug's discussion thread, there are arguments from many different perspectives regarding situations where the "traditional" Debian menu might be better than a modern menu. They included technical concerns, like the fact that the Debian menu expects application icons in XPM format and at no greater size than 32-by-32 pixels (both of which make for a less-than-pleasing image on modern displays), and implementation concerns, such as who would be tasked with creating the .desktop files for the various application packages.

Also raised as a concern is the degree to which the policy manual specifies what developers and packagers must do versus what they optionally can do. Both the old policy wording and the patched version say that shipping applications "should" include a menu entry in the specified format. But "should" can be interpreted either as a requirement—meaning that an application without the menu entry it "should" have will not be included—or as a recommendation that does not necessarily demand every application conform.

And, on that point, the TC does not yet appear to have arrived at a consensus. Debian has thousands of packages; if the policy is changed in such a way that developers and packagers are required to create .desktop files, the result could potentially be thousands of hours of work. Imposing such a requirement with a simple wording change does not seem to be an ideal move.

On April 13, Allbery noted that the "should" wording had resulted in a logjam in the discussion. The next response came on May 5, when Plessy appealed to the TC to reach a decision:

More than one month and a half later, Bill still has not explained his position to the technical committee.

In that context, I am asking the TC to a) acknowledge that the changes to section 9.6 after the Policy changes process was followed accordingly, and b) ask for Bill's commit 378587 be reverted. In particular, in the absence of Bill's contribution to the resolution of our conflict, I am asking the TC to not discuss the menu systems and focus instead on correcting Bill's misbehaviour.

What is at a stake here is not the Debian Menu system, it is the fact that in Debian, it takes 5 minutes for one person to block one year of effort and patience from multiple other persons.

At this point, it is not clear what the TC will do next. Over the course of the "should" discussion, it became clear that Debian's policy manual is not entirely consistent in the wording with which it describes requirements and recommendations. Worse yet, it became clear that not all members of the project interpret that wording the same way, even when the dispute comes down to a single word. Whether or not further discussion can resolve those issues is hard to say, but Debian, at least, is no stranger to lengthy debates.

Comments (29 posted)

Brief items

Distribution quote of the week

Ultimately, if users want to see their distro thrive and survive they have got to roll their sleeves up and muck in a bit. Leaving it all to 'someone else' will not get the job done.
-- John Crisp

Comments (2 posted)

CyanogenMod 11.0 M6 is available

The developers of the CyanogenMod Android derivative have announced the availability of the 11.0 M6 release. The announcement also includes details about changes in the release scheme; there will be no more "stable" releases; instead, the project will attempt to produce reliable "M" releases with increasing frequency. "Our goal is to get a release out every 2 weeks with the same quality and expectations you would have of a ‘stable’ release (label for that yet undecided). But with that goal, it further underscores how the label ‘stable’ no longer works for us. With the current M cycle, we have gotten our routine down to every 4 weeks; to get it to 2 weeks is ambitious, but we can do it, and it would benefit everyone." See the announcement for a list of changes in this release.

Comments (129 posted)

OpenBSD 5.5

OpenBSD 5.5 has been released. OpenBSD is now ready for 2038. The entire source tree has been audited to support 64-bit time_t. This release also features numerous improvements across the distribution. See the changelog for a comprehensive list.

Comments (34 posted)

OpenMandriva Lx 2014.0

The OpenMandriva Community has announced the release of OpenMandriva Lx 2014.0 Phosphorus. "The kernel has been upgraded to 3.13.11 nrjQL – a powerful variant of the 3.13.11 kernel that has been configured with desktop system performance and responsiveness in mind. To achieve this the CPU and RCU have been configured with full preemption and boost mode, and the CK1 and BFQ patchsets have been added to provide further optimisations (including better CPU load and disk I/O schedulers, an improved memory manager using UKSM, and TuxOnIce providing suspension and hibernation services)." See the release notes for details.

Comments (none posted)

Distribution News

Debian GNU/Linux

Bits from the Release Team - Freeze, removals and archs

The Debian release team talks about freezing "jessie" in 6 months, long term support for "squeeze", architectures, and more.

Full Story (comments: none)

Ubuntu family

Ubuntu 12.10 (Quantal Quetzal) reaches End of Life

As of May 16, Ubuntu 12.10 will reach its end of life. There will be no more updates after that time. Users of 12.10 are encouraged to upgrade to 14.04 via 13.10.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Why ARM Servers, And Why Now? (EnterpriseTech)

Timothy Prickett Morgan takes a look at how the ARM server ecosystem is coming along. "As Canonical announced two weeks ago, Ubuntu Server 14.04 LTS is the X-Gene 1 from Applied Micro and the Thunder from Cavium Networks. Red Hat has demonstrated its Fedora development Linux on the X-Gene 1 and AMD’s “Seattle” Opteron A1150 processors, and Jon Masters, chief ARM architect at Red Hat, said at the ARM Tech Day that 98.6 percent of the packages in RHEL are “ARM clean,” and added that this is a full-on 64-bit implementation of the stack and that Red Hat would not support 32-bit code and that the support for 64KB memory pages made it impossible to do a 32-bit port. That said, you can always load a virtual machine with a version of Linux that does support 32-bit code if you need to, said Masters. Red Hat will not comment on when or how ARM support will be available with the commercial-grade Enterprise Linux, and a lot depends on the availability of hardware that supports various standards (UEFI and ACPI for instance), demand from customers, and the readiness of key programs such as Java. SUSE Linux has added support for ARM chips in the openSUSE development version and is very likely shooting to have ARM support in Enterprise Server 12, due to go into beta maybe later this year for production next year."

Comments (26 posted)

OpenELEC 4 offers simple XBMC install for standalone devices (BetaNews)

OpenELEC is a complete Linux distribution based around the XBMC media player. BetaNews reviews the recently released OpenELEC 4.0 which includes XBMC 13.0. "The biggest change to OpenELEC 4.0.0 is the completely reworked build system. This doesn’t just fix outstanding issues, but makes it easier for the development team to add new features going forward. It also leads to a simplified number of builds, with specific builds for certain chipsets (including NVIDIA ION, Intel and Fusion) now removed, with support rolled into the generic 32-bit and 64-bit builds instead."

Comments (none posted)

Page editor: Rebecca Sobol


Returning BlueZ to Android

By Jake Edge
May 6, 2014
Android Builders Summit

Toward the end of 2012, Google switched the Bluetooth stack in Android—for reasons unknown, though there has always been speculation about licensing—from the GPL-licensed BlueZ to the Apache-licensed BlueDroid. That switch was for the release of Android 4.2 (one of the Jelly Bean releases). Since the switch, though, Intel and the BlueZ project have been working to restore the option of running Android with BlueZ, which provides a whole raft of additional features lacking in BlueDroid. Marcel Holtmann of the Intel Open Source Technology Center reported on the BlueZ option at the Android Builders Summit (ABS) held in San Jose, CA, April 29–May 1.

[Marcel Holtmann]

After the October 2012 Android release with BlueDroid, the initial reviews of the new stack were "not that good", Holtmann said, which is not a huge surprise for a completely new Bluetooth stack. As it turns out, based on Google's February 2014 numbers, 73% of Android devices are actually still running BlueZ because they are running earlier releases. The initial release of Google Glass ran BlueZ as well.

From the perspective of the BlueZ developers, one good thing that came out of the BlueDroid switch was the addition of a Bluetooth hardware abstraction layer (HAL). That meant that Google engineers had to think about and define what features to expose and how to expose them. In the end, Google added a Bluetooth Core HAL and a Profile HAL, he said.

BlueDroid deficiencies

When it was added, BlueDroid was said to be "tiny", but it turns out to be 286K lines of C and C++ code. There are a number of limitations to BlueDroid, Holtmann said. For example, the entire stack, which includes the Bluetooth service, HAL layers, and BlueDroid itself, all runs in a single process.

But there is much more to be concerned about with BlueDroid, according to Holtmann. He had a list of more than a dozen items that are missing or sub-par in BlueDroid. To start with, every new hardware device that will be supported needs to fork the source of BlueDroid. There is build-time configuration for the stack, including which profiles are included and what hardware features are enabled. So there is no single BlueDroid tree with support for multiple hardware platforms. The Android open source project (AOSP) only provides trees for three Nexus devices (4, 5, and 7), which are based on either Broadcom or Qualcomm hardware.

Anything more complicated than supporting the serial (UART) interface to the hardware requires that a kernel shim driver be written, which means that devices connected via USB, PCI, SPI, etc. will require drivers to be written. In addition, the bus power management is done in user space, which we have learned is "not a good idea".

BlueDroid is a lot of new code being introduced into devices, without any kind of known security audit. The Git history for the repository starts in December 2012 and has a grand total of 140 commits. Worse yet, those commits are often huge and don't have commit messages that explain what is being done or why. There is little documentation provided, essentially just the examples, and there are no unit tests.

The stack itself suffers from audio latency problems. Part of that is due to the large number of context switches required for handling every audio frame, host controller interface (HCI) packet, network packet, and other communications. The initial release of BlueDroid had no support for debugging; recompiling was required to get debugging output. Things are a bit better with the Android 4.4 (KitKat) release of BlueDroid, though, he said.

There is no Intel architecture (IA) optimization for the required SBC audio codec, nor support for other Intel-only features, which is obviously a problem for Intel and its customers, he said. The 64-bit support for BlueDroid is unclear as well. Much of it has only been compile-tested on ARM, Holtmann said.

Beyond all of that, BlueDroid is not Bluetooth-certified except for the proprietary Broadcom AirForceBT stack that also uses the Bluetooth HALs. Support for Bluetooth 4.1 is left up to the device makers; BlueDroid only provides code for Bluetooth 4.0.

Given all of that, one might ask why Google switched away from BlueZ (which doesn't have most of the problems identified), as one audience member did. Holtmann said that he has heard rumors about why the switch was made, but that he didn't want to spread them. He is, however, interested in finding out, and suggested that someone from Google should explain the choice. Google attendees were in short supply at ABS; if any were present at the talk, they didn't seem willing (or able) to answer that particular question.

Bringing BlueZ back

There are now two different ways to support BlueZ features on Android devices. The first is a port of BlueDroid to use the existing Linux kernel drivers for Bluetooth. That allows devices to use all of the existing drivers, so Bluetooth is not limited to just the UART interface, as USB, SDIO, PCMCIA (if you can still find such devices), and others are available. There is a "tiny shim layer" of around 100 lines of kernel code that the upper layers of BlueDroid talk to.

That alternative is called "BlueDroid with HCI user channel" and "it works pretty well", Holtmann said. It allows a few of the problems identified with BlueDroid to be crossed off the list (user-space power management, only a few reference devices, new drivers required, limited debugging capabilities), but most of the rest remain. Fixing those problems is the goal of the second alternative: "BlueZ for Android".

[Marcel Holtmann]

BlueZ for Android (BfA) provides a "drop-in replacement" for BlueDroid, which means that apps do not need to change. That is also true for the HCI user channel alternative since it sits below BlueDroid. The D-Bus APIs that BlueZ normally uses have been replaced by integration with the Android Bluetooth HALs. BfA brings Bluetooth 4.1 support, as well as documentation and a wide range of tests. It supports an even dozen profiles, with the Health Device Profile (HDP) currently being worked on.

It is a low-latency stack that also supports lower-power audio. BlueZ has had 64-bit support for some time now, as well as codecs optimized for the Intel architecture. It also supports Intel's hardware advanced encryption standard (AES) processing and hardware random number generation (RDRAND instruction). The code has been used and tested in a variety of different desktop and mobile platforms over many years, including Android.

Swapping in BlueZ also shrinks the laundry list of BlueDroid deficiencies to near zero. There are still too many context switches for human interface device (HID) reports and radio frequency communication (RFCOMM) streams, but the project is working on eliminating those as well. Other than that, everything on the list has been addressed.

In addition, BfA has been developed as part of the open-source BlueZ project. Its Git repository stretches back much further, with many more, well-documented commits. It is also notable that BlueZ is on its way toward switching to the LGPL. Roughly 80% of the code is already licensed that way, with more coming, though it was not clear when that job would be finished.

While it was never said in the presentation, the clear implication of Holtmann's talk was that Google made a poor choice in switching to BlueDroid. The addition of the Bluetooth HALs was good, but BlueDroid itself simply did not have the right architecture or feature set. Unless Google puts a lot of effort into BlueDroid development, it will likely fall further behind, as things like Bluetooth 4.2 are on the horizon. But it would seem that device makers already have an alternative—it will be interesting to see if (and how much) it gets used.

Comments (29 posted)

Brief items

Quote of the week

This sort of thing happens to me every day and slows me down a lot. For example, if I type `git cone` instead of `git clone`:

It says:

    $ /usr/bin/git cone
    git: 'cone' is not a git command. See 'git --help'.

    Did you mean this?
            clone

You know DAMN WELL what I meant git, and you mock me by echoing it out right in front of me.

Christian Legnitto

Comments (15 posted)

XBMC 13.0 released

Version 13.0 of the XBMC media center application is available. "The dark night of waiting is finally over. Because here it is. The stable release of XBMC 13.0 Gotham edition. It has been months of hard work, improvements and testing since the 12.x releases." New features include better Android support (including hardware video decoding), support for some 3D movie formats, better touchscreen support, better UPnP support, a reworked audio engine, and more.

Comments (27 posted)

Scipy 0.14.0 available

Version 0.14.0 of the SciPy numeric computing library for Python has been released. Changes in this version include new functions and classes for interpolation, working with multivariate random variables, signal filtering, and optimization. The release announcement also notes that this is the first release for which binary wheels are available on PyPI for OS X. There are also several deprecations; existing users should read the release notes for full details.
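One example of the multivariate additions is the scipy.stats.multivariate_normal class, new in 0.14; a minimal sketch (assuming SciPy 0.14 or later is installed):

```python
# Evaluate the density of a standard bivariate normal distribution
# using scipy.stats.multivariate_normal, added in SciPy 0.14.
import numpy as np
from scipy.stats import multivariate_normal

rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.0], [0.0, 1.0]])

# At the mean, the density of a standard 2-D normal is 1/(2*pi).
print(rv.pdf([0.0, 0.0]))  # ~0.15915
```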

Full Story (comments: none)

Tor Browser Bundle 3.6 released

Version 3.6 of the Tor Browser Bundle (TBB) has been released. Most notably, the update includes the debut of "fully integrated Pluggable Transport support, including an improved Tor Launcher UI for configuring Pluggable Transport bridges." TBB is based on Firefox; version 3.6 is built on Firefox 24.5.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Bergius: Flowhub and the GNOME Developer Experience

At his blog, Henri Bergius writes about work from this week's GNOME Developer Experience hackfest in Berlin. One outcome of said hackfest is integration of the NoFlo flow-based programming environment with the GNOME APIs. "What the resulting project does is give the ability to build and debug GNOME applications in a visual way with the Flowhub user interface. You can interact with large parts of the GNOME API using either automatically generated components, or hand-built ones. And while your software is running, you can see all the data passing through the connections in the Flowhub UI." Though there is still more work to come, it is possible to develop and debug GTK+ and Clutter applications with NoFlo.

Comments (1 posted)

Page editor: Nathan Willis


Brief items

FSF launches "Reset the Net"

The Free Software Foundation has joined a coalition of thousands of Internet users, companies and organizations in a campaign to "Reset the Net". "Organizations and companies across the technology industry and political spectrum oppose the bulk collection of data on all internet users. Reset The Net is a day of action to secure and encrypt the web to shut out the government’s mass surveillance capabilities."

Full Story (comments: none)

Articles of interest

Open Letter to European Commission: Stop DRM in HTML5

The Free Software Foundation Europe sent an open letter to the European Commission on May 6th (Day against DRM) asking the EC to prevent Digital Restrictions Management technology from being closely integrated with the HTML5 standard. "A W3C working group is currently standardising an "Encrypted Media Extension" (EME), which will allow companies to easily plug in non-free "Content Decryption Modules" (CDM) with DRM functionality, taking away users' control over their own computers. Most DRM technologies impose restrictions on users that go far beyond what copyright and consumers' rights allow."

Full Story (comments: none)

International Day Against DRM

The Free Software Foundation's Defective By Design campaign has announced that today is the International Day Against DRM. "Today we come together for the eighth International Day Against DRM, to insist on a future without restrictions on our media. This is the largest anti-DRM event in the world, and it's growing. [Head over to take action against DRM with events, petitions and more, then meet the anti-DRM community and enjoy sales on DRM-free media.]"

Full Story (comments: none)

Free Software Supporter - Issue 73, April 2014

The FSF's newsletter for April covers the International Day Against DRM, a protest of the "Windows 8 Campus Tour", a statement on Heartbleed, Document Freedom Day, GCC 4.9, the GNU MediaGoblin campaign, and several other topics.

Full Story (comments: none)

FSFE Newsletter – May 2014

The Free Software Foundation Europe newsletter for May covers Heartbleed and economic incentives, Internet Censorship and Open Standards, and much more.

Full Story (comments: none)

Calls for Presentations

Open Source Monitoring Conference 2014 – Call for papers and registration now open

OSMC 2014 will take place November 18-20 in Nuremberg, Germany. The call for papers closes June 30. "From presentations for beginners to presentations on deploying monitoring solutions in very large environments or cluster systems, the conference always offers something for everyone. As in the previous years, the conference language will be German and English."

Full Story (comments: none)

1st Call For Papers, 21st Annual Tcl/Tk Conference 2014

The Tcl/Tk conference will take place November 10-14 in Portland, Oregon. The call for papers deadline is September 8. "The program committee is asking for papers and presentation proposals from anyone using or developing with Tcl/Tk (and extensions)."

Full Story (comments: none)

XDC2014: Call for papers

The X.Org Developers Conference (XDC) 2014 will be held in Bordeaux, France from October 8-10. Proposals are due September 10. "As usual, we are open to talks across the layers of the graphics stack, from the kernel to desktop environments / graphical applications and about how to make things better for the developers who build them."

Full Story (comments: none)

CFP Deadlines: May 8, 2014 to July 7, 2014

The following listing of CFP deadlines is taken from the CFP Calendar.

Deadline  Event dates                Event                                                   Location
May 9     June 10–June 11            Distro Recipes 2014 - canceled                          Paris, France
May 12    July 19–July 20            Conference for Open Source Coders, Users and Promoters  Taipei, Taiwan
May 18    September 6–September 12   Akademy 2014                                            Brno, Czech Republic
May 19    September 5                The OCaml Users and Developers Workshop                 Gothenburg, Sweden
May 23    August 23–August 24        Free and Open Source Software Conference                St. Augustin (near Bonn), Germany
May 30    September 17–September 19  PostgresOpen 2014                                       Chicago, IL, USA
June 6    September 22–September 23  Open Source Backup Conference                           Köln, Germany
June 6    June 10–June 12            Ubuntu Online Summit 06-2014                            online
June 20   August 18–August 19        Linux Security Summit 2014                              Chicago, IL, USA
June 30   November 18–November 20    Open Source Monitoring Conference                       Nuremberg, Germany
July 1    September 5–September 7    BalCCon 2k14                                            Novi Sad, Serbia
July 4    October 31–November 2      Free Society Conference and Nordic Summit               Gothenburg, Sweden
July 5    November 7–November 9      Jesień Linuksowa                                        Szczyrk, Poland

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: May 8, 2014 to July 7, 2014

The following event listing is taken from the Calendar.

Dates            Event                                        Location
May 8–May 10     LinuxTag                                     Berlin, Germany
May 12–May 16    Wireless Battle Mesh v7                      Leipzig, Germany
May 12–May 16    OpenStack Summit                             Atlanta, GA, USA
May 13–May 16    Samba eXPerience                             Göttingen, Germany
May 15–May 16    ScilabTEC 2014                               Paris, France
May 17           Debian/Ubuntu Community Conference - Italia  Cesena, Italy
May 20–May 22    LinuxCon Japan                               Tokyo, Japan
May 20–May 21    PyCon Sweden                                 Stockholm, Sweden
May 20–May 24    PGCon 2014                                   Ottawa, Canada
May 21–May 22    Solid 2014                                   San Francisco, CA, USA
May 23–May 25    PyCon Italia                                 Florence, Italy
May 23–May 25    FUDCon APAC 2014                             Beijing, China
May 24           MojoConf 2014                                Oslo, Norway
May 24–May 25    GNOME.Asia Summit                            Beijing, China
May 30           SREcon14                                     Santa Clara, CA, USA
June 2–June 4    Tizen Developer Conference 2014              San Francisco, CA, USA
June 2–June 3    PyCon Russia 2014                            Ekaterinburg, Russia
June 9–June 10   Erlang User Conference 2014                  Stockholm, Sweden
June 9–June 10   DockerCon                                    San Francisco, CA, USA
June 10–June 11  Distro Recipes 2014 - canceled               Paris, France
June 10–June 12  Ubuntu Online Summit 06-2014                 online
June 13–June 15  DjangoVillage                                Orvieto, Italy
June 13–June 15  State of the Map EU 2014                     Karlsruhe, Germany
June 13–June 14  Texas Linux Fest 2014                        Austin, TX, USA
June 17–June 20  2014 USENIX Federated Conferences Week       Philadelphia, PA, USA
June 19–June 20  USENIX Annual Technical Conference           Philadelphia, PA, USA
June 20–June 22  SouthEast LinuxFest                          Charlotte, NC, USA
June 21–June 22  AdaCamp Portland                             Portland, OR, USA
June 21–June 28  YAPC North America                           Orlando, FL, USA
June 23–June 24  LF Enterprise End User Summit                New York, NY, USA
June 24–June 27  Open Source Bridge                           Portland, OR, USA
July 1–July 2    Automotive Linux Summit                      Tokyo, Japan
July 5–July 6    Tails HackFest 2014                          Paris, France
July 5–July 11   Libre Software Meeting                       Montpellier, France
July 6–July 12   SciPy 2014                                   Austin, TX, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds