As David Greaves held up a phone to begin his Embedded Linux Conference talk, he said that it would be no surprise that the phone was not running Windows Mobile or iOS—the surprise is that it is not running Android either. He was, in fact, holding up the Qt/Wayland/systemd/Btrfs phone, which is better known as the Jolla phone. Unlike Android-based phones, though, it really is a "Linux phone", he said.
Greaves started out at British Telecom, then worked on Maemo and its successor, MeeGo. The latter has been shut down, but Nokia and, especially, Intel did an awful lot of work in creating MeeGo. He and Carsten Munk looked at MeeGo and saw much there that was useful, so they founded the Mer project. MeeGo was a bit over-ambitious, though, so Mer slimmed down its roughly 1500 packages to around 300.
That's the background for the Jolla (which Greaves pronounced as "Yah-La", which contrasts with our last attempt at capturing the pronunciation) phone. But he stressed that he was not giving a "pitch for Jolla", the company he now works for. He is proud of what the team of 90 people has done in less than a year, both in hardware design and in developing an operating system and user interface, but the fact that Jolla has "proven" the Mer approach is also important.
One thing that Greaves's talk did not do was to dispel the murkiness around the boundaries and capabilities of the various components. Projects like Mer, Nemo Mobile, Sailfish OS, and the Jolla phone have been talked about over the years, but it is (and always has been) a bit unclear where one project stops and the next starts up. Part of that is likely caused by the overlap in participants among those projects, but it does, at times, get rather confusing.
In any case, Sailfish OS can (obviously) run on Jolla devices, but it can also run on the Galaxy S3 and the Nexus 4. That is part of the Sailfish for Android project. Support for more devices is in the works using the Hardware Adaptation Development Kit (HADK or "haddock" following the nautical theme). The idea is that any device that is rootable and will run CyanogenMod will also be able to run Sailfish OS. That will allow many more folks to try out the phone operating system without buying any new hardware.
QtWayland replaces Android's SurfaceFlinger as the compositor for Sailfish OS. It is not currently using the Android HWcomposer (for 2D graphics) "strenuously", but there are plans to do more of that. The performance of QtWayland is "pretty damn good". Wayland was chosen because it is not difficult to work with and it meshes well with the Android shared-buffers approach, he said. In the end, the developers don't really notice Wayland at all, since QtWayland basically handles that interaction for them.
One area where Qt has problems is its size. It is a "big piece of software", which means it eats up a fair amount of phone resources. But Qt 5.2 has started modularizing the framework, so that only the needed parts have to be included.
Another technology used in Mer/Sailfish OS is systemd, which has "been quite polarizing" in the open-source world. There is something of a love it or hate it attitude about systemd, but they came down on the "love it" side, Greaves said. There were times where "we swore a little bit" at systemd, but by and large it served their needs well.
Systemd is "really fast", predictable, and well-documented, he said. The Journal is "a good fit for us" as well. There have been some issues with user sessions, but part of that may be that the version of systemd in use was adopted "a long time ago in systemd time". In fact, the biggest problem they have run into with systemd is its rapid development pace and how closely tied it is to newer kernel versions. He hopes that Debian adopting systemd will help alleviate some of those problems in the future.
For Sailfish OS, there is a mix of systemd and Android init code. In fact, the Android init is being run as a service under systemd. The uevent data from Android is being interpreted by udev rules and mount units are created from the Android rc files. Jolla is still exploring how to manage the "mix of two worlds", he said.
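One way to picture that arrangement is a native systemd unit supervising the Android init binary. The unit below is a hypothetical sketch for illustration only; the unit name, binary path, and options are assumptions, not Jolla's actual configuration:

```ini
# Hypothetical sketch: running Android's init as a service under systemd.
# The path /usr/libexec/droid/init and the unit name are assumptions.
[Unit]
Description=Android init (Android-side services)
After=local-fs.target

[Service]
# Android's init normally runs as PID 1; here systemd supervises it
# as an ordinary service and restarts it if it dies.
ExecStart=/usr/libexec/droid/init
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The uevent and rc-file handling described above would then live in udev rules and generated mount units alongside this service.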
Mer/Sailfish OS uses Btrfs for storing data. No partitions are used; Btrfs's dynamic subvolumes are used instead. Snapshots are used to restore the factory settings, but those settings are old at this point, so Jolla is looking into updating that snapshot. In addition, it is looking at using snapshots for features like rolling back unwanted changes and for easy backups.
For handling the network, ConnMan is used. They ran into some problems using it, but writing ifup/ifdown scripts would have been far worse, he said. A recent upgrade to a newer version of ConnMan has helped, too. There are some difficult issues around network handling, no matter what solution is used. The rules for choosing when to use various data sources (3G, WiFi, etc.) and when to switch between them are rather complicated. For phone features, oFono was used, while PulseAudio handled the audio path; both worked quite well, with few problems, he said.
System-on-Chip (SoC) makers typically only create board support packages (BSPs) for Android. "Open hardware is great", Greaves said, but there is not much of it around. Instead there are a bunch of user-space blobs for various pieces such as the camera and haptic feedback. All of those blobs use the Bionic C library, but Sailfish OS (and many other Linux systems) use glibc. Hybris (or libhybris) is meant to bridge that gap.
The idea is to allow Bionic-using code to co-exist with glibc-using code in the same process address space. It required wrappers around Bionic functions and some name mangling (prepending "android_" on Bionic functions for example). It was done with relatively few patches (around ten) to Bionic, he said, mostly for functionality like POSIX threads, errno, hardware vs. software floating point, and so on.
Using libhybris means that, for example, the Android EGL and GLES libraries can be used by glibc applications. Then it is just a matter of "rinse and repeat" to get wrappers for other Android libraries, like Gralloc, NFC, OpenCL, Camera, and so on. Greaves said that the project(s) will be trying to make libhybris work for any device that has a CyanogenMod build.
The rest of the phone is a "fairly standard Linux stack". It uses Git for phone backups, though some tweaks were made to handle videos since "Git-committing videos is not a bright idea". It uses RPM as its package manager, and Gstreamer for multimedia content. It uses the Linux kernel, too, of course, though the project doesn't do much with it: "if it boots and runs the hardware", it gets left alone.
The Mer project has a Platform SDK that is used to build the distribution. Mer "didn't want to have to worry" about which tool versions were installed on the builder's system, so the SDK includes all of the right versions to be used inside of a chroot(). It uses Scratchbox2 for cross-compilation support. Scratchbox2 is "sane, sensible, and usable", Greaves said, which is quite a departure from the earlier Scratchbox. It is so much different (and better) that he wishes it had been given a new name.
Packages can be created easily. Typically it requires some minor tweaks to an existing package spec file. The result of the build is an RPM file that can be installed on the device. There is some LD_PRELOAD trickery in Scratchbox2 to invoke the cross-compiler (using Lua to do pattern-matching), all of which is "a little ugly", but makes it "seamless to the developer", he said.
The SDK can run in a virtual machine, which allows Windows and Mac users to use it. There are a small number of people in the company and project, so porting all of the tools to multiple platforms is not a viable solution. Virtualization solves that problem, he said.
Jolla and Mer use the Open Build Service (OBS), which is an "incredibly powerful build system", he said. Mer focuses on trying to enable vendors to make products, and OBS fits right into that model. Someone can submit a package to OBS and it will automatically build versions for multiple architectures.
It is not just packages that are built that way, though. The latest development trees of Mer and Sailfish OS are built regularly, typically driven by Git commits. The automated system will invoke mic (originally, the Moblin Image Creator—and the reason Mer had to start with an "M", he said with a chuckle) to create an image that can be flashed onto a device for testing.
Mer and Jolla came out of Maemo and MeeGo, which taught them a lot about working in the open. In the MeeGo days, some of the now-Jolla folks sometimes gave Nokia and Intel a hard time about not working more in the open. But now, those folks appreciate the problem from the other side. It's a hard problem and they are trying to learn from the mistakes that have been made in the past.
For Jolla, there are (at least) two pieces to the problem: working with its upstream (Mer) and collaborating with other vendors on new devices. Jolla's internal policies are geared toward the way it works, some of which may or may not mesh well with open-source projects and Mer in particular. For example, all commits must refer to or close a bug number (as the bug tracker is also used as a task tracker). There is an automated process that adds the commit message to the bug tracker, but that isn't particularly useful for Mer, which has a public bugzilla.
Jolla has an internal open-source policy as well. To start with, employees should participate in open-source projects as themselves, rather than as representatives of Jolla. But not all employees came from an open-source background, so some amount of education is needed. Policies covering interactions with projects is part of that as well. Helping to create those policies sometimes made Greaves feel like Bill & Ted: "Be excellent to each other", he said with a laugh.
The Mer project consolidates all of the open-source work that Jolla does (much of the UI layer is still closed). The Mer project's vision is "to make it easy to make devices". That explains what the project is doing and why, he said. For example, there is no UI for Mer. Nemo Mobile was a project started to create the middleware and UI for Mer, but most who are using Mer also use the middleware, so the middleware has been moved into Mer. That means that Nemo Mobile is just UI now, which shrinks it down considerably and, he hopes, makes it more accessible as a community project.
From the Mer project's point of view, the Jolla device is a "huge credibility shot" for the project. But Mer works with others too, including Ubuntu and Intel on libhybris. Mer aspires to be more than just code, because "code is not enough". For devices to be easier to make, it requires best-practices documentation in lots of areas: quality assurance, how to do releases, how to manage regressions, building images, handling source repositories, bug tracking, and more.
But, the Jolla phone has shown that the Mer approach works. It took around nine months for 90 people to deliver a working phone with a brand new UI. Others can do the same.
All of the projects would love to have more participants, he said. For example, the camera does not currently work on Sailfish for Android on the Galaxy S3, but Ubuntu got the camera working for Ubuntu Touch, so "come help make it work". In addition, phone platforms are really just 3G data platforms with a touchscreen and some other interesting hardware; phones are not the only things that can be created on such a platform.
While Mer-based devices may not be free-software compliant (because of the binary blobs), Mer will be ready when free drivers come along. It doesn't make sense to wait for those drivers and then to start working on phones and other devices, he said. In that scenario, devices that are free-software-only will have fallen way too far behind.
LyX is a graphical document editor that serves as a front end to TeX and various TeX extensions (LaTeX, XeTeX, etc.). In that sense, LyX serves as a bridge between the high-precision world of TeX typesetting and the easier-to-use WYSIWYG world of word processors. Version 2.1.0 was released on April 25, incorporating updates to the handling of special-purpose content like equations or phonetic notation, improvements to LaTeX option support, and several new page-layout features.
The new release is available for download as source code and in binary packages for a variety of Linux distributions. It requires a working LaTeX installation (due to LaTeX's modularity, almost any modern version should be compatible) as well as Qt and Python.
LyX aims at a target that, to some, sounds inherently unattainable: it is designed to make TeX documents as easy to work with as generic letters and memos are in a word processor like LibreOffice Writer. The challenge stems from the fact that TeX was created to enable fine-grained manipulation of typesetting features that word processors gloss right over. TeX succeeds at this goal, which is why it is the de-facto document-preparation system for scientific research—after all, when one's PhD or professional reputation is on the line, having the equations look "more-or-less correct" just does not cut it. But, in practice, TeX achieves its precision by relying on well-honed macro collections and predefined document classes. Projects like LaTeX and BibTeX provide useful macros and shortcuts that authors can employ rather than writing raw TeX markup by hand.
Even so, the fact that authors can always drop down into TeX markup to handle the inevitable corner cases or specify mathematical formulas directly in math syntax is a strength—when no LaTeX option generates perfect output, TeX is always there to come to the rescue. Consequently, there is a school of thought which says that LyX, with its word-processor-like interface, does not add to the TeX-writing experience, it simply hides it behind another layer of indirection. That is, LyX users are not freed from the need to understand LaTeX and TeX, since they will eventually confront the same corner cases.
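As a concrete illustration of that escape hatch (a generic example, not one from the article), a formula can always be written directly in LaTeX math markup; LyX exposes this through its TeX-code ("ERT") insets:

```latex
% A formula written directly in LaTeX math syntax. When a GUI
% equation editor falls short, markup like this is always available.
\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}
\end{equation}
```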
But that viewpoint short-sells much of what LyX has to offer. Yes, LyX provides a word-processor-like graphical user interface (GUI), but the GUI does not hide TeX's features from the user—it provides a way to access them via visual cues in the document and GUI components (e.g., toolbars and menu items). LyX does not reduce TeX to an implementation detail, but it makes it easier to create a valid TeX document—and, perhaps more importantly, it makes it more difficult to create a bad TeX document.
Many of the improvements found in LyX 2.1 illustrate this fact. For example, the most basic decision about a LaTeX document is the class to which it belongs, and generic offerings like "letter" and "article" are only the beginning: each field, journal, and professional organization can have its own stylistic rules, with a separate class to represent them. In LyX 2.1, all of these classes are organized into categories (as opposed to the flat list of previous generations), which helps make sense of the plethora of alternatives.
Similarly, version 2.1 adds GUI support for accessing far more LaTeX options, and it does a better job of presenting and explaining those options to the user. LaTeX provides macros that simplify formatting for common document features, such as code listings; LyX exposes the various options in pop-up dialogs. In 2.1, these option panels have been rewritten to standardize terminology, make all options visible at the same time, and to allow presets for common values. Support for several new commands has been added as well, many of which deal with mathematical expressions or with horizontal spacing tweaks.
Table support got an update in 2.1. Arguably the most useful new feature is the ability to move or swap table rows and columns (either with keystrokes or menu commands). It is also now possible to rotate tables on the page—to any arbitrary angle, not just 90-degree increments. Perhaps it is difficult to imagine a use for such rotation functionality, but that might be overthinking matters.
The removal of arbitrary restrictions is a recurring theme in TeX; LyX 2.1 also adds support for custom paragraph shapes, but in removing the restriction that limited paragraphs to simple rectangles, the project could have settled for implementing a fixed set of polygons. Instead, any shape is supported, including unusual options like shapes with holes in the middle. It is also now possible to nest multiple columns of text within an existing column (which is useful for citing and quoting other documents where the page layout itself is important).
Several specific use-cases for LyX received their own improvements in the 2.1 release, such as the beamer class for creating presentation slides, phonetic notation using the International Phonetic Alphabet (IPA), and mathematical formulas. The beamer improvements center around a rewrite of the formerly awkward beamer layout module, but also add some new features like the ability to overlay content. Real IPA support is new; the IPA characters are accessible to the user through a special toolbar, providing an editing workflow that resembles the one used to access special math characters. In previous releases, the rudimentary IPA support was a hack of the existing math-editing code, so this represents a step forward.
Improved support for math typesetting is hardly a surprise; TeX was originally created by Donald Knuth to help him typeset The Art of Computer Programming, and TeX is used heavily by math journals. Nevertheless, there is always room for advancement. LyX 2.1 improves on its formula-and-equation support by adding a document-wide "math font" setting (which will not get overwritten if one changes the font of other body text), by adding a unicode-math package that supports math OpenType fonts (such as STIX or XITS), and by adding a new inline "equation editor" mode.
Several new languages are supported in LyX 2.1, if one activates the optional XeTeX output engine. XeTeX is best known for its support of OpenType, Apple Advanced Typography (AAT), and Graphite font systems, which enable typesetting many writing systems that are not supported (or are not easily supported) by pdfTeX, the default LyX output engine. XeTeX also supports several microtypographic features like hanging punctuation. It is also important to note that "language support" in LyX has a stronger meaning than it does in some other applications: switching the language setting of a document causes LyX to automatically make adjustments to features like the type of quotation marks used.
Speaking of typography, LyX 2.1 also adds support for several new TeX fonts. Unlike the WYSIWYG word-processor world, in which users can highlight any characters they want and put them in a different font via a drop-down menu, in the TeX world, font settings are often a document-centric decision. If the document class designer specifies one font for subheadings and another for body text, then that is simply how it goes, unless one adds document-specific overrides. LyX now ships with more built-in fonts available, and makes it easier for authors to add their own.
Finally, there are many small additions and enhancements in the new release, such as the ability to write multilingual captions, the ability to insert standard-issue "chemical risk and safety" statements, and a new dependency on libmagic to determine the file type of external resources (as opposed to built-in format detection). Nevertheless, there are also some limitations to make note of. Most importantly, LyX 2.1 is not yet compatible with Python 3; if Python 3 is the default system interpreter on the machine where LyX is installed, one should expect some trouble.
In all, LyX 2.1 is an incremental improvement over previous LyX releases—at least where everyday document editing is concerned—but there are still some significant enhancements that make a reexamination worthwhile for those who have found LyX too limiting in the past. There is still a learning curve, and the conceptual model of TeX is different enough from WYSIWYG word processing that some amount of retraining is inevitable. But for those who think TeX is too difficult to learn, a few minutes with LyX is a worthwhile investment.
[Those interested in TeX development should also see Knuth's TeX Tune-Up of 2014 [PDF] from TUGboat 35–1]
The Electronic Frontier Foundation (EFF) has released a browser add-on called Privacy Badger that repurposes the familiar "ad-blocking extension" concept to filter and block out web-tracking tools, rather than advertisements. Privacy Badger detects a number of behavior-tracking methods, attempting to block those that are either loaded invisibly or otherwise operate without the user's consent. In addition to its emphasis on privacy protection, though, it also offers several controls that distinguish it from ad-centric blockers like AdBlock Plus. Perhaps more interestingly, the extension is accompanied by an EFF policy through which sites can be whitelisted by adhering to privacy-respecting rules.
Privacy Badger was announced on May 1, with builds available for Firefox (although not Firefox for Android) and for Chrome/Chromium. The stated purpose of the extension is to help users combat "intrusive and objectionable practices in the online advertising industry, and many advertisers' outright refusal to meaningfully honor Do Not Track requests."
Do Not Track (DNT), of course, is an HTTP header intended to let users specify that they wish to opt out of web-tracking mechanisms. DNT was designed to be a voluntary mechanism that advertisers and data collectors would use as a means of self-regulation. Those businesses have done their best to undermine DNT, however, as many privacy advocates predicted they would.
Among other tactics, various advertising associations devised their own "interpretations" of DNT that, predictably, still involve tracking DNT users. On April 30, Yahoo's "Privacy Team" publicly announced that the company will start ignoring DNT completely, on the grounds that there is no "single standard" about the meaning of DNT. With the voluntary-self-policing loop now neatly closed, it should probably come as no surprise that the EFF followed up with a technical solution—although the timing of events could still be coincidental.
Privacy Badger is based on a fork of the AdBlock Plus engine; it blocks certain HTTP requests, but rather than blocking ads, the blocked content is limited to third-party requests (scripts, cookies, images, or other embedded resources) that are believed to be used as a user-tracking mechanism. These third-party resources are what Privacy Badger regards as "trackers;" they tend to be invisible to the user, but they allow the third-party domain to follow the user across multiple sites by logging the HTTP requests (usually setting a cookie containing some form of identifying string). Not requesting these resources in the first place prevents the remote party from tracking the user; the majority of these trackers emanate from the domains of third-party services, but some come from sites that otherwise contribute functionality to the page. Since blocking all third-party resources would break functionality of many sites, the extension attempts to distinguish between necessary resources and unnecessary ones. The EFF collected data prior to the release of the extension and created a whitelist of patterns that Privacy Badger will not block.
For third-party trackers not on the whitelist, however, Privacy Badger starts off by giving each site the benefit of the doubt. It includes the DNT header with each request, and does not block the tracker when it is first encountered. But if the tracker is encountered on another, unrelated site, that is regarded as evidence that it is violating the user's privacy, and it is added to the block list.
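That counting heuristic can be sketched in a few lines of Python. This is an illustration only: the three-site threshold and the class names are assumptions, not Privacy Badger's actual implementation.

```python
# Sketch of a "block after N unrelated first-party sites" heuristic.
# The threshold of 3 is an assumption chosen for illustration.
BLOCK_THRESHOLD = 3

class TrackerTally:
    def __init__(self):
        # tracker domain -> set of first-party sites it was seen on
        self.seen_on = {}

    def record(self, tracker_domain, first_party_site):
        self.seen_on.setdefault(tracker_domain, set()).add(first_party_site)

    def should_block(self, tracker_domain):
        # Block once the tracker has followed the user across enough
        # unrelated sites; before that, give it the benefit of the doubt.
        return len(self.seen_on.get(tracker_domain, ())) >= BLOCK_THRESHOLD

tally = TrackerTally()
for site in ["news.example", "shop.example", "blog.example"]:
    tally.record("ads.tracker.example", site)

print(tally.should_block("ads.tracker.example"))  # True: seen on 3 sites
print(tally.should_block("cdn.benign.example"))   # False: never recorded
```

The real extension also has to decide what counts as the "same" tracker across subdomains, which is where much of the actual complexity lies.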
The status of the current page can be examined by opening the Privacy Badger menu (which, on Firefox, is placed in the "Add-on Bar"). All trackers detected in the current page are shown, color-coded to indicate their blocking state. Green means that the tracker is being allowed, yellow means it is a cross-domain tracker on the whitelist (that is, it is being permitted to prevent the site from breaking), and red means it is being blocked. The very first time a user employs Privacy Badger, all of the trackers will be either green or yellow, but the privacy-violating ones quickly get recognized and turned red after visiting just a few sites.
For the whitelisted tracker domains, Privacy Badger loads the resources (e.g., scripts or images), but it still blocks user-tracking cookies from the domain, which should provide some measure of privacy protection. It is not always possible to determine whether a given cookie is used for user tracking purposes or not, of course; the heuristic used allows cookies that have some other clear purpose (such as setting the preferred language), but the EFF notes that more work on the problem would be helpful.
In practice, the Privacy Badger menu is a nice visualization aid. It shows only the domain name of the tracker, whereas AdBlock Plus and similar extensions generally present lengthy URLs and the full regular expressions used to match them. That means skimming through it is a lot easier.
In addition, the green/yellow/red status of each tracker also has a slider (albeit one that has just three discrete positions), so users can easily toggle between the settings for every tracker if they so desire. That is probably most useful for enabling a blocked tracker that is hampering site functionality, but it can be employed for other tasks, too (like seeing how many yellow trackers one can disable and still have a functioning browser session). Here, again, the ad-blocking extensions tend to expose a significantly less usable interface: if a blocked item is breaking page functionality, one must usually hunt through the blocked-items window, enabling and disabling specific expressions in hopes of finding it.
To be perfectly fair, though, ad blockers have a broader scope of content to try to match against, so it is only natural that they have more complicated tools with which to tune the results. The EFF goes to great lengths to explain that Privacy Badger is not, fundamentally speaking, an ad blocker. It will, as a matter of blocking third-party trackers, block third-party-tracker-laden ads, but users interested in reducing their exposure to advertising will need to find another extension to handle the task.
There are two other important categories of tracker that Privacy Badger does not protect against: "first-party" trackers and trackers that rely only on browser fingerprinting techniques. First-party trackers are tracking elements sent by the domain of the main URL itself. As with the whitelisted domains mentioned earlier, blocking those resource requests too aggressively would risk breaking the site's functionality; nevertheless, the EFF notes that it hopes to implement some level of first-party tracker blocking in a subsequent release.
Browser fingerprinting is a different beast entirely. The technique relies on gathering specific information about the user by recording information from the browser's User-Agent string, installed plugins, local time zone, accepted HTTP headers, and other system data that can be queried remotely. The EFF's Panopticlick demonstrates just how much data is leaked in this manner. As with first-party trackers, the Privacy Badger project says it hopes to add fingerprinting countermeasures in a future release, but those countermeasures will certainly involve techniques beyond tracker blocking.
As mentioned earlier, Privacy Badger includes the DNT header in each HTTP request; consequently, sites that respect the header and do not return user trackers do not get blocked. The EFF is using this approach as a means to promote DNT adoption. Specifically, advertisers (and other tracker-using sites) that specify a DNT-respecting policy will, in future versions of Privacy Badger, automatically be unblocked.
The EFF has written a proposed DNT policy as part of the initiative. The plan is that a site would store the policy document in plain text at a well-known location (https://example-domain.com/.well-known/dnt-policy.txt in the current draft), where Privacy Badger and other programs could locate it automatically and take the appropriate action in response (such as whitelisting the site). The hope is that if DNT policy statements become widespread, as robots.txt files are for search-engine exclusion, tracker-blocking programs like Privacy Badger can dispense with the built-in whitelist approach currently in use.
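Locating the policy is mechanical: given a domain, a checker builds the well-known URL and fetches it. A minimal sketch (the fetch itself is omitted; the path is the one from the current draft):

```python
# Build the well-known DNT-policy URL for a domain, following the
# location given in the EFF's current draft. A real checker would
# fetch this URL and compare its contents against the published
# policy text before whitelisting the domain.
def dnt_policy_url(domain):
    return "https://%s/.well-known/dnt-policy.txt" % domain

print(dnt_policy_url("example-domain.com"))
# https://example-domain.com/.well-known/dnt-policy.txt
```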
But dispensing with the hand-crafted whitelist is only part of the goal. The ultimate point is for sites to respect the DNT header. For that to happen, Privacy Badger and related tools will have to be deployed in significant enough numbers for advertisers to take notice. The EFF notes on the DNT policy page that it is open to having further discussions about the wording of the DNT policy document. If that policy document does take off, it would in essence be the de-facto standard interpretation of DNT's meaning—which would mean, in turn, that there is a consensus around DNT, which would eliminate the "no one agrees on what DNT means" argument espoused recently by Yahoo.
Of course, if that argument is really a spurious claim only tossed out to provide cursory justification for what the company wants to do anyway, then Yahoo and other tracker-using sites will find another argument and continue to track users. It is hard to handicap the chances that Privacy Badger has for making a significant impact on user-tracking behavior. It may remain a useful tool that only a few users employ (as is the case with ad-blocking extensions and other EFF privacy tools like HTTPS Everywhere). On the other hand, browser makers could take the concept to heart and build it into future releases, changing the game significantly.
For now, Privacy Badger is an alpha release, and much more work is still to come. But it is an easy-to-use tool, and it both offers protection against web trackers and sheds light on just how pervasive web-tracker deployment is; both are useful outcomes. The mobile versions of Chrome and Firefox are on the agenda for future releases, as is Opera support; on the project site, the EFF asks for developers interested in working on Safari and Internet Explorer extensions to make contact. There is no telling how well the project will fare as a DNT enforcement tool, but it may be the best option currently available.
Package(s): asterisk
CVE #(s): CVE-2014-2288 CVE-2014-2289
Created: May 5, 2014    Updated: May 9, 2014
Description: From the CVE entries:
The PJSIP channel driver in Asterisk Open Source 12.x before 12.1.1, when qualify_frequency "is enabled on an AOR and the remote SIP server challenges for authentication of the resulting OPTIONS request," allows remote attackers to cause a denial of service (crash) via a PJSIP endpoint that does not have an associated outgoing request. (CVE-2014-2288)
res/res_pjsip_exten_state.c in the PJSIP channel driver in Asterisk Open Source 12.x before 12.1.0 allows remote authenticated users to cause a denial of service (crash) via a SUBSCRIBE request without any Accept headers, which triggers an invalid pointer dereference. (CVE-2014-2289)
Package(s): chromium-browser
CVE #(s): CVE-2014-1730 CVE-2014-1731 CVE-2014-1732 CVE-2014-1733 CVE-2014-1734 CVE-2014-1735 CVE-2014-1736
Created: May 5, 2014    Updated: May 16, 2014
Description: From the CVE entries:
Google V8, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly store internationalization metadata, which allows remote attackers to bypass intended access restrictions by leveraging "type confusion" and reading property values, related to i18n.js and runtime.cc. (CVE-2014-1730)
core/html/HTMLSelectElement.cpp in the DOM implementation in Blink, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly check renderer state upon a focus event, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that leverage "type confusion" for SELECT elements. (CVE-2014-1731)
Use-after-free vulnerability in browser/ui/views/speech_recognition_bubble_views.cc in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux allows remote attackers to cause a denial of service or possibly have unspecified other impact via an INPUT element that triggers the presence of a Speech Recognition Bubble window for an incorrect duration. (CVE-2014-1732)
The PointerCompare function in codegen.cc in Seccomp-BPF, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, does not properly merge blocks, which might allow remote attackers to bypass intended sandbox restrictions by leveraging renderer access. (CVE-2014-1733)
Multiple unspecified vulnerabilities in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2014-1734)
Multiple unspecified vulnerabilities in Google V8 before 184.108.40.206, as used in Google Chrome before 34.0.1847.131 on Windows and OS X and before 34.0.1847.132 on Linux, allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2014-1735)
From the Debian advisory:
Package(s): cups-filters
CVE #(s): CVE-2014-4336 CVE-2014-4337 CVE-2014-4338
Created: May 6, 2014    Updated: November 4, 2014
Description: From the Red Hat bugzilla:
According to Sebastian Krahmer, the initial fix for CVE-2014-2707 is incomplete:
"This issue was reported as fixed in 1.0.51:
but it was found that the fix was incomplete with the full fix in 1.0.53:
On June 19, CVE entries CVE-2014-4336, CVE-2014-4337, and CVE-2014-4338 were assigned to this issue. From the Mageia advisory:
The CVE-2014-2707 issue with malicious broadcast packets, which had been fixed in Mageia Bug 13216 (MGASA-2014-0181), had not been completely fixed by that update. A more complete fix was implemented in cups-filters 1.0.53 (CVE-2014-4336).
In cups-filters before 1.0.53, out-of-bounds accesses in the process_browse_data function when reading the packet variable could lead to a crash, resulting in a denial of service (CVE-2014-4337).
In cups-filters before 1.0.53, if there was only a single BrowseAllow line in cups-browsed.conf and its host specification was invalid, this was interpreted as if no BrowseAllow line had been specified, which resulted in it accepting browse packets from all hosts (CVE-2014-4338).
Package(s): fish
CVE #(s): CVE-2014-2905 CVE-2014-2914 CVE-2014-2906
Created: May 6, 2014    Updated: October 9, 2014
Description: From the Red Hat bugzilla:
A number of vulnerabilities were reported in fish versions prior to 2.1.1:
CVE-2014-2905: fish universal variable socket vulnerable to permission bypass leading to privilege escalation
fish, from at least version 1.16.0 to version 2.1.0 (inclusive), does not check the credentials of processes communicating over the fishd universal variable server UNIX domain socket. This allows a local attacker to elevate their privileges to those of a target user running fish, including root.
fish version 2.1.1 is not vulnerable.
CVE-2014-2906: fish temporary file creation vulnerable to race condition leading to privilege escalation
fish, from at least version 1.16.0 to version 2.1.0 (inclusive), creates temporary files in an insecure manner.
Versions 1.23.0 to 2.1.0 (inclusive) execute code from these temporary files, allowing privilege escalation to those of any user running fish, including root.
Additionally, from at least version 1.16.0 to version 2.1.0 (inclusive), fish will read data using the psub function from these temporary files, meaning that the input of commands used with the psub function is under the control of the attacker.
fish version 2.1.1 is not vulnerable.
CVE-2014-2914: fish web interface does not restrict access leading to remote code execution
fish, from version 2.0.0 to version 2.1.0 (inclusive), fails to restrict connections to the Web-based configuration service (fish_config). This allows remote attackers to execute arbitrary code in the context of the user running fish_config.
The service is generally only running for short periods of time.
fish version 2.1.1 restricts incoming connections to localhost only. At this stage, users should avoid running fish_config on systems where there are untrusted local users, as they are still able to connect to the fish_config service and elevate their privileges to those of the user running fish_config.
Created: May 6, 2014    Updated: July 24, 2014
Description: From the Ubuntu advisory:
A flaw was discovered in the Linux kernel's pseudo tty (pty) device. An unprivileged user could exploit this flaw to cause a denial of service (system crash) or potentially gain administrator privileges.
Package(s): libpng12
CVE #(s): CVE-2013-7353 CVE-2013-7354
Created: May 2, 2014    Updated: June 10, 2014
Description: From the openSUSE bug reports:
CVE-2013-7353: An integer overflow leading to a heap-based buffer overflow was found in the png_set_sPLT() and png_set_text_2() API functions of libpng. An attacker could create a specially crafted image file that, when rendered by an application written to explicitly call the png_set_sPLT() or png_set_text_2() function, could cause libpng to crash or execute arbitrary code with the permissions of the user running that application.
The vendor mentions that internal calls use safe values. These issues could potentially affect applications that use the libpng API. Apparently no such applications were identified.
CVE-2013-7354: An integer overflow leading to a heap-based buffer overflow was found in the png_set_unknown_chunks() API function of libpng. An attacker could create a specially crafted image file that, when rendered by an application written to explicitly call the png_set_unknown_chunks() function, could cause libpng to crash or execute arbitrary code with the permissions of the user running that application.
The vendor mentions that internal calls use safe values. These issues could potentially affect applications that use the libpng API. Apparently no such applications were identified.
Created: May 2, 2014    Updated: May 7, 2014
Description: An unprivileged user can, through a specific sequence of calls, cause the libvirtd daemon to crash.
Created: May 6, 2014    Updated: May 9, 2014
Description: From the CVE entry:
Cross-site scripting (XSS) vulnerability in includes/actions/InfoAction.php in MediaWiki before 1.21.9 and 1.22.x before 1.22.6 allows remote attackers to inject arbitrary web script or HTML via the sort key in an info action.
Created: May 2, 2014    Updated: December 8, 2014
Description: From the openSUSE bug report:
A remote, command execution flaw was discovered in Nagios NRPE when command arguments are enabled. A remote attacker could use this flaw to execute arbitrary commands. This issue affects versions 2.15 and older.
Created: May 1, 2014    Updated: May 7, 2014
Description: Version 1.06 of N-DJBDNS includes fixes for two denial-of-service vulnerabilities. See the ndjbdns changelog for more information.
Created: May 6, 2014    Updated: May 30, 2014
Description: From the Ubuntu advisory:
Aaron Rosen discovered that OpenStack Neutron did not properly perform authorization checks when creating ports when using plugins relying on the l3-agent. A remote authenticated attacker could exploit this to access the network of other tenants.
Created: May 2, 2014    Updated: May 7, 2014
Description: From the Red Hat advisory:
It was discovered that the mcollective client.cfg configuration file was world-readable by default. A malicious, local user on a host with the OpenShift Broker installed could read sensitive information regarding the mcollective installation, including mcollective authentication credentials. A malicious user able to obtain said credentials would potentially have full control over all OpenShift nodes managed via mcollective.
Created: May 5, 2014    Updated: July 24, 2014
Description: From the Mageia advisory:
A null pointer dereference bug in OpenSSL 1.0.1g and earlier in do_ssl3_write() could possibly allow an attacker to generate an SSL alert that would cause OpenSSL to crash, resulting in a denial of service.
Created: May 1, 2014    Updated: May 13, 2014
Description: From the Red Hat advisory:
It was found that Sheepdog, a distributed object storage system, did not properly validate Sheepdog image URIs. A remote attacker able to insert or modify glance image metadata could use this flaw to execute arbitrary commands with the privileges of the user running the glance service. Note that only OpenStack Image setups using the Sheepdog back end were affected.
Created: May 6, 2014    Updated: October 6, 2015
Description: From the Red Hat bugzilla:
It was reported that, on some distributions, PHP FPM (a FastCGI Process Manager for PHP) used a UNIX socket with insecure, default permissions. This would allow local users to execute PHP scripts with the privileges of the "apache" user. This is a similar situation to using mod_php where users can place scripts in their "~/public_html/" directory.
Original report: http://www.openwall.com/lists/oss-security/2014/04/29/5
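The usual hardening for this situation is to make the pool socket owned by, and only accessible to, the web server. The following php-fpm pool fragment is a sketch using the standard listen.owner/listen.group/listen.mode directives; the socket path and user names are illustrative:

```
; Hardened pool sketch: restrict the FastCGI UNIX socket to the
; web-server user instead of leaving it world-accessible.
[www]
listen = /run/php-fpm/www.sock
listen.owner = apache
listen.group = apache
listen.mode = 0660
```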
Created: May 7, 2014    Updated: May 22, 2014
Description: From the Fedora advisory:
Fix two security issues for services using python-fedora's TG1 and flask helpers.
The TG1 fix quotes variables that could have been used to launch an XSS attack.
The flask fix addresses OpenID Covert Redirect for web services which use flask_fas_openid to authenticate against the Fedora Account System.
Created: May 5, 2014    Updated: March 29, 2015
Description: From the Red Hat bugzilla:
The lxml.html.clean module cleans up HTML by removing embedded or script content, special tags, CSS style annotations and much more. It was found that the clean_html() function, provided by the lxml.html.clean module, did not properly clean HTML input if it included non-printed characters (\x01-\x08). A remote attacker could use this flaw to serve malicious content to an application using the clean_html() function to process HTML, possibly allowing the attacker to inject malicious code into a website generated by this application.
Created: May 2, 2014    Updated: January 6, 2015
Description: From the openSUSE bug report:
It was reported that a patch added to Python 3.2 caused a race condition whereby a file could be created with world read/write permissions instead of the permissions dictated by the original umask of the process. This could allow a local attacker who could win the race to view and edit files created by a program using this call. Note that prior versions of Python, including 2.x, do not include the vulnerable _get_masked_mode() function that is used by os.makedirs() when exist_ok is set to True.
Created: May 2, 2014    Updated: December 15, 2014
Description: From the Fedora bug tracker:
A NULL pointer dereference flaw was found in QGIFFormat::fillRect. If an application using the qt-x11 libraries opened a malicious GIF file, it could cause the application to crash.
Created: May 5, 2014    Updated: June 25, 2014
Description: From the Mageia advisory:
rxvt-unicode (aka urxvt) before 9.20 is vulnerable to a user-assisted arbitrary commands execution issue. This can be exploited by the unprocessed display of certain escape sequences in a crafted text file or program output. Arbitrary command sequences can be constructed using this, and unintentionally executed if used in conjunction with various other escape sequences.
Created: May 5, 2014    Updated: May 7, 2014
Description: From the Debian advisory:
A vulnerability has been found in the ASN.1 parser of strongSwan, an IKE/IPsec suite used to establish IPsec protected links.
By sending a crafted ID_DER_ASN1_DN ID payload to a vulnerable pluto or charon daemon, a malicious remote user can provoke a null pointer dereference in the daemon parsing the identity, leading to a crash and a denial of service.
Created: May 7, 2014    Updated: July 20, 2016
Description: From the Red Hat advisory:
It was found that the Struts 1 ActionForm object allowed access to the 'class' parameter, which is directly mapped to the getClass() method. A remote attacker could use this flaw to manipulate the ClassLoader used by an application server running Struts 1. This could lead to remote code execution under certain conditions.
Created: May 6, 2014    Updated: May 7, 2014
Description: From the Red Hat bugzilla:
Agostino Sarubbo reported on the oss-security mailing list that, on Gentoo, /var/log/varnish is world-accessible and the log files inside the directory are world-readable. This could allow an unprivileged user to read the log files.
Checking on Fedora and EPEL, /var/log/varnish is provided with 0755 permissions. These should be reduced to 0700 permissions, like /var/log/httpd.
Created: May 5, 2014    Updated: May 7, 2014
Description: From the Debian advisory:
Michael Niedermayer discovered a vulnerability in xbuffy, a utility for displaying message counts in mailbox and newsgroup accounts.
By sending carefully crafted messages to a mail or news account monitored by xbuffy, an attacker can trigger a stack-based buffer overflow, leading to an xbuffy crash or even remote code execution.
Page editor: Jake Edge
Brief items

The current development kernel was released on May 4. According to Linus: "There's a few known things pending still (pending fix for some interesting dentry list corruption, for example - not that any remotely normal use will likely ever hit it), but on the whole things are fairly calm and nothing horribly scary. We're in the middle of the calming-down period, so that's just how I like it."
That said, there are few users of remap_file_pages() out there. So few that Kirill Shutemov has posted a patch set to remove it entirely, saying "Nonlinear mappings are pain to support and it seems there's no legitimate use-cases nowadays since 64-bit systems are widely available." The patch is not something he is proposing for merging yet; it's more of a proof of concept at this point.
It is easy to see the appeal of this change; it removes 600+ lines of tricky code from the kernel. But that removal will go nowhere if it constitutes an ABI break. Some kernel developers clearly believe that no users will notice if remap_file_pages() goes away, but going from that belief to potentially breaking applications is a big step. So there is talk of adding a warning to the kernel; Peter Zijlstra suggested going a step further and requiring that a sysctl knob be set before the system call becomes active. But it would also help if current users of remap_file_pages() would make themselves known; speaking up now could save some trouble in the future.
Kernel development news

Earlier this year, SUSE announced its kGraft live-patching mechanism; shortly thereafter, developers at Red Hat came forward with their competing kpatch mechanism. The approaches taken by the two groups show some interesting similarities, but also some significant differences.
Like kGraft, kpatch replaces entire functions within a running kernel. A kernel patch is processed to determine which functions it changes; the kpatch tools (not included with the patch, but available in this repository) then use that information to create a loadable kernel module containing the new versions of the changed functions. A call to the new kpatch_register() function within the core kpatch code will use the ftrace function tracing mechanism to intercept calls to the old functions, redirecting control to the new versions instead. So far, it sounds a lot like kGraft, but that resemblance fades a bit once one looks at the details.
KGraft goes through a complex dance during which both the old and new versions of a replaced function are active in the kernel; this is done in order to allow each running process to transition to the "new universe" at a (hopefully) safe time. Kpatch is rather less subtle: it starts by calling stop_machine() to bring all other CPUs in the system to a halt. Then, kpatch examines the stack of every process running in kernel mode to ensure that none are running in the affected function(s); should one of the patched functions be active, the patch-application process will fail. If things are OK, instead, kpatch patches out the old functions completely (or, more precisely, it leaves an ftrace handler in place that routes around the old function). There is no tracking of whether processes are in the "old" or "new" universe; instead, everybody is forced to the new universe immediately if it is possible.
There are some downsides to this approach. stop_machine() is a massive sledgehammer of a tool; kernel developers prefer to avoid it if at all possible. If kernel code is running inside one of the target functions, kpatch will simply fail; kGraft, instead, will work to slowly patch the system over to the new function, one process at a time. Some functions (examples would include schedule(), do_wait(), or irq_thread()) are always running somewhere in the kernel, so kpatch cannot be used to apply a patch that modifies them. On a typical system, there will probably be a few dozen functions that can block a live patch in this way — a pretty small subset of the thousands of functions in the kernel.
While kpatch, with its use of stop_machine(), may seem heavy-handed, there are some developers who would like to see it take an even stronger approach initially: Ingo Molnar suggested that it should use the process freezer (normally used when hibernating the system) to be absolutely sure that no processes have any running state within the kernel. That would slow live kernel patching even more, but, as he put it:
The hitch with this approach, as noted by kpatch developer Josh Poimboeuf, is that there are a lot of unfreezable kernel threads. Frederic Weisbecker suggested that the kernel thread parking mechanism could be used instead. Either way, Ingo thought, kernel threads that prevented live patching would be likely to be fixed in short order. There was not a consensus in the end on whether freezing or parking kernel threads was truly necessary, but opinion did appear to be leaning in the direction of being slow and safe early on, then improving performance later.
The other question that has come up has to do with patches that change the format or interpretation of in-kernel data. KGraft tries to handle simple cases with its "universe" mechanism but, in many situations, something more complex will be required. According to kGraft developer Jiri Kosina, there is a mechanism in place to use a "band-aid function" that understands both forms of a changed data structure until all processes have been converted to the new code. After that transition has been made, the code that writes the older version of the changed data structure can be patched out, though it may be necessary to retain code that reads older data structures until the next reboot.
On the kpatch side, instead, there is currently no provision for making changes to data structures at all. The plan for the near future is to add a callback that can be packaged with a live patch; its job would be to search out and convert all affected data structures while the system is stopped and the patch is being applied. This approach has the potential to work without the need for maintaining the ability to cope with older data structures, but only if all of the affected structures can be located at patching time — a tall order, in many cases.
The good news is that few patches (of the type that one would consider for live patching) make changes to kernel data structures. As Jiri put it:
So the question of safely handling data-related changes can likely be deferred for now while the question of how to change the code in a running kernel is answered. There have already been suggestions that this topic should be discussed at the 2014 Kernel Summit in August. It is entirely possible, though, that the developers involved will find a way to combine their approaches and get something merged before then. There is no real disagreement over the end goal, after all; it's just a matter of finding the best approach for the implementation of that goal.
While it's certainly not an everyday occurrence, getting Linux running on a new CPU architecture needs to be done at times. To someone faced with that task, it may seem rather daunting—and it is—but, as Marta Rybczyńska described in her Embedded Linux Conference (ELC) talk, there are some fairly straightforward steps to follow. She shared those steps, along with many things that she and her Kalray colleagues learned as they ported Linux to the MPPA 256 processor.
When the word "porting" is used, it can mean one of three different things, she said. It can be a port to a new board with an already-supported processor on it. Or it can be a new processor from an existing, supported processor family. The third alternative is to port to a completely new architecture, as with the MPPA 256 (aka K1).
With a new architecture comes a new CPU instruction set. If there is a C compiler, as there was for her team, then you can recompile the existing (non-arch) kernel C code (hopefully, anyway). Any assembly pieces need to be rewritten. There will be a different memory map and possibly new peripherals. That requires configuring existing drivers to work in a new way or writing new drivers from scratch. Also, when people make the effort to create a new architecture, they don't do that just for fun, Rybczyńska said. There will be benefits to the new architecture, so there will be opportunities to optimize the existing system to take advantage of it.
There are several elements that are common to any port. First, you need build tools, such as GCC and binutils. Next, there is the kernel, both its core code and drivers. There are important user-space libraries that need to be ported, such as libc, libm, pthreads, etc. User-space applications come last. Most people start with BusyBox as the first application, then port other applications one by one.
To get started, you have to learn about the new architecture, she said. The K1 is a massively multi-core processor with both high performance and high energy efficiency, she said. It has 256 cores that are arranged in groups of sixteen cores which share memory and an MMU. There are Network-on-Chip interfaces to communicate between the groups. Each core has the same very large instruction word (VLIW) instruction set, which can bundle up to five instructions to be executed in one cycle. The cores have advanced bitwise instructions, hardware loops, and a floating point unit (FPU). While the FPU is not particularly important for porting the kernel, it will be needed to port user-space code.
To begin, you create an empty directory (linux/arch/k1 in her case), but then you need to fill it, of course. The initial files needed are less than might be expected, Rybczyńska said. Code is needed first to configure the processor, then to handle the memory map, which includes configuring the zones and initializing the memory allocators. Handling processor mode changes is next up: interrupt and trap handlers, including the clock interrupt, need to be written, as does code to handle context switches. There is some device tree and Kconfig work to be done as well. Lastly, adding a console to get printk() output is quite useful.
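Concretely, the initial skeleton described above might look something like the following. This layout is illustrative only (file names vary from one architecture to another; it is not the actual K1 tree):

```
arch/k1/
    Kconfig                 # arch-specific configuration options
    Makefile
    boot/dts/               # device tree sources
    kernel/
        setup.c             # processor and memory-map configuration
        entry.S             # interrupt/trap entry, context switch
        time.c              # clock interrupt
        early_printk.c      # console for printk() output
```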
To create that code, there are a couple of different routes. There is not that much documentation on this early boot code, so there is a tendency to copy and paste code from existing architectures. Kalray used several as templates along the way, including MicroBlaze, Blackfin, and OpenRISC. If code cannot be found to fit the new architecture, it will have to be written from scratch. That often requires reading other architecture manuals and code—Rybczyńska can read the assembly language for several architectures she has never actually used.
There is a tradeoff between writing assembly code vs. C code for the port. For the K1, the team opted for as much C code as possible because it is difficult to properly bundle multiple instructions into a single VLIW instruction by hand. GCC handles it well, though, so the K1 port uses compiler built-ins in preference to inline assembly. She said that the K1 has less assembly code than any other architecture in the kernel.
Once that is all in place, at some point you will get the (sometimes dreaded) "Failed to execute /init" error message. This is actually a "big success", she said, as it means that the kernel has booted. Next up is porting an init, which requires a libc. For the K1, they ported uClibc, but there are other choices, of course. She suggested that the first versions of init be statically linked, so that no dynamic loader is required.
Porting a libc means that the kernel-to-user-space ABI needs to be nailed down. At program startup, which values will be in what registers? Where will the stack be located? And so on. Basically, it required work in both the kernel and libc "to make them work together". System calls will also need to be worked on. Setting the numbers for the calls along with determining how the arguments will be passed (registers? stack?) is needed. Signals will need some work as well, but if the early applications being ported don't use signals, only basic support needs to be added, which makes things much simpler.
Kalray created an instruction set simulator for the K1, which was helpful in debugging. The simulator can show every single instruction with the value in each register. It is "handy and fast", Rybczyńska said, and was a great help when doing the port.
Eventually, booting into the newly ported init will be possible. At that point, additional user-space executables are on the agenda. Again she suggested starting out with static binaries. Work on the dynamic loader required "lots of work on the compiler and binutils", at least for the K1. Also needed is porting or writing drivers for the main peripherals that will be used.
Rybczyńska stressed that testing is "easily forgotten", but is important to the process. When changes are made, you need to ensure you didn't break things that were already working. Her team started by trying to create unit tests from the kernel code, but determined that was hard to do. Instead, they created a "test init" that contained some basic tests of functionality. It is a "basic validation that all of the tools, libc, and the kernel are working correctly", she said.
Further testing of the kernel is required as well, of course. The "normal idea" is to write your own tests, she said, but it would take months just to create tests for all of the system calls. Instead, the K1 team used existing tests, especially those from the Linux Test Project (LTP). It is a "very active project" with "tests for nearly everything", she said; using LTP was much better than trying to write their own tests.
Continuing on is just a matter of activating new functionality (e.g. a new kernel subsystem, filesystem, or driver), fixing things that don't compile, then fixing any functionality that doesn't work. Test-driven development "worked very well for us".
As an example, she described the process undertaken to port strace, which she called a nice debugging tool that is much less verbose than the instruction set simulator. But strace uses the ptrace() system call and requires support for signals. Up until that point, there had not been a need to support signals. The ptrace() tests in LTP were run first, then strace was tried. It compiled easily, but didn't work as there were architecture-specific pieces of the ptrace() code that still needed to be implemented.
Supporting a new architecture requires new code to enable the special features of the chip. For Kalray, the symmetric multi-processing (SMP) and MMU code required a fair amount of time to design and implement. The K1 also has the Network-on-Chip (NoC) subsystem, which is brand new to the kernel. Supporting that took a lot of internal discussion to create something that worked correctly and performed reasonably. The NoC connects the groups of cores, so its performance is integral to the overall performance of the system.
Once the port matures, building a distribution may be next up. One way is to "do it yourself", which is "fine if you have three packages", Rybczyńska said. But if you have more packages than that, it becomes a lot less fun to do it that way. Kalray is currently using Buildroot, which was "easy to set up". The team is now looking at the Yocto Project as another possibility.
The team learned a number of valuable lessons in doing the port. To start with, it is important to break the work up into stages. That allows you to see something working along the way, which indicates progress being made, but it also helps with debugging. "Test test test", she said, and do it right from the beginning. There are subtle bugs that can be introduced in the early going and, if you aren't testing, you won't catch them early enough to easily figure out where they were introduced.
Wherever possible, use generic functionality already provided by the kernel or other tools; don't roll your own unless you have to. Adhere to the kernel coding style from the outset. She suggested using panic() and exit() liberally, putting one in every non-implemented function; that helps avoid wasting time debugging problems that aren't actually problems. Code that won't compile if the architecture is unknown should also be preferred: if an application has architecture dependencies, failing to compile is much easier to diagnose than some strange runtime failure.
Spend time developing advanced debugging techniques and tools. For example, they developed a visualization tool that showed kernel threads being activated during the boot process. Reading the documentation is important, as is reading the comments in the code. Her last tip was that reading code for other platforms is quite useful, as well.
With that, she answered a few questions from the audience. The port took about two months to get it to boot the first init, she said; the rest "takes much more time". The port is completely self-contained as there are no changes to the generic kernel. Her hope is to submit the code upstream as soon as possible, noting that being out of the mainline can lead to problems (as they encountered with a pointer type in the tty functions when upgrading to 3.8). While Linux is not shipping yet for the K1, it will be soon. The K1 is currently shipping with RTEMS, which was easier to port, thus it filled the operating system role while the Linux port was being completed, she said.

Last week's article on "Linux and the Internet of Things" discussed the challenge of shrinking the kernel to fit on to computers that, by contemporary standards, are laughably underprovisioned. Shortly thereafter, the posting of a kernel-shrinking patch set sparked a related discussion: what needs to be done to get the kernel to fit into tiny systems and, more importantly, is that something that the kernel development community even wants to attempt?
The patch set in question was a 24-part series from Andi Kleen adding an option to build a minimally sized networking subsystem. Andi is looking at running Linux on systems with as little as 2MB of memory installed; on such systems, the Linux kernel's networking stack, which weighs in at about 400KB for basic IPv4 support, is just too big to shoehorn in comfortably. By removing a lot of features, changing some data structures, and relying on the link-time optimization feature to remove the (now) unneeded code, Andi was able to trim things down to about 170KB. That seems like a useful reduction, but, as we will see, these changes have a rough road indeed ahead of them before any potential merge into the mainline.
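In build-system terms, such an option would presumably surface as a Kconfig symbol gating the removed features. The fragment below is a sketch of how that might look; the symbol name and wording are illustrative, not taken from Andi Kleen's actual patches.

```
# Hypothetical Kconfig sketch for a minimal networking build.
config NET_MINIMAL
	bool "Reduced-footprint network stack"
	depends on NET && EXPERT
	help
	  Trade features and performance for size: compile out uncommon
	  socket options, shrink core data structures, and rely on
	  link-time optimization to discard unused code paths.  Intended
	  for systems with only a few megabytes of memory.
```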
Some of the changes in Andi's patch set include:
The above list could be made much longer, but the point should be apparent by now: this patch set was not welcomed by the networking community with open arms. This community has been working with a strong focus on performance and features on contemporary hardware; networking developers (some of them, at least) do not want to be bothered with the challenges of trying to accommodate users of tiny systems. As Eric Dumazet put it:
The networking developers also do not want to start getting bug reports from users of a highly pared-down networking stack wondering why things don't work anymore. Some of that would certainly happen if a patch set like this one were to be merged. One can try to imagine which features are absolutely necessary and which are optional on tiny systems, but other users solving different problems will come to different conclusions. A single "make it tiny" option has a significant chance of providing a network stack with 99% of what most tiny-system users need — but the missing 1% will be different for each of those users.
Still, pointing out some difficulties inherent in this task is different from saying that the kernel should not try to support small systems at all, but that appears to be the message coming from the networking community. At one point in the discussion, Andi posed a direct question to networking maintainer David Miller: "What parts would you remove to get the foot print down for a 2MB single purpose machine?" David's answer was simple: "I wouldn't use Linux, end of story. Maybe two decades ago, but not now, those days are over." In other words, from his point of view, Linux should not even try to run on machines of that class; instead, some sort of specialty operating system should be used.
That position may come as a bit of a surprise to many longtime observers of the Linux development community. As a general rule, kernel developers have tried to make the system work on just about any kind of hardware available. The "go away and run something else" answer has, on rare occasion, been heard with regard to severely proprietary and locked-down hardware, but, even in those cases, somebody usually makes it work with Linux. In this case, though, there is a class of hardware that could run Linux, with users who would like to run Linux, but some kernel developers are telling them that there is no interest in adding support for them. This is not a message that is likely to be welcomed in those quarters.
Once upon a time, vendors of mainframes laughed at minicomputers — until many of their customers jumped over to the minicomputer market. Minicomputer manufacturers treated workstations, personal computers, and Unix as toys; few of those companies are with us now. Many of us remember how the proprietary Unix world treated Linux in the early days: they dismissed it as an underpowered toy, not to be taken seriously. Suffice to say that we don't hear much from proprietary Unix now. It's a classic Innovator's Dilemma story of disruptive technologies sneaking up on incumbents and eating their lunch.
It is not entirely clear that microscopic systems represent this type of disruptive technology; the "wait for the hardware to grow up a bit" approach has often worked well for Linux in the past. It is usually safe to bet on computing hardware increasing in capability over time, so effort put into supporting underpowered systems is often not worth it. But we may be dealing with a different class of hardware here, one where "smaller and cheaper" is more important than "more powerful." If these systems can be manufactured in vast numbers and spread like "smart dust," they may well become a significant part of the computing substrate of the future.
So the possibility that tiny systems could be a threat to Linux should certainly be considered. If Linux is not running on those devices, something else will be. Perhaps it will be a Linux kernel with the networking stack replaced entirely by a user-space stack like lwIP, or perhaps it will be some other free operating system whose community is more interested in supporting this hardware. Or, possibly, it could be something proprietary and unpleasant. However things go, it would be sad to look back someday and realize that the developers of Linux could have made the kernel run on an important class of machines, but they chose not to.
Patches and updates
Core kernel code
Page editor: Jonathan Corbet
The Debian project is engaged in a debate over how the Debian menu is presented in various desktop environments and what policy for application packaging should be derived from that decision. The recent trend in desktop environments has been away from a master "applications menu," but Debian's response to that shift has as much to do with internal project-management processes as it does with updating packaging recommendations.
Back in May 2013, the issue of moving away from the Debian menu (which contains a categorical hierarchy of application launchers, as was a common feature of most desktop environments in years past) was first raised in a bug filed by Sune Vuorela. The original goal of the Debian menu was to provide a consistent way to access installed applications, regardless of which of Debian's many available desktop environments was in use. Over the years, however, the style and interface conventions of those various environments have shifted away from the one-master-menu approach.
Vuorela's bug report noted that the Debian menu was now hidden by default in GNOME, that a similar change was under consideration for KDE, and that many application packages had stopped including the menu-entry description files needed by the Debian menu in the first place. The recommended change was to soften the language found in Debian's official policy manual that told application packagers they should create a menu entry; instead, creating a menu entry would be an option, but the more important factor would be creating a .desktop file that could be used by the search-driven interfaces of GNOME Shell and other recent desktop environments (though it could also be used by menu-driven environments).
Of course, the Debian menu itself has had both fans and critics over the years. When shown, it frequently presents duplicates of entries also found in the GNOME or KDE menu structures, which seems superfluous. On the other hand, Debian can control the content of its menu to a much greater extent, which makes it more predictable—it is always possible to find a utility in the Debian menu, the argument goes, even if the same utility is removed or reclassified into a hard-to-find spot in the desktop environment's menu structure.
But much of the work of making the Debian menu usable fell on application packagers, who were tasked with creating the menu file for each package, and the Debian menu-entry format uses a distribution-specific syntax. Meanwhile, GNOME, KDE, and the vast majority of other desktop environments have agreed on the Freedesktop.org desktop entry specification for .desktop files, which covers most of the same metadata for each application.
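To illustrate the difference, here is roughly the same application described in each format. The file formats are real; the package name, paths, and command are invented for the example:

```
# Debian menu entry (distribution-specific syntax),
# e.g. /usr/share/menu/myeditor:
?package(myeditor): needs="X11" section="Applications/Editors" title="My Editor" command="/usr/bin/myeditor"

# Freedesktop.org desktop entry,
# e.g. /usr/share/applications/myeditor.desktop:
[Desktop Entry]
Type=Application
Name=My Editor
Comment=Edit text files
Exec=myeditor %F
Icon=myeditor
Categories=Utility;TextEditor;
```

Both files carry similar metadata, but only the second is understood by the search-driven launchers of GNOME Shell and its contemporaries — hence the pressure to standardize on it.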
Eventually, a proposal formed to recommend that packagers migrate away from the Debian menu-entry format to the Freedesktop.org .desktop format—in essence, deprecating the Debian menu in favor of the Freedesktop.org system (however it might be implemented in any particular desktop environment). In early 2014, the proposal picked up steam again (after several months of inactivity) as a possible change to be included in the upcoming Debian "jessie" release, and on February 15, Charles Plessy checked in the change to the policy manual.
Not everyone was satisfied, however. Bill Allombert (who along with Andreas Barth, Jonathan Nieder, and Russ Allbery, is a Policy Editor) reverted the change on February 25, arguing that a number of objections to the proposal had not been addressed, and, thus, "there is no consensus in favor of this change, so committing to policy was premature." Vuorela and others disagreed, insisting that all of the objections listed by Allombert had been answered, and concluding that "there is a consensus. Note that consensus doesn't mean unanimous."
Plessy contended that Allombert had sat out of the discussion over the preceding year, and that a majority of Debian Developers and Policy Editors had approved it. Stepping in after the decision and reverting it, he said, amounted to single-handedly vetoing the change. Plessy suggested taking the issue to the Debian Technical Committee (TC) for a resolution, and after another reply from Allombert did not seem to change matters, filed a bug on the subject with the TC on March 14.
While escalating the issue to the TC might seem to focus on the issue of whether one member of a team (in this case, the Policy Editors) can overrule a consensus reached by the others, that is not in fact the direction that the TC discussion took. As Ian Jackson said:
So, no. The TC will not make decisions about the content of policy on the basis of an adjudications about the policy process. We will rule on the underlying question(s), on the merits.
Instead, the TC discussion returned to the original question of whether or not the Debian menu remained a useful feature even in light of the increasing usage of Freedesktop.org standards by desktop environments. As was the case in the original bug's discussion thread, there are arguments from many different perspectives regarding situations where the "traditional" Debian menu might be better than a modern Freedesktop.org menu. They included technical concerns, like the fact that the Debian menu expects application icons in XPM format and at no greater size than 32-by-32 pixels (both of which make for a less-than-pleasing image on modern displays), and implementation concerns, such as who would be tasked with creating the .desktop files for the various application packages.
Also raised as a concern is the degree to which the policy manual specifies what developers and packagers must do versus what they optionally can do. Both the old policy wording and the patched version say that shipping applications "should" include a menu entry in the specified format. But "should" can be interpreted either as a requirement—meaning that an application without the menu entry it "should" have will not be included—or as a recommendation that does not necessarily demand every application conform.
And, on that point, the TC does not yet appear to have arrived at a consensus. Debian has thousands of packages; if the policy is changed in such a way that developers and packagers are required to create .desktop files, the result could potentially be thousands of hours of work. Imposing such a requirement with a simple wording change does not seem to be an ideal move.
Plessy, for his part, framed his request to the TC as a matter of process rather than of menus:

In that context, I am asking the TC to a) acknowledge that the changes to section 9.6 after the Policy changes process was followed accordingly, and b) ask for Bill's commit 378587 be reverted. In particular, in the absence of Bill's contribution to the resolution of our conflict, I am asking the TC to not discuss the menu systems and focus instead on correcting Bill's misbehaviour.

What is at a stake here is not the Debian Menu system, it is the fact that in Debian, it takes 5 minutes for one person to block one year of effort and patience from multiple other persons.
At this point, it is not clear what the TC will do next. Over the course of the "should" discussion, it became clear that Debian's policy manual is not entirely consistent in the wording with which it describes requirements and recommendations. Worse yet, it became clear that not all members of the project agree on how even a single word should be interpreted. Whether or not further discussion can resolve those issues is hard to say, but Debian, at least, is no stranger to lengthy debates.
Newsletters and articles of interest
Page editor: Rebecca Sobol
Toward the end of 2012, Google switched the Bluetooth stack in Android—for reasons unknown, though there has always been speculation about licensing—from the GPL-licensed BlueZ to the Apache-licensed BlueDroid. That switch was for the release of Android 4.2 (one of the Jelly Bean releases). Since the switch, though, Intel and the BlueZ project have been working to restore the option of running Android with BlueZ, which provides a whole raft of additional features lacking in BlueDroid. Marcel Holtmann of the Intel Open Source Technology Center reported on the BlueZ option at the Android Builders Summit (ABS) held in San Jose, CA, April 29–May 1.
After the October 2012 Android release with BlueDroid, the initial reviews of the new stack were "not that good", Holtmann said, which is not a huge surprise for a completely new Bluetooth stack. As it turns out, based on Google's February 2014 numbers, 73% of Android devices are actually still running BlueZ because they are running earlier releases. The initial release of Google Glass ran BlueZ as well.
From the perspective of the BlueZ developers, one good thing that came out of the BlueDroid switch was the addition of a Bluetooth hardware abstraction layer (HAL). That meant that Google engineers had to think about and define what features to expose and how to expose them. In the end, Google added a Bluetooth Core HAL and a Profile HAL, he said.
When it was added, BlueDroid was said to be "tiny", but it turns out to be 286K lines of C and C++ code. There are a number of limitations to BlueDroid, Holtmann said. For example, the entire stack, which includes the Bluetooth service, the HAL layers, and BlueDroid itself, runs in a single process.
But there is much more to be concerned about with BlueDroid, according to Holtmann. He had a list of more than a dozen items that are missing or sub-par in BlueDroid. To start with, every new hardware device that will be supported needs to fork the source of BlueDroid. There is build-time configuration for the stack, including which profiles are included and what hardware features are enabled. So there is no single BlueDroid tree with support for multiple hardware platforms. The Android open source project (AOSP) only provides trees for three Nexus devices (4, 5, and 7) which are based on either Broadcom or Qualcomm hardware.
Anything more complicated than supporting the serial (UART) interface to the hardware requires that a kernel shim driver be written, which means that devices connected via USB, PCI, SPI, etc. will require drivers to be written. In addition, the bus power management is done in user space, which we have learned is "not a good idea".
BlueDroid is a lot of new code being introduced into devices, without any kind of known security audit. The Git history for the repository starts in December 2012 and has a grand total of 140 commits. Worse yet, those commits are often huge and don't have commit messages that explain what is being done or why. There is little documentation provided, essentially just the examples, and there are no unit tests.
The stack itself suffers from audio latency problems. Part of that is due to the large number of context switches required for handling every audio frame, host controller interface (HCI) packet, network packet, and other communications. The initial release of BlueDroid had no support for debugging; recompiling was required to get debugging output. Things are a bit better with the Android 4.4 (KitKat) release of BlueDroid, though, he said.
There is no Intel architecture (IA) optimization for the required SBC audio codec, nor support for other Intel-only features, which is obviously a problem for Intel and its customers, he said. The 64-bit support for BlueDroid is unclear as well. Much of it has only been compile-tested on ARM, Holtmann said.
Beyond all of that, BlueDroid is not Bluetooth-certified; the only certified stack that also uses the Bluetooth HALs is Broadcom's proprietary AirForceBT. Support for Bluetooth 4.1 is left up to the device makers; BlueDroid only provides code for Bluetooth 4.0.
Given all of that, one might ask why Google switched away from BlueZ (which doesn't have most of the problems identified), as one audience member did. Holtmann said that he has heard rumors about why the switch was made, but that he didn't want to spread them. He is, however, interested in finding out, and suggested that someone from Google should explain the choice. Google attendees were in short supply at ABS; if any were present at the talk, they didn't seem willing (or able) to answer that particular question.
There are now two different ways to support BlueZ features on Android devices. The first is a port of BlueDroid to use the existing Linux kernel drivers for Bluetooth. That allows devices to use all of the existing drivers, so Bluetooth is not limited to just the UART interface, as USB, SDIO, PCMCIA (if you can still find such devices), and others are available. There is a "tiny shim layer" of around 100 lines of kernel code that the upper layers of BlueDroid talk to.
That alternative is called "BlueDroid with HCI user channel" and "it works pretty well", Holtmann said. It allows a few of the problems identified with BlueDroid to be crossed off the list (user-space power management, only a few reference devices, new drivers required, limited debugging capabilities), but most of the rest remain. Fixing those problems is the goal of the second alternative: "BlueZ for Android".
BlueZ for Android (BfA) provides a "drop-in replacement" for BlueDroid, which means that apps do not need to change. That is also true for the HCI user channel alternative since it sits below BlueDroid. The D-Bus APIs that BlueZ normally uses have been replaced by integration with the Android Bluetooth HALs. BfA brings Bluetooth 4.1 support, as well as documentation and a wide range of tests. It supports an even dozen profiles, with the Health Device Profile (HDP) currently being worked on.
It is a low-latency stack that also supports lower-power audio. BlueZ has had 64-bit support for some time now, as well as codecs optimized for the Intel architecture. It also supports Intel's hardware advanced encryption standard (AES) processing and hardware random number generation (RDRAND instruction). The code has been used and tested in a variety of different desktop and mobile platforms over many years, including Android.
The laundry list of BlueDroid deficiencies also drops to near zero once BlueZ is swapped in. There are still too many context switches for human interface device (HID) reports and radio frequency communication (RFCOMM) streams, but the project is working on eliminating those as well. Other than that, everything on the list has been addressed.
In addition, BfA has been developed as part of the open-source BlueZ project. Its Git repository stretches back much further, with many more, well-documented commits. It is also notable that BlueZ is on its way toward switching to the LGPL. Roughly 80% of the code is already licensed that way, with more coming, though it was not clear when that job would be finished.
While it was never said in the presentation, the clear implication of Holtmann's talk was that Google made a poor choice in switching to BlueDroid. The addition of the Bluetooth HALs was good, but BlueDroid itself simply did not have the right architecture or feature set. Unless Google puts a lot of effort into BlueDroid development, it will likely fall further behind, as things like Bluetooth 4.2 are on the horizon. But it would seem that device makers already have an alternative—it will be interesting to see if (and how much) it gets used.
$ /usr/bin/git cone
git: 'cone' is not a git command. See 'git --help'.

Did you mean this?
        clone
You know DAMN WELL what I meant git, and you mock me by echoing it out right in front of me.
Version 0.14.0 of the SciPy numeric computing library for Python has been released. Changes in this version include new functions and classes for interpolation, working with multivariate random variables, signal filtering, and optimization. The release announcement also notes this is "the first release for which binary wheels are available on PyPi for OS X, supporting the python.org Python." There are also several deprecations; existing users should read the release notes for full details.
Version 3.6 of the Tor Browser Bundle (TBB) has been released. Most notably, the update includes the debut of "fully integrated Pluggable Transport support, including an improved Tor Launcher UI for configuring Pluggable Transport bridges." TBB is based on Firefox; version 3.6 is based on Firefox 24.5.
Newsletters and articles
At his blog, Henri Bergius writes about work from this week's GNOME Developer Experience hackfest in Berlin. One outcome of said hackfest is integration of the NoFlo flow-based programming environment with the GNOME APIs. "What the resulting project does is give the ability to build and debug GNOME applications in a visual way with the Flowhub user interface. You can interact with large parts of the GNOME API using either automatically generated components, or hand-built ones. And while your software is running, you can see all the data passing through the connections in the Flowhub UI." Though there is still more work to come, it is possible to develop and debug GTK+ and Clutter applications with NoFlo.
Page editor: Nathan Willis
Brief items

Organizations and companies across the technology industry and political spectrum oppose the bulk collection of data on all internet users. Reset The Net is a day of action to secure and encrypt the web to shut out the government's mass surveillance capabilities.
Articles of interest

A W3C working group is currently standardising an "Encrypted Media Extension" (EME), which will allow companies to easily plug in non-free "Content Decryption Modules" (CDM) with DRM functionality, taking away users' control over their own computers. Most DRM technologies impose restrictions on users that go far beyond what copyright and consumers' rights allow. "Today we come together for the eighth International Day Against DRM, to insist on a future without restrictions on our media. This is the largest anti-DRM event in the world, and it's growing. [Head over to DayAgainstDRM.org to take action against DRM with events, petitions and more, then meet the anti-DRM community and enjoy sales on DRM-free media.]"
Calls for Presentations

From presentations for beginners to presentations on deploying monitoring solutions in very large environments or cluster systems, the conference always offers something for everyone. As in previous years, the conference languages will be German and English. "The program committee is asking for papers and presentation proposals from anyone using or developing with Tcl/Tk (and extensions)." "As usual, we are open to talks across the layers of the graphics stack, from the kernel to desktop environments / graphical applications and about how to make things better for the developers who build them."
|May 9||June 10||Distro Recipes 2014 - canceled||Paris, France|
|May 12||July 19||Conference for Open Source Coders, Users and Promoters||Taipei, Taiwan|
|May 18||September 6||Akademy 2014||Brno, Czech Republic|
|May 19||September 5||The OCaml Users and Developers Workshop||Gothenburg, Sweden|
|May 23||August 23||Free and Open Source Software Conference||St. Augustin (near Bonn), Germany|
|May 30||September 17||PostgresOpen 2014||Chicago, IL, USA|
|June 6||September 22||Open Source Backup Conference||Köln, Germany|
|June 6||June 10||Ubuntu Online Summit 06-2014||online, online|
|June 20||August 18||Linux Security Summit 2014||Chicago, IL, USA|
|June 30||November 18||Open Source Monitoring Conference||Nuremberg, Germany|
|July 1||September 5||BalCCon 2k14||Novi Sad, Serbia|
|July 4||October 31||Free Society Conference and Nordic Summit||Gothenburg, Sweden|
|July 5||November 7||Jesień Linuksowa||Szczyrk, Poland|
If the CFP deadline for your event does not appear here, please tell us about it.
|Wireless Battle Mesh v7||Leipzig, Germany|
|OpenStack Summit||Atlanta, GA, USA|
|Samba eXPerience||Göttingen, Germany|
|ScilabTEC 2014||Paris, France|
|May 17||Debian/Ubuntu Community Conference - Italia||Cesena, Italy|
|LinuxCon Japan||Tokyo, Japan|
|PyCon Sweden||Stockholm, Sweden|
|PGCon 2014||Ottawa, Canada|
|Solid 2014||San Francisco, CA, USA|
|PyCon Italia||Florence, Italy|
|FUDCon APAC 2014||Beijing, China|
|May 24||MojoConf 2014||Oslo, Norway|
|GNOME.Asia Summit||Beijing, China|
|May 30||SREcon14||Santa Clara, CA, USA|
|Tizen Developer Conference 2014||San Francisco, CA, USA|
|PyCon Russia 2014||Ekaterinburg, Russia|
|Erlang User Conference 2014||Stockholm, Sweden|
|DockerCon||San Francisco, CA, USA|
|Distro Recipes 2014 - canceled||Paris, France|
|Ubuntu Online Summit 06-2014||online, online|
|State of the Map EU 2014||Karlsruhe, Germany|
|Texas Linux Fest 2014||Austin, TX, USA|
|2014 USENIX Federated Conferences Week||Philadelphia, PA, USA|
|USENIX Annual Technical Conference||Philadelphia, PA, USA|
|SouthEast LinuxFest||Charlotte, NC, USA|
|AdaCamp Portland||Portland, OR, USA|
|YAPC North America||Orlando, FL, USA|
|LF Enterprise End User Summit||New York, NY, USA|
|Open Source Bridge||Portland, OR, USA|
|Automotive Linux Summit||Tokyo, Japan|
|Tails HackFest 2014||Paris, France|
|Libre Software Meeting||Montpellier, France|
|SciPy 2014||Austin, Texas, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds