Linux-powered single-board computers (SBCs) are not quite a dime a dozen, although the price is dropping rapidly enough that such a milestone is not entirely out of the question. While the news in this space over the past two years has been dominated by the Raspberry Pi, there are plenty of other choices. One of the more popular options is the BeagleBone Black, which is more than a match for the Pi in terms of hardware power, and offers a software selection worth exploring, too. Determining which system is right for which task depends on weighing a lot of factors.
The BeagleBone Black is the fourth-generation SBC from the BeagleBoard project. The first and second devices were called "BeagleBoards;" the BeagleBone moniker referenced the significantly smaller physical dimensions of the third-generation product. The BeagleBone Black uses the same form factor as the preceding model, but it also includes updated components and a lower price: $45.
The basic makeup of the board is straightforward. It is powered by a Sitara AM3358 System-on-Chip (SoC) from Texas Instruments (TI), which is built around a Cortex-A8 CPU from the ARMv7 line. There is 512MB of DDR3 RAM on board, plus 2 GB of built-in flash storage and a slot for an additional microSD card. Connectivity includes a wired Ethernet port, one mini USB client port, one full-sized USB host port, and a micro HDMI connector that outputs both audio and video.
But in these modern times, of course, single-board computers need to provide hardware expansion capabilities. This is where the BeagleBone Black first begins to differentiate itself. The long sides of the board include a full spread of pin headers, right up to where the PCB mounting holes get in the way. In total, there are 92 pins available, which is considerably more than most competing boards offer. Of those, 65 are available for digital I/O; eight of the 65 can serve as pulse-width modulators (PWMs) for analog output, and four can be configured to serve as timers. Various subsets of these digital pins can also be configured to provide two I2C ports, two SPI ports, and two serial ports (there is also a separate serial debugging header that cannot be used for general purposes).
In addition to the digital I/O pins, there are seven analog input pins, plus an assortment of 3.3V, 5V, and ground pins. If that is not enough, there is also a pad on the back of the board where enterprising users can solder on a JTAG debugging header. Other nice touches include a 5.5mm DC power connector (the device can run off of power supplied by the USB Client port, but having two options is more convenient), hardware power and reset switches, and a built-in array of four programmable LEDs.
Of course, the bundled BoneScript JavaScript library is not the only programming option. One can also SSH into the board and explore it from a terminal like any other Linux system, or hook up USB peripherals and an HDMI monitor and use Ångström's lightweight graphical environment. Out of the box, the BeagleBone Black I tested (revision A5C, for the record) runs kernel 3.8.13. Access to the array of expansion pin headers is simple, using sysfs. The traditional first project is to blink an external LED; to do this, one attaches the LED's anode to a GPIO pin header and the cathode to a nearby ground header, then exports the GPIO pin:
echo 60 > /sys/class/gpio/export
brings up pin number "12" on the left-side expansion header, so that it is accessible as /sys/class/gpio/gpio60 (the GPIO number for the pin has to be looked up in the documentation). Set it as an output pin with:
echo out > /sys/class/gpio/gpio60/direction
and the LED can be flashed on and off by echoing 1 or 0 to /sys/class/gpio/gpio60/value. The only really tricky part of the process is figuring out the correct GPIO number; the numbers on the expansion header do not map cleanly to the various GPIO numbers because of all the positive voltage, ground, and other assorted pins mixed in—and because the board tries to spread the pins around evenly to provide some flexibility when attaching components. The reference manual PDF has a table that provides the correct numbers.
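The steps above can be collected into a small shell function. This is a sketch only: GPIO 60 is just the example pin used above (the right number for any given header position must be looked up in the reference manual), and the sysfs root is parameterized here purely so the function can be exercised somewhere other than a real board.

```shell
# blink_led: flash an LED attached to a BeagleBone GPIO pin via sysfs.
#   $1 = GPIO number (from the reference manual's pin table)
#   $2 = number of blinks (default 5)
#   $3 = sysfs GPIO root (default /sys/class/gpio; overridable for testing)
blink_led() {
    pin="$1"
    blinks="${2:-5}"
    root="${3:-/sys/class/gpio}"

    # Export the pin only if sysfs has not already created its directory
    [ -d "$root/gpio$pin" ] || echo "$pin" > "$root/export"

    # Configure the pin for output
    echo out > "$root/gpio$pin/direction"

    # Toggle the pin's value to blink the LED
    i=0
    while [ "$i" -lt "$blinks" ]; do
        echo 1 > "$root/gpio$pin/value"
        sleep 0.5
        echo 0 > "$root/gpio$pin/value"
        sleep 0.5
        i=$((i + 1))
    done
}
```

On the board itself, `blink_led 60 10` would flash an LED wired to that pin ten times; note that writing to /sys/class/gpio generally requires root privileges.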
As exhilarating as flashing an LED can be, eventually one needs more from one's hardware, which is where BeagleBone capes come into play. Capes are the equivalent of Arduino shields: pin-compatible circuit boards that fit into the expansion headers on the BeagleBone itself. Most capes have expansion headers on top to match the pins on the bottom that fit into the BeagleBone's headers; that way multiple capes can be stacked, and the pin compatibility means a +5V location on one board does not get turned into a ground pin by a board in between.
Currently the crop of BeagleBone Black capes is somewhat limited because the "Black" model introduced some variations over the original BeagleBone (which was white in color, although it is usually just referred to as "BeagleBone"). The eLinux wiki includes an up-to-date page of available capes, with those that are compatible with the BeagleBone Black marked as such. The simpler capes (such as those providing a breadboard or through-hole prototyping board) are compatible with both models; there are also Black-compatible capes available for attaching small LCD screens, serial ports, GPS receivers, electrical relays, and camera modules.
More specialized capes are slowly appearing, too; there is a CAN Bus cape (for automotive usage), an unmanned aerial vehicle (UAV) altitude-and-orientation cape, and a cape designed to run a RepRap 3D printer. As with Arduino, most of the capes are community-designed products, so what is available depends mostly on what itches need scratching. However, CircuitCo (which manufactures the original BeagleBone and the Black) has been systematically taking Creative Commons–licensed capes from the original BeagleBone and updating the designs to match the Black model's pinout, so in many cases a refreshed cape is on the way if it is not already available. Since the original BeagleBone is no longer being manufactured (and was twice the price of the Black), the old capes will probably disappear from production anyway.
On the software side, updates to the Ångström release on the board are available from the beagleboard.org site. Separate kernel sources are available on GitHub for those who cannot wait for a new release, although only minor 3.8 updates are available now. The kernel used is the mainline Linux kernel, but it may be simpler to recompile for the device when starting from the exact sources and configuration used in the official build; the official kernel is also updated periodically with support for new capes.
Perhaps more interesting is the array of other Linux distributions available. Ubuntu, Debian, Arch, Gentoo, Sabayon, and Fedora releases are all available. Although the board is not an officially supported platform for these distributions, the relatively recent ARM revision and simple hardware of the BeagleBone evidently make an ARM distribution relatively easy to get running, either by overwriting Ångström on the internal storage or by installing an image on a microSD card (which is faster to experiment with, since overwriting the onboard storage can take close to an hour). For the more adventurous, there is also an Android port available (from TI), as well as QNX and FreeBSD releases.
One reason installing outside distributions is comparatively simple is that the BeagleBone Black uses standard distribution components like the U-Boot bootloader, and that it does not require binary blobs to boot. The graphics chip is a PowerVR GPU that does come with a binary-only OpenGL ES firmware blob, but the board will run non-OpenGL applications fine without it.
The binary blob issue is one of the more frequently cited areas in which BeagleBone supporters feel their hardware has an advantage over the Raspberry Pi, which by size and price is the nearest competitor (and one which gets noticeably more news attention). The Pi, of course, uses a Broadcom BCM2835 SoC with an open source user-space component that speaks to a proprietary firmware blob. More importantly, though, the Pi uses a non-standard boot sequence that actually starts up the board's VideoCore GPU first, and the binary VideoCore OS subsequently loads the Linux operating system.
There are other differences as well, starting with the fact that the Pi uses a generation-older ARMv6 core at a lower clock speed. The Pi also includes expansion headers, but fewer of them than the BeagleBone Black, and some of them can be difficult to access. From a practical standpoint, the BeagleBone Black's DC power plug is another nice touch, since the Pi can only be powered over its micro USB connector. The Pi also lacks a hardware reset switch, which is inconvenient. In addition, although both boards are roughly the same size as an Altoids mint tin (which in recent years has become the standard international unit for circuit-board size), the BeagleBone is an exact fit, while the Pi's slightly larger size and sharp corners make it a mismatch. Finally, BeagleBone capes are designed to be stackable, while Pi expansion boards are generally not stackable due to the asymmetry of the expansion headers (on a side note, it is also odd that Pi expansion boards do not have an official nickname akin to "capes" or "shields;" "toppings" or "meringues" would seem to be the obvious choice).
On the other hand, Pi supporters will be quick to point out that the Pi has nice features of its own, such as a dedicated camera module connector using the standard Camera Serial Interface (CSI), built-in support for Display Serial Interface (DSI) output, full-size HDMI, RCA video out, 3.5mm audio out, and the availability of hardware multimedia decoding.
It is definitely true that the Pi offers an easier multimedia experience than the BeagleBone Black. The Pi has more connector options, and its GPU is capable of handling 1080p video—capable enough that there are several active media center distribution options for the Pi.
That is why it always comes down to matching the product with the project. The BeagleBone Black will likely not be useful as an XBMC or video gaming front end for the home theater, while the Pi is considerably more limited in how many sensors, servos, and hardware relays it can control. For free software advocates, the BeagleBone Black product is not quite devoid of binary blobs, but it is possible to boot and run it with free software.
Not everyone cares about the presence or absence of binary blobs, of course. And no doubt the Raspberry Pi Foundation has already heard a lot about this subject; it seems plausible that the virtually inevitable Pi follow-up will rely less on proprietary firmware. The BeagleBone Black can exert some market pressure on the Pi in that area, demonstrating that an almost entirely free-software SBC can be made today.
That is fair play, though, since the success of the Pi clearly pushed the reduction in size and price seen in the Black as compared to earlier BeagleBoard/BeagleBone revisions. Earlier BeagleBoards were noticeably bigger than the Pi, and the first BeagleBone retailed at $89. At $45, the Black is a little more expensive than the Pi, but not by much. There is also a gray area between the "hobbyist" uses targeted by the BeagleBone Black and the "educational" projects envisioned by the Pi, of course. But any cheap Linux-powered SBC is an educational opportunity, which can quickly turn into a hobby—or more.
Unlike several of the other well-known web browsers, Firefox runs a single process that handles the browser application and the rendering of web content. This is not a policy decision; Firefox has always been single-process. But in 2008, Internet Explorer and Google Chrome both adopted a multiprocess architecture. Mozilla has had restructuring Firefox to also run in multiple processes on its to-do list for quite a while, but the importance of other projects (and the inherent complexity of such a deep change) often kept it on the back burner. Fortunately, that situation has changed in recent months, and on December 5 the first experiments with multiprocess Firefox were made available to the public at large.
There are several potential benefits to splitting Firefox from a single process into several. For example, running the user interface and the rendering of tab content in separate processes would allow the interface to remain responsive even if a complex page takes a long time to load. Similarly, separate content processes would make better use of multiple CPU cores. Separate processes can also add more layers of security: a malicious site would have a harder time exploiting a bug in Firefox itself to attack the host system, and the OS can sandbox the process rendering page content. But perhaps the most visible advantage would be to isolate crashes and hangs that occur when rendering a page; one page process crashing could leave Firefox itself running, and if each tab runs in a separate process, other pages could survive a crash as well.
Mozilla has known about the benefits for years, of course; the Electrolysis project started exploring a multiprocess redesign back in 2009. In 2010, Firefox 3.6.4 introduced out-of-process plugins (OOPP), which isolated Flash, Java, and the like into a separate process, plugin-container. But Mozilla put Electrolysis on hold in 2011, saying that other work had to take precedence—primarily work on existing responsiveness problems like sluggishness from memory leaks, event loop tuning, garbage collection, and improving the performance of the Places database.
Ironically, though, one of Mozilla's other recent efforts spawned a resurgence in Electrolysis development: the "Boot to Gecko" mobile platform now known as Firefox OS. Firefox OS needed multiple processes for multiple applications, as well as a reliable inter-process communication (IPC) mechanism, which were imported from Electrolysis. In early 2013, serious work on Electrolysis resumed, and was even the subject of a summer internship.
On December 5, Bill McCloskey wrote a blog post explaining the scope and design of the work, and announced that multiprocess was now a configuration option in Firefox nightly builds.
To take the new feature for a spin, users need to launch the nightly build, visit the about:config settings page, and set the browser.tabs.remote preference to true. Upon restart, Firefox will run the user interface in one process and render web content in another. For now, all web content (i.e., all tabs in all windows) runs in one process, which McCloskey described as a decision to start conservatively, although eventually the plan is to utilize multiple content processes. In particular, he notes that it will take some work to optimize memory consumption in multiprocess mode.
In the nightly build, tab titles are underlined to indicate that multiprocess has been enabled, but users should really only encounter the feature if a content process crashes. In that case, a "Tab crashed, try again?" message appears, akin to the message currently shown when a plugin crashes, but the parent Firefox process continues to run undisturbed. For now, basic navigation, search, and bookmark features work in multiprocess mode, but page printing, saving pages locally, and the web developer tools do not. Extensions are a gamble; some work fine, but some do not. I tested the nightly for an hour or so, and apart from the underlined tab titles, did not observe much difference. That should not be taken as a scientifically significant study, of course; I followed instructions to test with a separate Firefox profile to minimize the odds of corrupting my existing profile data, and although browsing did seem faster, that could just as easily be the result of having no extensions or profile data (bookmarks and history in particular).
There is some reference documentation for many IPC messages online, though there is not a formal API description. McCloskey noted that there are a great many places where IPC is required (which is part of what makes the project so intrusive), although in many cases the mechanism simply has to forward a message from the UI to the Gecko rendering engine. There is additional documentation on the message manager framework itself, and Mozilla's Tim Taubert wrote an introduction to the IPC message-passing system in August.
The current Firefox nightlies include some workarounds like object wrappers that allow the parent process to sit and wait for a response from the content process. This is a no-no from the responsiveness standpoint, but the long-term fix means rewriting Firefox to be completely asynchronous, which is far from a simple task. The object wrappers are only used where lag in responsiveness is unlikely to be observed by the user, McCloskey said, but they may be phased out eventually anyway.
Of course, how the internal message passing works is likely to be of far more interest to Firefox extension authors than to the general public, but add-on developers represent an important sector to Mozilla. As it stands, the multiprocess nightlies break a lot of extensions: extensions are not web content, so they run inside the parent process, which in turn means they cannot directly access the content process as they can in normal Firefox releases. McCloskey said that Mozilla is committed to working with extension developers to adapt their code, and that he hopes to follow up on the subject with a series of blog posts, and even update some popular extensions himself.
For users, there is no set ETA for when multiprocess Firefox will become the norm. As the Electrolysis team notes, this is a large project with a lot of pieces to consider, from memory performance to add-on compatibility. But the benefits certainly outweigh the costs, so it is clear that at some point in the future, users will get to run Firefox without worrying that one bad page will take down the entire application—or, worse, lead to a serious security breach.
Here is LWN's sixteenth annual timeline of significant events in the Linux and free software world for the year. As per tradition, we will divide the timeline into quarters; this is our account of July–September 2013. January through March was covered two weeks ago and April through June last week; timelines for the final quarter of the year will appear next week.
There are almost certainly some errors or omissions; if you find any, please send them to email@example.com.
LWN subscribers have paid for the development of this timeline, along with previous timelines and the weekly editions. If you like what you see here, or elsewhere on the site, please consider subscribing to LWN.
For those readers in a historical mood, our timeline index page includes links to the previous timelines and other retrospective articles that date all the way back to 1998.
Kernel 3.10 is released [technically, late on Sunday, June 30]. With 13,637 changesets, 3.10 is the busiest kernel development cycle seen to date (announcement; development statistics; merge window summaries 1, 2, 3; KernelNewbies page).
EuroPython is held in Florence, July 1 to 8 (LWN coverage).
Debian launches sources.debian.net, a public web repository of all Debian source code (announcement).
Qt 5.1 is released (announcement).
GNU Radio 3.7.0 is released (announcement).
Version 0.7 of the Rust language is released (announcement).
GNU Health 2.0 is released (announcement).
The Fedora community mourns the loss of longtime contributor Seth Vidal (blurb).
Akademy is held in Bilbao, July 13 to 19 (LWN coverage).
Wayland / Weston 1.2 is released (announcement).
Community Leadership Summit 2013 is held in Portland, July 20 to 21 (LWN article).
Open source camera firmware project Magic Lantern implements a feature not even the official Canon firmware can perform: the ability to shoot one frame with two interleaved exposures (LWN article).
Android 4.3 is released (announcement).
Wine 1.6 is released (announcement).
The Razor and LXDE-Qt desktop projects decide to merge (announcement).
Apache OpenOffice 4.0 is released (announcement).
Version 2013.07 of the U-Boot bootloader is released, adding support for cryptographically verified boot (announcement).
FOSS news site The H shuts down (announcement).
OSCON is held in Portland, July 22 to 26 (LWN article).
LibreOffice 4.1 is released (announcement).
GUADEC is held in Brno, August 1 through 8, coinciding with a record heat wave in Central Europe (LWN coverage).
Lead developer Jean-Baptiste Quéru leaves the Android Open Source Project, unhappy about increasing amounts of proprietary code used for flagship Android devices (blurb).
Bison 3.0 is released (announcement).
Debian celebrates its 20th birthday (announcement).
SourceForge initiates a service through which hosted projects are allowed to bundle third-party Windows installers for "side-loaded applications" into their downloadable packages, drawing considerable criticism (LWN article).
Pamela Jones announces that Groklaw will shut down in reaction to the NSA surveillance revelations (announcement).
TypeCon is held in Portland, known for the weekend as "Portl&," August 21 to 25 (LWN coverage).
The first Firefox OS devices hit the market (LWN article).
Google pushes more Android functionality into its proprietary Google Play Services component. Ars Technica, among others, views the change as hostile to the project's openness.
Ubuntu launches its "app store" service, courting third-party developers to write apps for Ubuntu devices with a contest (LWN article).
Apache Cassandra 2.0 is released (announcement).
The LibreOffice team leaves SUSE and joins Collabora (announcement).
KDE's Plasma Active 4.0 is released (announcement).
LinuxCon North America is held in New Orleans, September 16 to 18. Here and there, some jazz is heard (LWN coverage).
The OpenZFS project launches, with the goal of developing the filesystem independent of Solaris (announcement).
Steve "Cyanogen" Kondik and friends form Cyanogen, Inc. to further develop the CyanogenMod Android replacement (announcement).
NVIDIA agrees to provide GPU documentation to the open source Nouveau driver project (announcement).
Game maker Valve announces Steam OS, a Linux-based operating system the company is making that will power its future gaming consoles (announcement).
GStreamer 1.2 is released (announcement).
Fedora celebrates ten years (announcement).
The GNU project celebrates its 30th birthday (announcement).
Secure messaging has been a seemingly difficult problem to solve. That's unfortunate, as a large percentage of today's communication takes place over some kind of text-based messaging. And we now know—rather than just strongly suspect—that secret services are monitoring, recording, and storing that communication. All of that makes it rather heartening to see that CyanogenMod has started integrating secure messaging into its 10.2 nightlies.
The app that implements secure messaging is called WhisperPush and it uses the TextSecure protocol from WhisperSystems. The lead engineer for WhisperSystems (and for TextSecure) is Moxie Marlinspike, a well-known security researcher whose work has been mentioned here on the LWN Security page several times. He detailed the CyanogenMod work in a blog post.
The Open WhisperSystems project, which is separate from the WhisperSystems company, is developing WhisperPush. That project is an outgrowth of the code that Twitter released under the GPLv3 when it bought WhisperSystems in 2011. WhisperPush allows any CyanogenMod user to securely send text messages to any other CyanogenMod user, as well as to any user of the TextSecure Android app. That means there are roughly ten million people who can communicate over that "network". More will be added when the iOS and browser versions of TextSecure become available.
WhisperPush replaces the Short Message Service (SMS) provider in the CyanogenMod system. That means that any SMS app can be used to send and receive the TextSecure-encrypted messages; no special app is required. Underneath, WhisperPush recognizes whether the recipient is using TextSecure (either via the CyanogenMod server or the WhisperSystems server, as the two are federated) and handles the message if so. The message gets transparently encrypted, shipped over the data connection, then decrypted at the other end. All of that is done without the user having to set up or exchange any keys—WhisperPush handles all of that. If the recipient is offline, the encrypted message queues up for them on the server until they come online—just like normal text messages.
If the recipient is not on the TextSecure network, WhisperPush falls back to normal, unencrypted SMS. Currently, there is no indication of whether the message has been encrypted or not, but Marlinspike noted that there are plans to add some "minimal visual feedback to the stock CyanogenMod Messaging app to indicate when the user has an expectation of privacy and when they don't". More technical users can also verify identity keys to authenticate the remote end. Perhaps most importantly, all of the code for WhisperPush and the TextSecure server is freely available.
TextSecure provides a number of security benefits beyond just messages that are encrypted "on the wire" (which, for phones, generally means "on the air" for at least part of the path). Because it uses the data channel, rather than SMS directly, it doesn't give cellular providers easy access to the metadata (e.g. message recipient). It also uses a mechanism for forward secrecy as well as one for deniability. The latter is a property that allows the recipient to be sure who sent the message, but be unable to prove to anyone else who it was sent by.
The biggest concern is, of course, the data on the phone itself. It will have both sides of the conversation in the messaging app. It will also have the cleartext for earlier conversations, so, as always, keeping phones physically secure is important. It is also unclear what kind of information will be stored by the servers. Lists of TextSecure users as well as conversation metadata are both available on the CyanogenMod and TextSecure servers.
One of the main barriers to more widespread encrypted communication has always been the difficulty for users to configure and maintain the secure applications. PGP is an excellent technical solution, but it fails because it is a pain to use and because its web of trust is hard for users to understand and handle correctly. TextSecure/WhisperPush seem to have solved that problem, at least for text messaging. Given Marlinspike's reputation, one would expect the system to be secure, but that doesn't in any way mean the algorithms and code should not be scrutinized. Hopefully, cryptography and other security experts will be giving the system a full audit; the code is out there. If TextSecure can pass that audit, or be fixed based on what is found, it would seem that easy-to-use secure text messaging will be something of a solved problem.
The mistake has had no consequences on the overall network security, either for the French administration or the general public. The aforementioned branch of the IGC/A has been revoked preventively.
[...] ANSSI has found that the intermediate CA certificate was used in a commercial device, on a private network, to inspect encrypted traffic with the knowledge of the users on that network. This was a violation of their procedures and they have asked for the certificate in question to be revoked by browsers. We updated Chrome’s revocation metadata again to implement this.
Package(s): chromium-browser
CVE #(s): CVE-2013-6634 CVE-2013-6635 CVE-2013-6636 CVE-2013-6637 CVE-2013-6638 CVE-2013-6639 CVE-2013-6640
Created: December 9, 2013
Updated: January 20, 2014
Description: From the CVE entries:
The OneClickSigninHelper::ShowInfoBarIfPossible function in browser/ui/sync/one_click_signin_helper.cc in Google Chrome before 31.0.1650.63 uses an incorrect URL during realm validation, which allows remote attackers to conduct session fixation attacks and hijack web sessions by triggering improper sync after a 302 (aka Found) HTTP status code. (CVE-2013-6634)
The FrameLoader::notifyIfInitialDocumentAccessed function in core/loader/FrameLoader.cpp in Blink, as used in Google Chrome before 31.0.1650.63, makes an incorrect check for an empty document during presentation of a modal dialog, which allows remote attackers to spoof the address bar via vectors involving the document.write method. (CVE-2013-6636)
Multiple unspecified vulnerabilities in Google Chrome before 31.0.1650.63 allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2013-6637)
Multiple buffer overflows in runtime.cc in Google V8 before 3.22.24.7, as used in Google Chrome before 31.0.1650.63, allow remote attackers to cause a denial of service or possibly have unspecified other impact via vectors that trigger a large typed array, related to the (1) Runtime_TypedArrayInitialize and (2) Runtime_TypedArrayInitializeFromArrayLike functions. (CVE-2013-6638)
Created: December 5, 2013
Updated: December 11, 2013
From the openSUSE advisory:
it fixes config directory permission and owner (slightly more info in this Novell bugzilla entry)
Created: December 6, 2013
Updated: January 21, 2014
From the Slackware advisory:
This update disables the automatic upgrade feature which can be easily fooled into downloading an arbitrary binary and executing it. This issue affects only Slackware 14.0 (earlier versions do not have the feature, and newer ones had already disabled it).
Package(s): kernel
CVE #(s): CVE-2013-6405 CVE-2013-6382 CVE-2013-6380 CVE-2013-6378
Created: December 9, 2013
Updated: December 23, 2013
Description: From the CVE entries:
Multiple buffer underflows in the XFS implementation in the Linux kernel through 3.12.1 allow local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging the CAP_SYS_ADMIN capability for a (1) XFS_IOC_ATTRLIST_BY_HANDLE or (2) XFS_IOC_ATTRLIST_BY_HANDLE_32 ioctl call with a crafted length value, related to the xfs_attrlist_by_handle function in fs/xfs/xfs_ioctl.c and the xfs_compat_attrlist_by_handle function in fs/xfs/xfs_ioctl32.c. (CVE-2013-6382)
The aac_send_raw_srb function in drivers/scsi/aacraid/commctrl.c in the Linux kernel through 3.12.1 does not properly validate a certain size value, which allows local users to cause a denial of service (invalid pointer dereference) or possibly have unspecified other impact via an FSACTL_SEND_RAW_SRB ioctl call that triggers a crafted SRB command. (CVE-2013-6380)
The lbs_debugfs_write function in drivers/net/wireless/libertas/debugfs.c in the Linux kernel through 3.12.1 allows local users to cause a denial of service (OOPS) by leveraging root privileges for a zero-length write operation. (CVE-2013-6378)
From the Red Hat bugzilla:
A Linux kernel built with networking support (CONFIG_NET) is vulnerable to a memory-leakage flaw. It occurs while doing the recvmsg(2), recvfrom(2), and recvmmsg(2) socket calls. A user/program could use this flaw to leak kernel memory bytes. (CVE-2013-6405)
Created: December 9, 2013
Updated: December 11, 2013
Description: From the Ubuntu advisory:
Miroslav Vadkerti discovered a flaw in how the permissions for network sysctls are handled in the Linux kernel. An unprivileged local user could exploit this flaw to have privileged access to files in /proc/sys/net/.
Created: December 9, 2013
Updated: December 11, 2013
Description: From the Red Hat bugzilla [1, 2]:
Created: December 11, 2013
Updated: February 17, 2014
Description: From the MaraDNS update:
While looking over the source code to Deadwood, I discovered that Deadwood 3 releases before Deadwood-3.2.03d have a security issue caused by a programming error I made.
Under certain exceptional circumstances, it may have been possible to perform a blind spoofing attack against unpatched releases of Deadwood. The IP performing the blind spoofing attack needs to appear to have permission to perform full recursion with Deadwood in order to carry out the attack.
MaraDNS 2.0.07d, Deadwood 3.2.03d, and MaraDNS 1.4.13 are patched against this bug. Deadwood 2.3.08 is not affected by this bug.
Package(s): firefox, thunderbird, seamonkey. CVE #(s): CVE-2013-5609, CVE-2013-5612, CVE-2013-5613, CVE-2013-5614, CVE-2013-5616, CVE-2013-5618, CVE-2013-6671.
Created: December 11, 2013. Updated: January 6, 2014.
Description: From the Red Hat advisory:
Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to terminate unexpectedly or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2013-5609, CVE-2013-5616, CVE-2013-5618, CVE-2013-6671, CVE-2013-5613)
A flaw was found in the way Firefox rendered web content with missing character encoding information. An attacker could use this flaw to possibly bypass same-origin inheritance and perform cross-site scripting (XSS) attacks. (CVE-2013-5612)
It was found that certain malicious web content could bypass restrictions applied by sandboxed iframes. An attacker could combine this flaw with other vulnerabilities to execute arbitrary code with the privileges of the user running Firefox. (CVE-2013-5614)
Package(s): firefox, thunderbird, seamonkey. CVE #(s): CVE-2013-5610, CVE-2013-5611, CVE-2013-5619, CVE-2013-6672, CVE-2013-6673, CVE-2013-5615.
Created: December 11, 2013. Updated: January 26, 2015.
Description: From the Ubuntu advisory:
Ben Turner, Bobby Holley, Jesse Ruderman, Christian Holler and Christoph Diehl discovered multiple memory safety issues in Firefox. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2013-5609, CVE-2013-5610)
Myk Melez discovered that the doorhanger notification for web app installation could persist between page navigations. An attacker could potentially exploit this to conduct clickjacking attacks. (CVE-2013-5611)
Dan Gohman discovered that binary search algorithms in Spidermonkey used arithmetic prone to overflow in several places. However, this issue is not believed to be exploitable. (CVE-2013-5619)
Vincent Lefevre discovered that web content could access clipboard data under certain circumstances, resulting in information disclosure. (CVE-2013-6672)
Sijie Xia discovered that trust settings for built-in EV root certificates were ignored under certain circumstances, removing the ability for a user to manually untrust certificates from specific authorities. (CVE-2013-6673)
Eric Faust discovered that GetElementIC typed array stubs can be generated outside observed typesets. An attacker could possibly exploit this to cause undefined behaviour with a potential security impact. (CVE-2013-5615)
Package(s): munin. CVE #(s): CVE-2013-6048, CVE-2013-6359.
Created: December 10, 2013. Updated: April 7, 2014.
Description: From the Debian advisory:
Christoph Biedl discovered two denial of service vulnerabilities in munin, a network-wide graphing framework. The Common Vulnerabilities and Exposures project identifies the following problems:
CVE-2013-6048: The Munin::Master::Node module of munin does not properly validate certain data a node sends. A malicious node might exploit this to drive the munin-html process into an infinite loop with memory exhaustion on the munin master.
CVE-2013-6359: A malicious node, with a plugin enabled using "multigraph" as a multigraph service name, can abort data collection for the entire node the plugin runs on.
Created: December 9, 2013. Updated: December 24, 2013.
Description: From the Mageia advisory:
A missing validation in OpenTTD before 1.3.3 allows remote attackers to cause a denial of service (crash) by forcefully crashing aircraft near the corner of the map. This triggers a corner case where data outside of the allocated map array is accessed.
Created: December 11, 2013. Updated: January 14, 2014.
Description: From the Red Hat advisory:
A memory corruption flaw was found in the way the openssl_x509_parse() function of the PHP openssl extension parsed X.509 certificates. A remote attacker could use this flaw to provide a malicious self-signed certificate or a certificate signed by a trusted authority to a PHP application using the aforementioned function, causing the application to crash or, possibly, allow the attacker to execute arbitrary code with the privileges of the user running the PHP interpreter.
Created: December 9, 2013. Updated: December 11, 2013.
Description: From the Symfony advisory:
One of the best practices for passwords is to store a hash of the password instead of the raw value. In Symfony, the encoders are responsible for the hash creation (when a user creates an account) and verification (when a user tries to log in).
For most built-in encoders, the time it takes to compute a hash increases significantly with the length of the password. If an attacker submits random large passwords repeatedly, Symfony will be forced to do expensive computation, which can be used to ease a DoS attack.
Created: December 9, 2013. Updated: September 2, 2014.
Description: From the Mageia advisory:
Bryan Quigley discovered an integer underflow in pixman. If a user were tricked into opening a specially crafted file, an attacker could cause a denial of service via application crash.
Created: December 9, 2013. Updated: January 14, 2014.
Description: From the Debian advisory:
It was discovered that multiple buffer overflows in the processing of DCE-RPC packets may lead to the execution of arbitrary code.
Created: December 11, 2013. Updated: December 16, 2013.
Description: From the CVE entry:
The winbind_name_list_to_sid_string_list function in nsswitch/pam_winbind.c in Samba through 4.1.2 handles invalid require_membership_of group names by accepting authentication by any user, which allows remote authenticated users to bypass intended access restrictions in opportunistic circumstances by leveraging an administrator's pam_winbind configuration-file mistake.
Package(s): xen. CVE #(s): CVE-2013-4553, CVE-2013-4554.
Created: December 9, 2013. Updated: December 23, 2013.
Description: From the Red Hat bugzilla [1, 2]:
 The locks page_alloc_lock and mm_rwlock are not always taken in the same order. This raises the possibility of deadlock.
The incorrect order occurs only in the implementation of the deprecated domctl hypercall XEN_DOMCTL_getmemlist.
A malicious guest administrator may be able to deny service to the entire host.
 The privilege check applied to hypercall attempts by a HVM guest only refused access from ring 3; rings 1 and 2 were allowed through.
Code running in the intermediate privilege rings of HVM guest OSes may be able to elevate its privileges inside the guest by careful hypercall use.
As far as we are aware no mainstream OS (Linux, Windows, BSD) make use of these rings.
Page editor: Jake Edge
Brief items

The 3.13-rc3 development kernel was released on December 6. Linus said: "I'm still on a Friday release schedule, although I hope that changes soon - the reason I didn't drag this one out to Sunday is that it's already big enough, and I'll wait until things start calming down. Which they really should, at this point. Hint hint."
Stable updates: 3.12.4, 3.10.23, and 3.4.73 were released on December 8. As of this writing, the 3.12.5, 3.10.24, and 3.4.74 updates are in the review process; they can be expected sometime on or after December 12.
Kernel development news
The truth may not be quite so grim. Development on Btrfs continues, with a strong emphasis on stability and performance. Problems are getting fixed, and users are beginning to take another look at this promising filesystem. More users are beginning to play with it, and openSUSE considered the idea of using it by default back in September. Your editor's sense is that the situation may be bottoming out, and that we may, slowly, be heading into a new phase where Btrfs takes its place — still slowly — as one of the key Linux filesystems.
This article is intended to be the first in a series for users interested in experimenting with and evaluating the Btrfs filesystem. We'll start with the basics of the design of the filesystem and how it is being developed; that will be followed by a detailed look at specific Btrfs features. One thing that will not appear in this series, though, is benchmark results; experience says that proper filesystem benchmarking is hard to do right; it's also highly workload- and hardware-dependent. Poor-quality results would not be helpful to anybody, so your editor will simply not try.
Not that long ago, Linux users were still working with filesystems that had evolved little since the Unix days. The ext3 filesystem, for example, was still using block pointers: each file's inode (the central data structure holding all the information about the file) contained a list of pointers to each individual block holding the file's data. That design worked well enough when files were small, but it scales poorly: a 1GB file would require 256K individual block pointers. More recent filesystems (including ext4) use pointers to "extents" instead; each extent is a group of contiguous blocks. Since filesystems work to store data contiguously anyway, extent-based storage greatly reduces the overhead of managing a file's space.
Naturally, Btrfs uses extents as well. But it differs from most other Linux filesystems in a significant way: it is a "copy-on-write" (or "COW") filesystem. When data is overwritten in an ext4 filesystem, the new data is written on top of the existing data on the storage device, destroying the old copy. Btrfs, instead, will move overwritten blocks elsewhere in the filesystem and write the new data there, leaving the older copy of the data in place.
The COW mode of operation brings some significant advantages. Since old data is not overwritten, recovery from crashes and power failures should be more straightforward; if a transaction has not completed, the previous state of the data (and metadata) will be where it always was. So, among other things, a COW filesystem does not need to implement a separate journal to provide crash resistance.
Copy-on-write also enables some interesting new features, the most notable of which is snapshots. A snapshot is a virtual copy of the filesystem's contents; it can be created without copying any of the data at all. If, at some later point, a block of data is changed (in either the snapshot or the original), that one block is copied while all of the unchanged data remains shared. Snapshots can be used to provide a sort of "time machine" functionality, or to simply roll back the system after a failed update.
Another important Btrfs feature is its built-in volume manager. A Btrfs filesystem can span multiple physical devices in a number of RAID configurations. Any given volume (collection of one or more physical drives) can also be split into "subvolumes," which can be thought of as independent filesystems sharing a single physical volume set. So Btrfs makes it possible to group part or all of a system's storage into a big pool, then share that pool among a set of filesystems, each with its own usage limits.
Btrfs offers a wide range of other features not supported by other Linux filesystems. It can perform full checksumming of both data and metadata, making it robust in the face of data corruption by the hardware. Full checksumming is expensive, though, so it remains likely to be used in only a minority of installations. Data can be stored on-disk in compressed form. The send/receive feature can be used as part of an incremental backup scheme, among other things. The online defragmentation mechanism can fix up fragmented files in a running filesystem. The 3.12 kernel saw the addition of an offline de-duplication feature; it scans for blocks containing duplicated data and collapses them down to a single, shared copy. And so on.
It is worth noting that the copy-on-write approach is not without its costs. Obviously, some sort of garbage collection is required or all those block copies will quickly eat up all of the available space on the filesystem. Copying blocks can take more time than simply overwriting them, and it can significantly increase the filesystem's memory requirements. COW operations will also have a tendency to fragment files, wrecking the nice, contiguous layout that the filesystem code put so much effort into creating. Fragmentation hurts less with solid-state devices than on rotational storage, but, even in the former case, fragmented files will not be as quick to access.
So all this shiny new Btrfs functionality does not come for free. In many settings, administrators may well decide that the costs associated with Btrfs outweigh the benefits; those sites will stick with filesystems like ext4 or XFS. For others, though, the flexibility and feature set provided with Btrfs are likely to be quite appealing. Once it is generally accepted that Btrfs is ready for real-world use, chances are it will start popping up on a lot of systems.
One concern your editor has heard in conference hallways is that the pace of Btrfs development has slowed. For the curious, here's the changeset count history for the Btrfs code in the kernel, grouped into approximately one-year periods:
Year               Changesets  Developers
2008 (2.6.25—29)          913          42
2009 (2.6.30—33)          279          45
2010 (2.6.34—37)          193          33
2011 (2.6.38—3.2)         610          67
2012 (3.3—8)              773          63
2013 (3.9—13)             671          68
These numbers, on their own, do not demonstrate a slowing of development; there was an apparent slow period in 2010, but the number of changesets and the number of developers contributing them has held steady thereafter. That said, there are a couple of things to bear in mind when looking at those numbers. One is that the early work involved the addition of features to a brand-new filesystem, while work in 2013 is almost entirely fixes. So the size of the changes has shrunk considerably, but one could easily argue that things should be just that way.
The other relevant point is that contributions by Btrfs creator Chris Mason have clearly fallen in recent years. Partly that is because he has been working on the user-space btrfs-progs code — work which is not reflected in the above, kernel-side-only numbers — but it also seems clear that he has been busy with other work-related issues. It will be interesting to see how things change now that Chris and prolific Btrfs contributor Josef Bacik have found a new home at Facebook.
In summary, the amount of new code going into Btrfs has clearly fallen in recent years, but that will be seen as good news by anybody hoping for a stable filesystem anytime soon. There is still some significant effort going into this filesystem, and chances are good that developer attention will increase as distributors look more closely at using Btrfs by default.
All told, Btrfs still looks interesting, and it seems like the right time to take a closer look at what is still the next-generation Linux filesystem. Now that the introductory material is out of the way, the next article in this series will start to actually play with Btrfs and explore its feature set; those articles will appear here as they are published.
By the end of the series, we plan to have a reasonably comprehensive introduction to Btrfs in place; stay tuned.
Making kernel interfaces that work for both 32- and 64-bit processors has proved to be something of a challenge over the years. One of the more problematic areas has been passing arguments to ioctl() so that the same code will work on both types of processor—in both big- and little-endian varieties. As a recent thread shows, not all of those problems have been completely worked out over time.
Aurelien Jarno posted a question to the linux-fsdevel mailing list about FS_IOC_GETFLAGS and FS_IOC_SETFLAGS (which query and set inode flags on files). He noted that the definitions of those requests in include/uapi/linux/fs.h listed the argument types as a pointer to long, except for the 32-bit compatibility versions, which specify an int *. The code in the kernel filesystems expects and uses a 32-bit quantity, and most—but not all—user-space code passes a pointer to a 32-bit integer.
Any application that passed a pointer to a 64-bit integer would work, but only on little-endian systems. Since the kernel code treats the pointer as one to a 32-bit quantity, it's a matter of which four bytes are accessed when the pointer is dereferenced. On big-endian processors, it is the most significant four bytes, whereas little-endian systems reference the least significant end. Since all of the flags live in the least significant four bytes, the big-endian systems effectively pass zero to FS_IOC_SETFLAGS or retrieve a value with (undefined) high bits set with FS_IOC_GETFLAGS.
Darrick J. Wong pointed out that the kernel FUSE driver uses the types from that header, which also causes a problem. The kernel driver expects to transfer a 64-bit quantity, but most user-space programs only provide 32 bits. He plans to special case those ioctl() requests for FUSE.
The number of big-endian 64-bit systems (e.g. PowerPC, MIPS, and s390) is dwarfed by the number of x86_64 little-endian processors. That means that few have seen the problem, but it also means that any fix needs to be made carefully to avoid breaking millions of existing systems. That is always an important—overriding—consideration for changes to the kernel, of course, but Ted Ts'o highlighted that concern when he explained a bit about how this had come about and why changing to a long * everywhere would not work. Because the majority of user-space programs pass an int *, a change like that would cause them to break on 64-bit systems regardless of the endian-ness.
But, as Jarno pointed out, anyone trying to do the right thing and look up the argument type in <linux/fs.h> will get it wrong. "The bare minimum would be to add a comment close to the definition to explain to use an int and not a long." It turns out that there are four ioctl() requests (FS_IOC_GETVERSION and FS_IOC_SETVERSION in addition to the get/set flags mentioned above) that have the problematic definition. Jarno posted a patch to make that change by adding a warning to include/uapi/linux/fs.h (which gets installed in include/linux):
/*
 * WARNING: The next four following ioctls actually take an int argument
 * despite their definition. This is important to support 64-bit big-endian
 * machines.
 */
One might think that just changing the type of the argument to 32 bits in the header file would be a possibility, but that cannot be done either. The mapping of the request name to a number is done using a set of macros that use sizeof() on the "type" argument. For 64-bit systems that is eight for a long, but 32-bit uses four. Since the numbers calculated for the requests are now a fixed part of the Linux ABI, changing the type of the argument in that header would not solve the problem.
Several people suggested adding a new request type (FS_IOC_GETFLAGS_NEW or FS_IOC_GETFLAGS_WIDE, for example) that would take a pointer to a 64-bit integer on all architectures. That would have the advantage of doubling the number of available flags, which may be getting close to being exhausted. There are perhaps ten bits available today; adding another 32 might cover any upcoming use cases, though some are rather skeptical that 32 will be enough.
The FS_IOC_[GS]ETFLAGS requests were originally added for the ext* filesystems, but have also been used by other filesystems over time. In addition, there are flags for other filesystems that are only available via filesystem-specific ioctl() requests. According to Dave Chinner, XFS already has roughly ten flags available using a different request (XFS_IOC_FSGETXATTR); other filesystems have their own sets. So, if a change is going to be made, Chinner said, why not create one that unifies all of the disparate inode flag handling and allows for more than the 64 flags that might be completely exhausted soon.
Ts'o is not convinced that the additional complexity is worth it. But Chinner sees XFS adding "tens of new inode flags" over the coming years. Other filesystems may well have similar needs. So a fixed-length bitmap may not be the best solution long-term, but there was little agreement on which alternative should be pursued.
Chinner suggested some kind of attribute-based interface that is open-ended so that it could be expanded to handle any inode flags for any filesystems down the road. He also mentioned the xstat() system call as another possibility. But, as Andreas Dilger pointed out, xstat() has been proposed many times, but has never made it into the kernel.
So there are some possible solutions that "solve the problem once and for all" (as Chinner put it), but it is not at all clear that anyone is planning to push for one of them. In the meantime, Jarno's "fix" to the header file will at least help users pass the right argument types. The user-space applications that pass long pointers (bup and libexplain were mentioned) will need to change, but that shouldn't be too onerous. A more ambitious, global solution may not be in the works anytime soon.
A memory barrier is a directive that prohibits the hardware (and compiler) from reordering operations in specific ways. To see how they might be used, consider the following simple example, taken from a 2013 Kernel Summit session. The lockless insertion of a new element into a linked list can be performed in two steps. The first is to set the "next" pointer of the new item to point to the item that will follow it in the list:

new->next = head;

Once that is done, the list itself can be modified to include the new item:

head = new;
A thread walking the list will either see the new item or it won't, depending on the timing, but it will see a well-formed list in either case. If, however, the operations are reordered such that the second pointer assignment becomes visible before the first, there will be a period of time during which the structure of the list is corrupted. Should a thread follow that pointer at the wrong time, it will end up off in the weeds. To keep that from happening, this sort of list operation must use a memory barrier between the two writes. With a proper barrier in place, the pointer assignments will never be seen in the wrong order.
The kernel offers a wide variety of memory barrier operations adapted to specific situations, but the most commonly used barriers are smp_rmb(), which orders reads against reads, smp_wmb(), which orders writes against writes, and smp_mb(), a full barrier that orders all memory operations.
Memory barriers almost invariably come in pairs. If one of two cooperating threads cares about the order in which two values are written, the other side must be equally concerned about the order in which those values are read.
Naturally enough, the full story is rather more complex than described here. Readers with sufficient interest and free time, along with quite a bit of excess brain power, can read Documentation/memory-barriers.txt for the full story.
The primary reason for the proliferation of memory barrier types is performance. A full memory barrier can be an expensive operation; that is something that kernel developers would prefer to avoid in fast paths. Weaker barriers are often cheaper, especially if they can be omitted altogether on some architectures. The x86 architecture, in particular, offers more ordering guarantees than some others do, making it possible to do without barriers entirely in some situations.
A situation that has come up relatively recently has to do with "total store order" (TSO) architectures, where, as Paul McKenney put it, "reads are ordered before reads, writes before writes, and reads before writes, but not writes before reads." The x86 architecture has this property, though some others do not. TSO ordering guarantees are enough for a number of situations, but, in current kernels, a full memory barrier must be used to ensure those semantics on non-TSO architectures. Thus, it would be nice to have yet another memory barrier primitive to suit this situation.
Peter Zijlstra had originally called the new barrier smp_tmb(), but Linus was less than impressed with the descriptive power of that name. So Peter came up with a new patch set adding two new primitives: smp_load_acquire(), which performs a read and guarantees that no subsequent accesses will be reordered before it, and smp_store_release(), which guarantees that all prior accesses will be visible before its write is.
These new primitives are immediately put to work in the code implementing the ring buffer used for perf events. That buffer has two pointers, called head and tail; head is where the kernel will next write event data, while tail is the next location user space will read events from. Only the kernel changes head, while only user space can change tail. In other words, it is a fairly standard circular buffer.
The code on the kernel side works like this (in pseudocode form):
tail = smp_load_acquire(ring_buffer->tail);
write_events(ring_buffer->head); /* If 'tail' indicates there is space */
smp_store_release(ring_buffer->head, new_head);
The smp_load_acquire() operation ensures that the proper tail pointer is read before any data is written to the buffer. And, importantly, smp_store_release() ensures that any data written to the buffer is actually visible there before the new head pointer is made visible. Without that guarantee, the reader side could possibly see a head pointer indicating that more data is available before that data is actually visible in the buffer.
The code on the read side is the mirror image:
head = smp_load_acquire(ring_buffer->head);
read_events(tail); /* If 'head' indicates available events */
smp_store_release(ring_buffer->tail, new_tail);
Here, the code ensures that the head pointer has been read before trying to access any data in the buffer; in that way, head corresponds to the data the kernel side wrote there. This smp_load_acquire() operation is thus paired with the smp_store_release() in the kernel-side code; together they make sure that data is seen in the correct order. The smp_store_release() call here pairs with the smp_load_acquire() call in the kernel-side code; it makes sure that the tail pointer does not visibly change until user space has fully read the data from the buffer. Without that guarantee, the kernel could possibly overwrite that data before it was actually read.
The ring buffer code worked properly before the introduction of these new operations, but it had to use full barriers, making it slower than it needed to be. The new operations allow this code to be optimized while also better describing the exact operations that are being protected by barriers. As it happens, a lot of kernel code may be able to work with the slightly weaker guarantees offered by the new barrier operations; the patch changelog says "It appears that roughly half of the explicit barriers in core kernel code might be so replaced."
The cost, of course, is that the kernel's complicated set of memory barrier operations has become even more complex. Once upon a time that might not have mattered much, since most use of memory barriers was deeply hidden within other synchronization primitives (spinlocks and mutexes, for example). With scalability pressures pushing lockless techniques into more places in the kernel, though, the need to be explicitly aware of memory barriers is growing. There may come a point where understanding memory-barriers.txt will be mandatory for working in much of the kernel.
Patches and updates
Core kernel code
Filesystems and block I/O
Page editor: Jonathan Corbet
Distributions

It has been about a year since our last look at a CyanogenMod release. So when the project announced the availability of CyanogenMod 11M1 — the first of the CM 11.0 experimental builds — your editor did not hesitate to dedicate a handset to the cause. After all, what could possibly go wrong? It turns out that a few things could, but CM11 appears to be on track to be another solid release regardless.
There are some real advantages to owning a Google Nexus device — a Nexus 4 handset in this case. There is no need to "root" it or otherwise coerce the hardware to allow the installation of alternative software; connecting the device to a Linux machine and running:
fastboot oem unlock
will do the trick. Of course, unlocking the phone in this manner wipes all user data, meaning that it's best done at the outset with a new device, but, if one plans to install a new operating system anyway, a full wipe is already in the cards. Once that's done, the usual install of the ClockworkMod recovery image is called for, followed by the installation of the CyanogenMod image itself. In your editor's case, this process rendered the phone unbootable the first time through, necessitating a return to the stock Android image before the second, successful attempt.
Incidentally, Google's posting of the factory images for its devices is a nice habit; it turns experimenting with those devices into a low-risk affair.
A new CyanogenMod installation (with the separate addition of the proprietary Google applications) takes the user through the usual Google startup routine. It did not, however, automatically install the user's backed-up set of apps the way a new stock Android installation does. It also evidently was unable to obtain the local wireless network password from Google, despite presenting the usual checkbox to allow it to back up such passwords to Google's servers.
The next step is new since last year (though not new with 11.0): the user is prompted to set up an account with CyanogenMod itself. This account exists for now to facilitate the "find my phone" and remote wipe functionalities. Unfortunately, neither function worked. The CyanogenMod "accounts" page showed a "last seen" time from the past and reported that it was unable to establish a connection to the device. Somehow, the phone was failing to communicate with the CyanogenMod mothership, despite having good connectivity otherwise.
Once the preliminaries were done, the phone asked, with no further explanation, whether it should run "Launcher" or "Launcher3." The choice was presented with the usual "just once" and "always" options; as long as one picks "just once," that question will be repeated every time the home screen is displayed. While trying to figure out how to choose, your editor stumbled across this appalling list of Android launchers; evidently the state of the art in launcher technology is so bad that we need more than sixty of them. In the end, either of the two offered by CyanogenMod 11 seemed fine, so your editor settled on Launcher3.
CyanogenMod's tendency toward lots of configuration options has not changed in the last year. There are few aspects of the device's behavior that cannot be tweaked at will. If you want to control how loudly the phone rings after 8:00PM, or which options appear in the quick settings menu, or which sound is played when the screen locks, or the intensity of the phone's vibration, or how many icons appear in the bottom-of-screen dock, or the appearance of the battery icon, or the color and pulsation period of the notification LED, those options (and more) are available. The proliferation of options can be daunting, but, for those who like to customize their environments, it doesn't take that long to find the one option that cannot be done without.
Beyond configuration options, there are a number of features that are unique to CyanogenMod. The phone can be configured to ring initially at a low volume, getting louder the longer a call remains unanswered. The "Voice+" feature enables any messaging app to send SMS messages with the Google Voice service. There is a log of which applications have been requesting location information. "Profiles" allow the collection of a wide range of configuration options into sets; changing between profiles can be done manually or automatically via a set of "triggers." "Torch" functionality is built into the quick settings screen, eliminating the need for a separate flashlight app. And so on.
One of the more significant CyanogenMod features must be Privacy Guard (formerly incognito mode). In its simpler mode, it can be used to prevent apps from accessing personal information. When an app has been blocked with Privacy Guard, the contact list, phone history, and web browsing history appear to be empty, while GPS is presented as being disabled (regardless of its actual state). In the "advanced" mode, Privacy Guard can disable individual permissions for specific apps, as well as reporting on when those permissions were last used. This mode is, in fact, an interface to the "AppOps" functionality introduced in Android 4.3; on stock Android phones, though, this feature is not available without the installation of an app to expose it.
In summary, CyanogenMod remains an interesting variation of Android for those willing to go through the trouble of installing and configuring it. It provides more functionality, more control over the device and one's personal information, and an upgrade path for devices that are no longer supported by their manufacturers. That much has not changed in a long time.
The most significant thing about the 11M1 release, arguably, is that it is based on the Android 4.4 "KitKat" release, less than one month after KitKat was first shipped. That suggests that the Google Android Open Source Project (AOSP) is getting the code out quickly; the worries that things could falter after Jean-Baptiste Queru's departure from the project have proved unfounded so far. The CyanogenMod project is also getting faster at integrating AOSP releases into releases of its own, at least for hardware that is already well supported by AOSP. So CyanogenMod users — those willing to run test releases, at least — can have the best of both worlds: current Android code with CyanogenMod enhancements.
Brief items

Red Hat has released the Red Hat Enterprise Linux 7 beta. There are new or enhanced features in Linux containers, performance management, file systems, networking, storage, and much more. "Based on Fedora 19 and the upstream Linux 3.10 kernel, Red Hat Enterprise Linux 7 will provide users with powerful new capabilities that streamline and automate installation and deployment, simplify management, and enhance ease-of-use, all while delivering the stability that enterprises have come to expect from Red Hat. This further solidifies Red Hat Enterprise Linux's place as the world's leading Linux platform and a standard for the enterprise of the future. Whether rolling out new applications, virtualizing environments or scaling the business with cloud, Red Hat Enterprise Linux 7 delivers the keystone to IT success." See the release notes for more information.
Newsletters and articles of interest

An article reports that, effective in FreeBSD 10 (currently at RC1), processors from Intel and Via Technologies will no longer be trusted as the sole source of random numbers. "Specifically, "RDRAND" and "Padlock"—RNGs [Random Number Generators] provided by Intel and Via respectively—will no longer be the sources FreeBSD uses to directly feed random numbers into the /dev/random engine used to generate random data in Unix-based operating systems. Instead, it will be possible to use the pseudo random output of RDRAND and Padlock to seed /dev/random only after it has passed through a separate RNG algorithm known as "Yarrow." Yarrow, in turn, will add further entropy to the data to ensure intentional backdoors, or unpatched weaknesses, in the hardware generators can't be used by adversaries to predict their output."
Page editor: Rebecca Sobol
The close() system call is capable of returning errors, but those error codes are often completely ignored. Ignoring them is generally considered bad practice, yet lots of code has been written that way. For Linux, at least, there is a real question about what, if anything, can be done when close() returns an error; some say that most close() error returns make no sense.
Ondřej Bílka posted to the libc-alpha mailing list (the development mailing list for the GNU C library or glibc) with a suggestion: turn any EINTR (system call interrupted) returns from close() on Linux into EINPROGRESS (operation in progress). It turns out that The Austin Group (which maintains POSIX) added language to clarify the behavior of close() in August 2012. Previously, the state of the file descriptor was undefined if an EINTR was returned, but the new interpretation says that an EINTR return requires that the descriptor still be open. EINPROGRESS should be returned if the system call is interrupted but the file descriptor is closed; thus Bílka's suggestion.
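Bílka's proposed remapping amounts to a thin wrapper around the close() system call. A minimal sketch (the function name is hypothetical; real glibc would do this inside the syscall stub):

```c
#include <errno.h>
#include <unistd.h>

/* Sketch of Bílka's suggestion: on Linux the descriptor is gone even
 * when close() is interrupted, so an EINTR return is remapped to
 * EINPROGRESS, matching the 2012 POSIX clarification that EINTR must
 * mean the descriptor is still open. */
static int close_mapped(int fd)
{
    int ret = close(fd);

    if (ret == -1 && errno == EINTR)
        errno = EINPROGRESS;    /* interrupted, but fd is closed */
    return ret;
}
```

On successful or EBADF returns the wrapper is transparent; only the rare interrupted case is rewritten.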
But others are not at all sure that close() should ever return an error (except for EBADF when passed an invalid file descriptor). As David Miller noted, close() returning errors is downright hazardous:
The widespread overwhelming belief is that close() is just going to always succeed, and there is more harm than good from signalling errors at all from that function.
In fact, it is difficult to even return EINTR from close() on Linux, according to Christoph Hellwig. If the driver or filesystem's release() method returns an error, it is explicitly ignored. The only path that would allow a driver to return EINTR is if it provides a flush() method that does so. Hellwig plans to post a patch that would enforce a no-EINTR policy on that path as well.
If EINTR can never be returned, there is no real reason to map it to EINPROGRESS in glibc. But, since glibc may be used on an older kernel that can return EINTR in some rare situations, mapping it to something probably makes sense. That could be EINPROGRESS or, perhaps better still, just zero for success, as suggested by Rich Felker. There really isn't much the application programmer can do if close() returns an error, as Russ Allbery pointed out in a reply to Felker.
As Allbery said, the POSIX EINTR semantics are not really possible on Linux. The file descriptor passed to close() is de-allocated early in the processing of the system call and the same descriptor could already have been handed out to another thread by the time close() returns. The Linux behavior could be changed if there were a sufficiently good reason to do so, but, so far, that reason has been elusive.
So the POSIX-suggested handling of an EINTR, which is to retry the close(), could actually be quite dangerous on Linux. For that reason, Mark Mentovai suggested a change to the glibc manual to avoid recommending retrying close() on Linux.
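The advice that emerges from the thread can be captured in a small helper (the name is made up for illustration) that never retries on EINTR:

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* A Linux-appropriate close helper, per the discussion: EINTR is
 * treated as success, because the descriptor has already been
 * de-allocated and retrying could close a descriptor freshly handed
 * out to another thread. Other errors are logged; there is nothing
 * useful an application can do to repair them. */
static int close_noretry(int fd)
{
    if (close(fd) == -1 && errno != EINTR) {
        perror("close");
        return -1;
    }
    return 0;
}
```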
The topic came up on the linux-kernel mailing list back in 2005; Linus Torvalds was fairly adamant that an EINTR return should only be used to show that some other error has occurred (like the data was not flushed to the file), not that the descriptor was still open. In fact, Torvalds said that he didn't believe retrying close() is right for almost any Unix system, not just Linux. Any application that really needs to catch I/O errors when the data gets flushed should do so using fsync(), he said.
Perhaps there are POSIX systems out there that have a close() that may not actually de-allocate the file descriptor when it gets interrupted, but it's a little hard to see what the advantage of that would be. In many cases, the return code from close() is completely ignored (for good or ill), so leaving it open would just lead to a file descriptor leak. Even if the error is caught, the application can't really do anything to repair the situation, it can only retry the close(), which seems a little pointless. But, evidently, it wasn't pointless to The Austin Group.
The first release of the GNU Eiffel compiler Liberty Eiffel is now available. The compiler is developed from a fork of the SmartEiffel codebase, though the project's goal "is to retain from SmartEiffel its rigour; but not its rigidity."
Version 0.5 of the Guix package management system for GNU has been released. This version includes support for multiple user profiles, support for saving a system's configuration, and support for pulling package metadata from the official "gnumaint" repository.
Newsletters and articles
Page editor: Nathan Willis
Brief items

The Linux Foundation has announced the AllSeen Alliance, an effort to promote development of the "Internet of Everything." "The members of the AllSeen Alliance will contribute software and engineering resources as part of their collaboration on an open software framework that enables hardware manufacturers, service providers and software developers to create interoperable devices and services. This open source framework allows ad hoc systems to seamlessly discover, dynamically connect and interact with nearby products regardless of brand, transport layer, platform or operating system."
Articles of interest

The Free Software Foundation writes: "The Free Software Foundation has been defending computer users' freedoms and privacy for nearly thirty years. No matter the political climate, we have always fought to defend the freedoms of all computer users. Today, in the face of mass surveillance, more people than ever are discovering that free software is a necessary cornerstone of a free society. With this momentum, we can turn our blueprints for a free software future into brick and mortar."

Another article looks at the Linux Foundation's collaborative project, the AllSeen Alliance. "So now let’s look at what it takes to make an Internet of Things possible, comprising the wares and services of many different vendors, and types of vendors. It represents roughly the same goal – to create another type of local area network – but this time, there’s no router. Each thing is its own router, and for every other neighboring thing as well, passing along messages from device to device, and perhaps eventually back out to the Internet. That requires more than just a single interoperable communication standard, and more than just devices that can send and receive signals. It also requires all sorts of different types of companies, and not just laptop vendors, to make the investment and take the risk to enable their respective products."
Calls for Presentations
|Deadline||Event Date||Event||Location|
|December 15||February 21||Southern California Linux Expo||Los Angeles, CA, USA|
|December 31||April 8||Open Source Data Center Conference||Berlin, Germany|
|January 7||March 15||Chemnitz Linux Days 2014||Chemnitz, Germany|
|January 10||January 18||Paris Mini Debconf 2014||Paris, France|
|January 15||February 28||FOSSASIA 2014||Phnom Penh, Cambodia|
|January 15||April 2||Libre Graphics Meeting 2014||Leipzig, Germany|
|January 17||March 26||16. Deutscher Perl-Workshop 2014||Hannover, Germany|
|January 19||May 20||PGCon 2014||Ottawa, Canada|
|January 19||March 22||Linux Info Tag||Augsburg, Germany|
|January 22||May 2||LOPSA-EAST 2014||New Brunswick, NJ, USA|
|January 28||June 19||USENIX Annual Technical Conference||Philadelphia, PA, USA|
|January 30||July 20||OSCON 2014||Portland, OR, USA|
|January 31||March 29||Hong Kong Open Source Conference 2014||Hong Kong, Hong Kong|
|January 31||March 24||Linux Storage Filesystem & MM Summit||Napa Valley, CA, USA|
|January 31||March 15||Women MiniDebConf Barcelona 2014||Barcelona, Spain|
|January 31||May 15||ScilabTEC 2014||Paris, France|
|February 1||April 29||Android Builders Summit||San Jose, CA, USA|
|February 1||April 7||ApacheCon 2014||Denver, CO, USA|
|February 1||March 26||Collaboration Summit||Napa Valley, CA, USA|
|February 3||May 1||Linux Audio Conference 2014||Karlsruhe, Germany|
|February 5||March 20||Nordic PostgreSQL Day 2014||Stockholm, Sweden|
|February 8||February 14||Linux Vacation / Eastern Europe Winter 2014||Minsk, Belarus|
|February 9||July 21||EuroPython 2014||Berlin, Germany|
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events

LibrePlanet is an annual conference for free software enthusiasts. "LibrePlanet brings together software developers, policy experts, activists and computer users to learn skills, share accomplishments and face challenges to software freedom. Newcomers are always welcome, and LibrePlanet 2014 will feature programming for all ages and experience levels."
|SciPy India 2013||Bombay, India|
|30th Chaos Communication Congress||Hamburg, Germany|
|January 6||Sysadmin Miniconf at Linux.conf.au 2014||Perth, Australia|
|Real World Cryptography Workshop||NYC, NY, USA|
|QtDay Italy||Florence, Italy|
|Paris Mini Debconf 2014||Paris, France|
|January 31||CentOS Dojo||Brussels, Belgium|
|FOSDEM 2014||Brussels, Belgium|
|Config Management Camp||Gent, Belgium|
|Open Daylight Summit||Santa Clara, CA, USA|
|Django Weekend Cardiff||Cardiff, Wales, UK|
|devconf.cz||Brno, Czech Republic|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds