
LWN.net Weekly Edition for June 20, 2013

Trying out the Raspberry Pi

By Jake Edge
June 19, 2013

The Raspberry Pi has clearly made a splash since its debut as a consumer product in April 2012. Thanks to the generosity of the Python Software Foundation, all of the attendees at this year's PyCon were given one of the diminutive ARM computers, a giveaway that was announced just prior to Raspberry Pi founder Eben Upton's keynote. While it has taken a while to find time to give the device a try—conference season is upon us—that has finally come to pass.

Background

[Raspbian desktop]

For anyone living under a rock (or, perhaps, just largely uninterested in such things), the Raspberry Pi—often abbreviated "RPi"—is a credit-card-sized Linux computer that is targeted at children. While it may have been envisioned as an educational tool to teach kids about computers and programming, there seem to be plenty of adults "playing" with the RPi as well. It has modest hardware (a 700MHz ARM11 core with 512M of RAM for the Model B) by today's—even yesterday's—standards, but it is vastly more powerful than the 8-bit microcomputers that served as something of a role model in its design.

The original price tag was meant to be $25, but that couldn't quite be met, so the Model B (which was the first to ship) was priced at $35. Eventually, the Model A (without on-board Ethernet) did hit the $25 price point. In either case, it is a low-cost device that is meant to be affordable to students (or their parents) in both the developed and developing world. It requires a monitor (either composite video or HDMI) and a USB keyboard and mouse, which will add to the cost somewhat, but, at least in some areas, cast-off televisions and input devices may not be all that hard to find. Given its size, an RPi can be easily transported between home and a computer lab at school as well.

The goal is to give students a platform on which they can easily begin programming without having to install any software or do much in the way of configuration; turn it on and start hacking. Because of the price, an interested child could have their own RPi, rather than vying for time on a shared computer at school or at home. That at least is the vision that the project started with, but its reach quickly outgrew that vision as it has been adopted by many in the "maker" community and beyond.

[Scratch]

The "Pi" in the name stands for Python (despite the spelling), which is one of the primary programming environments installed on the device. But that's not all. The Raspbian distribution that came on an SD card with the PyCon RPi also comes with the Scratch visual programming environment and the Smalltalk-based Squeak (which is used to implement Scratch). As its name would imply, Raspbian is based on Debian (7.0 aka "Wheezy"). It uses the resource-friendly LXDE desktop environment and provides the Midori browser, a terminal program, a local Debian reference manual, the IDLE Python IDE (for both Python 2.7.3 and 3.2.3), and some Python games as launcher icons on the desktop.

Firing it up

Starting up the RPi is straightforward: hook up the monitor, keyboard, and mouse, insert the SD card, and apply power. Three of the general-purpose I/O (GPIO) pins on the device can provide a serial console (typically accessed with a USB-to-TTL serial cable), but it isn't generally needed. Once the device boots, logging in as "root" (with no password) for the first time lands you in the raspi-config tool. Alternatively, you can log in as "pi" with password "raspberry" to get to the command line.
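
For the curious, the console on those GPIO pins is a conventional 3.3V UART; connecting to it from another Linux machine with a USB-to-TTL serial cable might look like this (the /dev/ttyUSB0 device node is an assumption; check dmesg for the actual name):

    # Raspbian's serial console runs at 115200 baud, 8N1
    screen /dev/ttyUSB0 115200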

The configuration tool allows changing settings for the device, such as the time zone, the "pi" user's password, whether X starts at boot, enabling sshd (set a root password first), and the memory split between Linux and the GPU. From the command line, though, the venerable startx command will bring up the LXDE environment. One note: when using an HDMI-to-VGA converter, some tweaking of the video mode may be required.
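
For illustration, that tweaking typically happens in config.txt on the SD card's boot partition. The values below are assumptions for a 1024x768 VGA panel, not settings taken from this article:

    # /boot/config.txt
    hdmi_force_hotplug=1   # drive HDMI output even if no monitor is detected
    hdmi_group=2           # use DMT (computer monitor) timings
    hdmi_mode=16           # 1024x768 at 60Hz; adjust for the panel in use
    config_hdmi_boost=4    # some HDMI-to-VGA converters need a stronger signal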

It should come as no surprise that, once configured, the system behaves like a normal Debian system. The initial "apt-get upgrade" took quite some time, as there were lots of packages to pick up, but subsequent upgrades have been quick. It is entirely suitable for its intended purpose, but can be expanded with the packages available from the Raspbian (and other) repositories.
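
The upgrade itself is the standard Debian incantation:

    sudo apt-get update     # refresh the package lists
    sudo apt-get upgrade    # download and install the updated packages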

NOOBS

Of course there are other distribution choices to run on the RPi. In early June, the Raspberry Pi Foundation (the organization behind the device) announced the "New Out Of Box Software" (NOOBS) installer that makes it much easier to get started. The NOOBS zip file needs to be downloaded and unpacked onto a 4G or larger SD card, but once that's done, multiple distributions can be installed without needing network access or requiring special imaging software to put a boot image onto the card.
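
Preparing the card is deliberately low-tech; a minimal sketch, assuming the card's FAT32 partition is already mounted at /mnt/sd (the zip file name is hypothetical):

    # no imaging tool needed: just unpack the NOOBS zip onto the card
    unzip NOOBS_v1_2.zip -d /mnt/sd
    sync    # flush all writes before removing the card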

NOOBS acts like a recovery image, in that it will prompt to install one of several distributions on first boot, but it is always available by holding down the shift key when booting. You can overwrite the existing distribution on the card to recover from a corrupted installation or to switch to one of the others. In addition, it has a tool to edit the config.txt system configuration file for the currently installed distribution or to open a browser to get help right from NOOBS.

Using NOOBS is meant to be easy, and it was—once I got it to boot. My choice of a VGA monitor (thus an HDMI-to-VGA converter) meant that I needed a development version of NOOBS and the config.txt from Raspbian.

NOOBS provides images for several different distributions: Arch Linux ARM, OpenELEC, Pidora, Raspbian (which is recommended), RaspBMC, and RISC OS. OpenELEC and RaspBMC are both XBMC-based media-centric distributions, while Arch Linux ARM, Raspbian, and Pidora are derived from their siblings in the desktop/server distribution world. RISC OS is the original operating system for Acorn computers that used the first ARM processors. It is a proprietary operating system (with source) that is made available free of charge for RPi users.

[Pidora]

Installing Pidora using NOOBS was simple, though it took some time for NOOBS to copy the distribution image to a separate SD card partition. Pidora seems to use the video mode information from the NOOBS config.txt, as there were no problems on that score. Using startx appears to default to GNOME (which is not even installed), so the desktop wouldn't start up; switching the default desktop to Xfce in /etc/sysconfig/desktop may be required. Once installed, booting gives a choice of NOOBS (by holding down the shift key) or Pidora (or whatever other distribution is installed). It is a fully functional installation, not LiveCD-style, so there is a writable ext4 partition to store programs and other data (like the screen shot at right) or to add and update packages on the system.
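
A sketch of the Xfce workaround mentioned above, assuming Fedora's /etc/sysconfig/desktop convention (the exact variables honored have varied between releases):

    # /etc/sysconfig/desktop
    DESKTOP="XFCE"                     # session choice for display managers
    PREFERRED="/usr/bin/startxfce4"    # session started by startx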

There are a lot of people and projects using the RPi for various interesting things. The front page blog at the RPi home page is regularly updated with stories about things like an RPi lab in Ghana, a sailing robot using an RPi for navigation and control, and the Onion Pi, a Tor proxy running on an RPi. In his PyCon keynote, Upton listed numerous projects that have adopted the RPi for everything from music synthesizers to art installations and aerial photography from weather balloons.

The RPi is being used to research and test new technologies as well. There are plans afoot to switch from X to the Wayland display server protocol, which will make the RPi a useful testing ground for Wayland and Weston. Beyond that, the foundation has been helping to fund PyPy, the Python interpreter written in (a restricted subset of) Python, as a way to improve the performance of that language on the device.

It seems that some combination of capabilities, community, and, perhaps, marketing has led to the RPi's popularity. The original focus, a portable and easy-to-use platform for learning programming, has widened far beyond that niche. The result is an ecosystem of companies selling accessories for the RPi (cases, add-ons for controlling other devices using the GPIO pins, sensors, and so on). But it is probably the "fun" aspect that provides the biggest push behind the RPi's momentum—the system really does hearken back to the days of TRS-80s and other 8-bit computers, but with color, sound, video, and a lot more power.

Comments (35 posted)

Dividing the Linux desktop

By Jonathan Corbet
June 17, 2013
The Ubuntu desktop has been committed to the Unity shell for some time; more recently, Canonical also announced that Ubuntu will be moving over to the new, in-house Mir display server. That decision raised a number of eyebrows at the time, given that most of the desktop Linux community had long since settled on Wayland as its way forward. As time passes, though, the degree to which Canonical is breaking from the rest of the community is becoming increasingly clear. The Linux desktop could never be described as being "unified," but the split caused by projects like Mir and SurfaceFlinger may prove to be more profound than the desktop wars of the past.

Former Canonical developer Jonathan Riddell started the most recent discussion with some worries about the future of Kubuntu, the KDE-based flavor of the Ubuntu distribution. KDE does not currently run on Mir, and some KDE developers (such as KWin lead developer Martin Gräßlin) have made it clear that they are not interested in adding Mir support. So Ubuntu will be shipping with a display server that does not support KDE in any sort of native mode. While libraries providing X and Wayland protocol support for Mir will certainly exist, they are unlikely to provide the level of functionality needed by desktop components like the KDE core. The result, Riddell said, was that "the switch to Mir in Ubuntu seems pretty risky for the existence of Kubuntu"; he wondered about how other Ubuntu flavors might be affected as well.

Unsurprisingly, the developers working on Mir insist that they do not want to throw out the non-Unity desktop environments. Ubuntu community manager Jono Bacon was quick to say:

[I]t would be a failing of the Mir project if it meant that flavors could no longer utilize Ubuntu as a foundation, but this is going to require us to collaborate to find good solutions.

In other words, Canonical has a certain willingness to help make other desktop environments work on Mir, but it will take some effort from the developers of those environments as well. More specifically, Thomas Voß has offered to work with the developers of KWin (the KDE window manager) to find ways to make it work within the Mir environment. Assuming that a path forward is found, it is entirely possible that Kubuntu will be able to run under Mir on an Ubuntu-based system.

The problem is that such solutions are likely to be second-class citizens in general, and there are reasons to believe that the problem could be more acute in this case. The Mir specification does not describe it as a display server for all desktop environments; instead, it says "The purpose of Mir is to enable the development of the next generation Unity." There are a number of implications that come out of a position like that, not the least of which being that Mir and Unity appear to be developed in lockstep with no particular effort to standardize the protocol between them.

Canonical developer Christopher Halse Rogers described the situation in fairly direct terms:

We kinda have an explicit IPC protocol, but not really. We don't intend to support re-implementations of the Mir client libraries, and will make no effort to not break them if someone tries.

This position differs significantly from that of the Wayland project, which has based itself on a stable protocol specification. Leaving the system "protocol-agnostic" (that's the term used in the Mir specification) certainly gives a lot of freedom to the Mir/Unity developers, who can quickly evolve the system as a whole. But it can only make life difficult for developers of any other system who will not have the same level of access to Mir development and who might like a bit more freedom to mix and match different versions of the various components.

The result of this approach to development may well be that Mir support from desktop environments other than Unity ends up being half-hearted at best; it cannot be a whole lot of fun to develop for a display server that exists primarily to support a competing system. Few other distributions have shown interest in using Mir, providing another disincentive for developers. So, as the X Window System starts to fade away into the past, Ubuntu looks to be left running a desktop stack that is not used to any significant degree anywhere else. Ubuntu, increasingly, will be distinct from other distributions, including the Debian distribution on which it is based.

The success of Android (which uses its own display server called SurfaceFlinger) shows that reimplementing the stack can be a workable strategy. But there must certainly be a limit to how many of these reimplementations can survive in the long run, and the resources required to sustain this development are significant. Canonical is taking a significant risk by separating from the rest of the graphics development community in this way.

Over the many years of its dominance, X has been both praised and criticized from many angles. But, perhaps, we have not fully appreciated the degree to which the X Window System has served as a unifying influence across the Linux desktop environment. Running one desktop environment did not preclude using applications from a different project; in the end, they all talked to the X server, and they all worked well (enough). Over the next few years we will see the process of replacing X speed up, but there does not appear to be any single replacement that can take on the same unifying role. We can expect the desktop environment to fragment accordingly. Indeed, that is already happening; very few of us run Android applications on our desktop Linux systems.

"Fragmentation" is generally portrayed as a bad thing, and it certainly can be; the proprietary changes made by each Unix vendor contributed to the decline of proprietary Unix as a whole. But we should remember that the divergence we are seeing now is all happening with free software. That means that a lot of experimentation can go on, with the best ideas being easily copied from one project to the next, even if licensing differences will often prevent the movement of the code itself. If things go well, we will see a quicker exploration of the space than we would have under a single project and a lot of innovation. But the cost may be a long period where nothing is as well-developed or as widely supported as we might like it to be.

Comments (234 posted)

Pencil, Pencil, and Pencil

By Nathan Willis
June 18, 2013

An unfortunate drawback to the scratch-your-own-itch development model on which many free software projects depend is that creators can lose interest. Without a maintainer, code gets stale and users are either stranded or simply jump ship to a competing project. If the community is lucky, new developers pick up where the old ones left off, and a project may be revived or even driven to entirely new levels of success. On the other hand, it is also possible for multiple people to start their own forks of the code base, which can muddy the waters in a hurry—as appears to be happening at the moment with the 2D animation tool Pencil. Plenty of people want to see it survive, which has resulted in a slew of individual forks.

Pencil, for those unfamiliar with it, is a "cell animation" application, which means that it implements old-fashioned animation drawn frame by frame (although obviously the software helps out considerably compared to literally drawing each frame from scratch). In contrast, the other popular open source animation tools Tupi and Synfig are vector-based, where motion comes from interpolating and transforming vector objects over a timeline. Despite its old-fashioned ambiance, though, Pencil has proven itself to be a popular tool, particularly for fast prototyping and storyboarding, even when the animator may create the final work in a vector application.

Original Pencil maintainer Pascal Naidon drifted away from the project by 2009. At that time, the latest release was version 0.4.4, but there were newer, unpackaged updates in the Subversion source repository. Version 0.4.4 eventually started showing signs of bit-rot, particularly as newer versions of the Qt framework (against which Pencil is built) came out. Users of the application, however, have continued to maintain a community on the official site's discussion forum.

A box of forks

Understandably, there were never a ton of Pencil users, at least as compared to a general-purpose desktop application. But the dormant project picked up a dedicated follower when the Morevna Project, an open source anime movie project, adopted it for its workflow. Morevna's Konstantin Dmitriev began packaging his own fork of Pencil in late 2009, based on the latest official Subversion code. He added keybindings for commands and command-line options to integrate Pencil with Morevna's scripted rendering system, and fixed a number of bugs. Subsequently, he began adding new features as well: user-selectable frame rates, some new editing tools, and support for multiple-layer "onion skinning." Onion skinning in animation is the UI technique of overlaying several (usually translucent) frames onto the current drawing, so the animator can visualize motion. There are also a lot of bug fixes in the Morevna fork that deal with audio/video import and export, since the team uses Pencil to generate fill-in sequences for unfinished shots. Since Morevna is a Linux-based effort, only Linux packages are available, and they are still built against Qt 4.6.

In contrast, the Pencil2D fork started by Chris Share eschewed new features and focused squarely on fixing up the abandoned Subversion code for all three major desktop OSes. Share's fork is hosted at SourceForge. One of the fixes he applied was updating the code for Qt 5, but that decision caused major problems when Qt 5 dropped support for pressure-sensitive Wacom pen tablets, which are a critical tool for animators. In early June 2013, Matt Chang started his own fork, also at the Pencil2D site, using Qt 4.8.4. Whether Share's fork hit a brick wall with the Qt 5 port or simply stagnated for other reasons is unclear; Chang's fork is still active, to the point where he has posted a roadmap on the Pencil2D forum and is taking feature suggestions. Chang has only released binaries for Windows, but he believes the code will run on Linux and OS X as well, and maintains it for all three.

Both of the forks at Pencil2D headed off on their own, rather than working with Dmitriev's Morevna fork. More to the point, Chang's roadmap includes a different set of drawing tools and a separate implementation of function keybindings. Luckily, the two forks' editing tool additions do not conflict; the Morevna fork adds a "duplicate this frame" button and controls for moving layers, while Chang's includes object transformations and canvas rotation.

In contrast to the other Pencil forks, the Institute for New Media Art Technology (Numediart) at the University of Mons took its fork in an entirely different direction as part of its "Eye-nimation" project. Eye-nimation is used to produce stop-motion animation. Numediart's Thierry Ravet integrated support for importing images directly from a USB video camera into Pencil, where the images can be traced or otherwise edited. It uses the OpenCV library to grab live input from the camera, and adds image filters to reduce the input to black and white (bi-level, not grayscale) and smooth out pixelation artifacts. Ravet spoke about the project at Libre Graphics Meeting in April. The work is cross-platform, although it is built on top of an earlier release of the original Pencil code, 0.4.3.

As if three concurrent forks were not enough, many Linux distributions still package the final official release from the original project, 0.4.4. And there are several independent Pencil users who maintain their own builds of the unreleased Subversion code, some of which refer to it as version 0.5.

Sharpening up

On the off chance that one might lose count, the total currently stands at five versions of Pencil: the final release from the original maintainer (0.4.4), the unreleased Subversion update, the Morevna fork, Chang's Pencil2D fork, and Numediart's Eye-nimation. The situation is a source of frustration for fans of the program, but how to resolve it is still quite up in the air. Dmitriev maintains the Morevna fork for utilitarian reasons (to get things done for Morevna); his preference is to work on Synfig, and he does not have the time to devote to maintaining Pencil in the long run, too. Chang does seem interested in working on Pencil and in maintaining his fork as an open project that is accessible to outside contributors.

But combining the efforts could be a substantial headache. The Morevna fork is considerably further along, but Chang has already refactored his fork enough that merging the two (in either direction) would be non-trivial, to say the least. And it is not clear whether the Eye-nimation feature set is something that other Pencil users want; Dmitriev expressed some interest in it in his post-LGM blog report, though he was concerned that Numediart had not based its work on the Morevna fork.

The primary competition for Pencil is the prospect that cell-animation support will be added to another program. Krita has a Google Summer of Code (GSoC) student working on the feature (in addition to the partial support already added), while Dmitriev said in a private email that he hopes one day to implement cell-animation features in Synfig. If either effort bears fruit, that would be a positive development, but in the near term, news of something like a GSoC project can sap energy from existing efforts even though it might ultimately fall short.

It is a fairly common problem in the free software community for a well-liked project to fizzle out because the maintainers can no longer spend the time required to develop it and no one else steps up. It is rarer for multiple parties to independently take up the mantle and produce competing derivatives—especially in an obscure niche like traditional animation software. But when that does happen, the surplus energy, if it remains divided, can still end up doing little to revitalize a project that many users want to see make a return.

Comments (11 posted)

Page editor: Nathan Willis

Security

Tor peels back Browser Bundle 3.0 alpha

By Nathan Willis
June 19, 2013

The Tor project has now posted the first alpha builds of the soon-to-be-released Tor Browser Bundle 3.0, which provides a newer and faster anonymous-browsing experience than previous editions and revamps a number of interface settings for simplicity. Tor's architecture can be on the confusing side for many people, so (in theory) improved ease-of-use translates into fewer accidentally-insecure browsing sessions. The project is also taking the first steps toward other important features, like a means for verifying binary builds.

The browser at the heart of the Tor Browser Bundle is a derivative of Firefox; the 3.0 release will be based on Firefox 17 Extended Support Release (ESR). It incorporates several changes from the upstream Firefox, including settings and extensions that guard the user's anonymity and a pre-configured pipeline to the anonymizing Tor network. In addition to piping all browser traffic through Tor, the bundle includes the HTTPS Everywhere extension to force TLS/SSL connections to a wide variety of sites, NoScript to selectively disable JavaScript and other executable content, and Torbutton for one-click toggling of Tor transport.

The new bundles are available on the Tor site. There are packages for OS X and Windows as well as both 32-bit and 64-bit Linux systems, all in a variety of localizations. The Linux builds are compressed tar archives; they can be uncompressed to virtually any location and run with standard user permissions.
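
On Linux, getting started looks roughly like the following; the archive and directory names are illustrative (they vary by version and locale), not taken from the actual release:

    tar xf tor-browser-linux64-3.0-alpha-1_en-US.tar.xz
    cd tor-browser_en-US
    ./start-tor-browser    # runs as an ordinary user; no installation step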

Previous releases of the bundle included Vidalia, a standalone Tor controller which allowed the user to start and stop the Tor network connection, as well as tweak its settings. In the 3.0 browser series, Vidalia has been replaced with a Tor Launcher browser extension, which performs the same basic function. Users who require more customization can still run Vidalia separately. As such, there is a tad less "bundle" to the new Tor Browser Bundle, but there is also less complexity to fret over.

This streamlining of the user experience is evidently a conscious decision on the project's part; it is mentioned first in the blog announcement of the alpha. But there is more. The new release also includes a new default home page, a local about:tor URI.

[Tor Browser about:tor]

This page provides Tor status information, a "secure search" bar utilizing the Startpage search engine, and links to some informational resources about both privacy and how to get more involved in the Tor project. Perhaps the biggest difference, though, is that this page reports whether or not Tor has been successfully started.

This has the potential to be an important change for users in the field; the old version of the browser was set to visit https://check.torproject.org/ as the default home page. While that page, too, checks that Tor is running, it has the drawback of doing so by immediately requesting a remote page, and that could be a security risk for those users who run the Tor browser to evade surveillance. After all, if Tor is not running for some reason when the browser launches, the request goes out over the user's regular connection, where an observer can see the attempt to contact the Tor project. In addition, although Tor has greatly improved its bandwidth in recent years, connecting to a remote site could be slow. The about:tor page performs a local test to ensure that Tor is in fact functioning, and check.torproject.org is still accessible as a link.

The Tor Launcher extension also fires up a "first run" wizard the first time it is run (obviously) that asks whether the user's Internet connection is "clear of obstacles" or is "censored, filtered, or proxied." Choosing the first option launches Tor in normal mode without any special settings; choosing the second provides a set of settings windows into which one can enter proxy addresses, open firewall ports that Tor should use, and bridge relay addresses to which Tor should connect. Manually entering bridge relay addresses is an added security layer; the addresses are not published, so they are much harder for censors to monitor or block in advance. On the other hand, one must obtain the addresses "out of band," so to speak—usually by emailing the Tor project.

[Tor Browser launcher]

The first-run wizard is a nice feature, although it is puzzling why it is configured to run only one time; after all, it is surely fairly common for Tor Browser users to run the software from a laptop that moves between networks. The user can get to the wizard again by punching the "Options" button on the "Tor is starting up" window that appears when the browser is launched, but on anything resembling modern hardware speed is required: on my machine, the startup window only appeared for 1.5 seconds at most. Alternatively, resetting the extensions.torlauncher.prompt_at_startup preference to "true" in about:config brings it back as well; it is simply odd not to have a setting available.
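
For those who would rather not click through about:config, the same preference can be set with a user.js entry in the bundled browser's profile (user.js is a standard Firefox mechanism; the profile's location within the bundle is not documented here):

    // user.js -- make the Tor Launcher wizard appear again at the next startup
    user_pref("extensions.torlauncher.prompt_at_startup", true);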

There are other changes to the 3.0 alpha builds, including a "guided" extraction for Windows users, which helps the user install the browser in a convenient and hopefully difficult-to-forget location on the system, and overall reductions in the sizes of the downloaded packages. All builds are now less than 25MB, a size chosen because it makes it possible to send the package as an attachment in Gmail.

The announcement also highlights a change in the project's build infrastructure. The Tor Browser Bundle is now built with the Gitian trusted-build tool, which is designed to allow independent developers to compile bit-identical binaries, thus providing a means for verifying the integrity of a binary package. The Tor Browser is not yet "quite at the point where you always get a matching build," the announcement says, but it is getting closer. Gitian is already in use by a handful of other projects, such as Bitcoin.
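
The point of bit-identical builds is that verification reduces to comparing hashes: anyone can rebuild the bundle from the same sources and check that the result matches the published binary. A sketch, with a hypothetical file name:

    # hash a locally built bundle; it should match, byte for byte,
    # the hash reported by an independent builder
    sha256sum tor-browser-linux64-3.0-alpha-1_en-US.tar.xz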

As a browser, naturally, the Tor Browser is quite solid. The update to Firefox 17 ESR brings with it a host of improved web features—although one notable addition, Firefox's built-in PDF viewer, was not introduced until Firefox 19, so its functionality in Tor Browser comes via the official add-on instead. The PDF reader extension is (like more and more Mozilla projects) implemented in JavaScript. But users will inevitably find using Tor Browser a somewhat frustrating affair simply because of how many sites these days rely on JavaScript and other potentially-privacy-harming techniques. There is no silver bullet for that problem; the best one can do is delve into NoScript exception rules to restore functionality for specific, trusted sites.

There does not appear to be a full list of the preferences that Tor Browser changes from the upstream Firefox release, although there are several (e.g., it is set to never record browsing history or save passwords). It is also a bit strange that the bundled extensions do not include a cookie-management tool, but perhaps this is in the interest of simplicity for the user. Finally, it is also surprising that the builds offer no tools for finding Tor hidden services. Hidden services are not directly related to anonymous access to the Internet, but the project does use the browser bundle to promote other efforts, like SSL Observatory, which is included in the HTTPS Everywhere extension. Still, perhaps providing any sort of hidden-service index would simply be crossing into services best left to others.

So far there are few known issues to report, but there will certainly be some during the alpha and beta testing cycle. The only real caveat for power users is that the increased simplicity of the bundle means less flexibility. The absence of Vidalia has already been mentioned; one can also run the browser with an existing transparent Tor router (a feature that in previous releases was explicitly presented to the user) by jumping through some hoops. Using the browser with a transparent router now requires setting the TOR_SKIP_LAUNCH environment variable to 1. Of course, with a Tor router already running, adding the Tor Browser to the mix essentially just gives the user Firefox with fewer extensions and plugins, but perhaps that is desirable from time to time. Then again, where anonymity is concerned, maybe you can't be too careful.
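
The hoop-jumping amounts to something like this (the launcher script name is assumed, as above):

    # skip launching the bundled Tor and use the already-running router instead
    TOR_SKIP_LAUNCH=1 ./start-tor-browser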

Comments (7 posted)

Brief items

Security quotes of the week

For the past several years, we've been seeing a steady increase in the weaponization, stockpiling, and the use of exploits by multiple governments, and by multiple *areas* of multiple governments. This includes weaponized exploits specifically designed to "bridge the air gap", by attacking software/hardware USB stacks, disconnected Bluetooth interfaces, disconnected Wifi interfaces, etc. Even if these exploits themselves don't leak (ha!), the fact that they are known to exist means that other parties can begin looking for them.

In this brave new world, without the benefit of anonymity to protect oneself from such targeted attacks, I don't believe it is possible to keep a software-based GPG key secure anymore, nor do I believe it is possible to keep even an offline build machine secure from malware injection anymore, especially against the types of adversaries that Tor has to contend with.

Mike Perry

For instance, did you know that it is a federal crime to be in possession of a lobster under a certain size? It doesn't matter if you bought it at a grocery store, if someone else gave it to you, if it's dead or alive, if you found it after it died of natural causes, or even if you killed it while acting in self defense. You can go to jail because of a lobster.

If the federal government had access to every email you've ever written and every phone call you've ever made, it's almost certain that they could find something you've done which violates a provision in the 27,000 pages of federal statutes or 10,000 administrative regulations. You probably do have something to hide, you just don't know it yet.

Moxie Marlinspike (Thanks to Paul Wise.)

Many of you have seen my talk about medical devices and general software safety [YouTube]. In fact, I'm up in the Boston area, having given a similar talk yesterday at the Women's Leadership Community Luncheon alongside the Red Hat Summit. Well, I seem to have gotten through, at least a little! While I was giving the talk yesterday, the FDA finally admitted that there is a big problem. In their Safety Communication, the FDA says that medical devices can be vulnerable to attack. They recommend that manufacturers assure that appropriate safeguards are in place to prevent security attacks on devices, though they do not recommend how this should be accomplished.
Karen Sandler (ICS-CERT alert.)

Comments (11 posted)

New vulnerabilities

autotrace: denial of service

Package(s):autotrace CVE #(s):CVE-2013-1953
Created:June 19, 2013 Updated:July 9, 2013
Description: From the Red Hat bugzilla:

A buffer overflow flaw was reported in autotrace's input_bmp_reader() function. When autotrace is compiled with FORTIFY_SOURCE, this is caught and turned into a simple denial of service.

Alerts:
Fedora FEDORA-2013-12032 autotrace 2013-07-09
Fedora FEDORA-2013-11904 autotrace 2013-07-09
Mandriva MDVSA-2013:190 autotrace 2013-07-02
Mageia MGASA-2013-0195 autotrace 2013-07-01
openSUSE openSUSE-SU-2013:1049-1 autotrace 2013-06-19
openSUSE openSUSE-SU-2013:1044-1 autotrace 2013-06-19

Comments (none posted)

dbus: denial of service

Package(s):dbus CVE #(s):CVE-2013-2168
Created:June 13, 2013 Updated:August 23, 2013
Description:

From the Debian announcement:

Alexandru Cornea discovered a vulnerability in libdbus caused by an implementation bug in _dbus_printf_string_upper_bound(). This vulnerability can be exploited by a local user to crash system services that use libdbus, causing denial of service. Depending on the dbus services running, it could lead to complete system crash.

Alerts:
openSUSE openSUSE-SU-2014:1239-1 dbus-1 2014-09-28
Gentoo 201308-02 dbus 2013-08-22
Slackware SSA:2013-191-01 dbus 2013-07-10
openSUSE openSUSE-SU-2013:1118-1 dbus-1 2013-07-02
Fedora FEDORA-2013-11198 dbus 2013-06-27
Mandriva MDVSA-2013:177 dbus 2013-06-25
Mageia MGASA-2013-0173 dbus 2013-06-18
Ubuntu USN-1874-1 dbus 2013-06-13
Debian DSA-2707-1 dbus 2013-06-13

Comments (none posted)

fail2ban: denial of service

Package(s):fail2ban CVE #(s):CVE-2013-2178
Created:June 17, 2013 Updated:March 10, 2014
Description: From the Debian advisory:

Krzysztof Katowicz-Kowalewski discovered a vulnerability in fail2ban, a log-monitoring system which can act on attacks by blocking hosts from connecting to specified services using the local firewall.

When using fail2ban to monitor Apache logs, improper input validation in log parsing could enable a remote attacker to trigger an IP ban on arbitrary addresses, thus causing a denial of service.

Alerts:
Gentoo 201406-03 fail2ban 2014-06-01
openSUSE openSUSE-SU-2014:0493-1 fail2ban 2014-04-08
openSUSE openSUSE-SU-2014:0348-1 fail2ban 2014-03-08
openSUSE openSUSE-SU-2013:1121-1 fail2ban 2013-07-02
openSUSE openSUSE-SU-2013:1120-1 fail2ban 2013-07-02
Mandriva MDVSA-2013:191 fail2ban 2013-07-02
Mageia MGASA-2013-0192 fail2ban 2013-07-01
Fedora FEDORA-2013-10830 fail2ban 2013-06-28
Fedora FEDORA-2013-10806 fail2ban 2013-06-28
Debian DSA-2708-1 fail2ban 2013-06-16

Comments (none posted)

gallery3: insecure URL handling

Package(s):gallery3 CVE #(s):CVE-2013-2138
Created:June 14, 2013 Updated:June 19, 2013
Description:

From the Fedora bug:

A security flaw was found in the way uploadify and flowplayer SWF files handling functionality of Gallery version 3, an open source project with the goal to develop and support leading photo sharing web application solutions, processed certain URL fragments passed to these files (certain URL fragments were not stripped properly when these files were called via direct URL request(s)). A remote attacker could use this flaw to conduct replay attacks.

Alerts:
Fedora FEDORA-2013-10138 gallery3 2013-06-14
Fedora FEDORA-2013-10168 gallery3 2013-06-14

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2013-2851 CVE-2013-2148 CVE-2013-2140 CVE-2013-2147 CVE-2013-2852 CVE-2013-2164
Created:June 13, 2013 Updated:November 1, 2013
Description:

From the Fedora advisory:

Bug #969515 - CVE-2013-2851 kernel: block: passing disk names as format strings https://bugzilla.redhat.com/show_bug.cgi?id=969515

Bug #971258 - CVE-2013-2148 Kernel: fanotify: info leak in copy_event_to_user https://bugzilla.redhat.com/show_bug.cgi?id=971258

Bug #971146 - CVE-2013-2140 kernel: xen: blkback: insufficient permission checks for BLKIF_OP_DISCARD https://bugzilla.redhat.com/show_bug.cgi?id=971146

Bug #971242 - CVE-2013-2147 Kernel: cpqarray/cciss: information leak via ioctl https://bugzilla.redhat.com/show_bug.cgi?id=971242

Bug #969518 - CVE-2013-2852 kernel: b43: format string leaking into error msgs https://bugzilla.redhat.com/show_bug.cgi?id=969518

Bug #973100 - CVE-2013-2164 Kernel: information leak in cdrom driver https://bugzilla.redhat.com/show_bug.cgi?id=973100

Alerts:
SUSE SUSE-SU-2015:0812-1 kernel 2015-04-30
openSUSE openSUSE-SU-2014:0766-1 Evergreen 2014-06-06
Debian DSA-2906-1 linux-2.6 2014-04-24
SUSE SUSE-SU-2014:0536-1 Linux kernel 2014-04-16
Red Hat RHSA-2014:0284-01 kernel 2014-03-11
openSUSE openSUSE-SU-2013:1971-1 kernel 2013-12-30
Oracle ELSA-2014-3002 kernel 2014-02-12
Mageia MGASA-2013-0375 kernel-vserver 2013-12-18
Mageia MGASA-2013-0373 kernel-tmb 2013-12-18
Mageia MGASA-2013-0374 kernel-rt 2013-12-18
Mageia MGASA-2013-0372 kernel-linus 2013-12-18
Mageia MGASA-2013-0371 kernel 2013-12-17
Scientific Linux SLSA-2013:1645-2 kernel 2013-12-16
Ubuntu USN-2050-1 linux-ti-omap4 2013-12-07
Red Hat RHSA-2013:1783-01 kernel 2013-12-05
Ubuntu USN-2039-1 linux-ti-omap4 2013-12-03
Ubuntu USN-2038-1 kernel 2013-12-03
Oracle ELSA-2013-2585 kernel 2013-11-28
Oracle ELSA-2013-2585 kernel 2013-11-28
openSUSE openSUSE-SU-2013:1773-1 kernel 2013-11-26
Red Hat RHSA-2013:1645-02 kernel 2013-11-21
Ubuntu USN-2018-1 linux-ti-omap4 2013-11-08
Ubuntu USN-2020-1 linux-lts-raring 2013-11-08
Ubuntu USN-2023-1 kernel 2013-11-08
Ubuntu USN-2017-1 kernel 2013-11-08
Ubuntu USN-2015-1 kernel 2013-11-08
Ubuntu USN-2016-1 EC2 kernel 2013-11-08
CentOS CESA-2013:X012 Xen4CentOS kernel 2013-11-06
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:1619-1 kernel 2013-11-01
Red Hat RHSA-2013:1450-01 kernel 2013-10-22
Ubuntu USN-1999-1 linux-ti-omap4 2013-10-21
Ubuntu USN-1997-1 linux-ti-omap4 2013-10-21
Ubuntu USN-1994-1 linux-lts-quantal 2013-10-21
Ubuntu USN-1996-1 kernel 2013-10-21
Debian DSA-2766-1 linux-2.6 2013-09-27
SUSE SUSE-SU-2013:1474-1 Linux kernel 2013-09-21
SUSE SUSE-SU-2013:1473-1 Linux kernel 2013-09-21
Oracle ELSA-2013-2543 kernel 2013-08-29
Debian DSA-2745-1 kernel 2013-08-28
Oracle ELSA-2013-1166 kernel 2013-08-22
CentOS CESA-2013:X007 Xen4CentOS kernel 2013-08-22
Oracle ELSA-2013-1166 kernel 2013-08-22
Scientific Linux SLSA-2013:1166-1 kernel 2013-08-21
CentOS CESA-2013:1166 kernel 2013-08-21
Red Hat RHSA-2013:1166-01 kernel 2013-08-20
Ubuntu USN-1934-1 linux-ti-omap4 2013-08-20
Ubuntu USN-1933-1 linux-ti-omap4 2013-08-20
Ubuntu USN-1930-1 linux-ti-omap4 2013-08-20
Ubuntu USN-1936-1 linux-lts-raring 2013-08-20
Ubuntu USN-1931-1 linux-lts-quantal 2013-08-20
Ubuntu USN-1935-1 kernel 2013-08-20
Ubuntu USN-1932-1 kernel 2013-08-20
Ubuntu USN-1929-1 kernel 2013-08-20
Ubuntu USN-1920-1 linux-ti-omap4 2013-07-30
Ubuntu USN-1918-1 linux-ti-omap4 2013-07-29
Ubuntu USN-1916-1 linux-lts-raring 2013-07-29
Ubuntu USN-1915-1 linux-lts-quantal 2013-07-29
Ubuntu USN-1919-1 kernel 2013-07-29
Ubuntu USN-1917-1 kernel 2013-07-29
Ubuntu USN-1914-1 kernel 2013-07-29
Ubuntu USN-1912-1 kernel 2013-07-29
Ubuntu USN-1913-1 EC2 kernel 2013-07-29
Oracle ELSA-2013-2538 kernel 2013-07-18
Oracle ELSA-2013-2538 kernel 2013-07-18
Oracle ELSA-2013-2537 kernel 2013-07-18
Oracle ELSA-2013-2537 kernel 2013-07-18
Scientific Linux SL-kern-20130717 kernel 2013-07-17
CentOS CESA-2013:X002 kernel 2013-07-17
Oracle ELSA-2013-1051 kernel 2013-07-16
CentOS CESA-2013:1051 kernel 2013-07-17
Red Hat RHSA-2013:1080-01 kernel 2013-07-16
Red Hat RHSA-2013:1051-01 kernel 2013-07-16
Mandriva MDVSA-2013:194 kernel 2013-07-11
Mageia MGASA-2013-0212 kernel-vserver 2013-07-16
Mageia MGASA-2013-0213 kernel-tmb 2013-07-16
Mageia MGASA-2013-0209 kernel-tmb 2013-07-16
Mageia MGASA-2013-0215 kernel-rt 2013-07-16
Mageia MGASA-2013-0211 kernel-rt 2013-07-16
Mageia MGASA-2013-0214 kernel-linus 2013-07-16
Mageia MGASA-2013-0210 kernel-linus 2013-07-16
Mageia MGASA-2013-0204 kernel 2013-07-09
Mageia MGASA-2013-0203 kernel 2013-07-06
Ubuntu USN-1900-1 linux-ec2 2013-07-04
Ubuntu USN-1899-1 linux 2013-07-04
Ubuntu USN-1947-1 linux-lts-quantal 2013-09-06
Ubuntu USN-1943-1 linux-lts-raring 2013-09-06
Fedora FEDORA-2013-9123 kernel 2013-07-01
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Red Hat RHSA-2013:1264-01 kernel-rt 2013-09-16
Ubuntu USN-1946 linux-ti-omap4 2013-09-06
Ubuntu USN-1945-1 linux-ti-omap4 2013-09-06
Ubuntu USN-1941-1 kernel 2013-09-06
CentOS CESA-2013:0620 kernel 2013-06-21
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Ubuntu USN-1942-1 linux-ti-omap4 2013-09-06
Ubuntu USN-1944-1 kernel 2013-09-06
Ubuntu USN-1938-1 kernel 2013-09-05
Oracle ELSA-2013-2542 kernel 2013-08-29
Oracle ELSA-2013-2542 kernel 2013-08-29
Oracle ELSA-2013-2543 kernel 2013-08-29
Fedora FEDORA-2013-10695 kernel 2013-06-13

Comments (none posted)

kernel: denial of service

Package(s):linux CVE #(s):CVE-2013-2146
Created:June 14, 2013 Updated:June 19, 2013
Description:

From the Ubuntu advisory:

A flaw was discovered in the Linux kernel's perf events subsystem for Intel Sandy Bridge and Ivy Bridge processors. A local user could exploit this flaw to cause a denial of service (system crash).

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
CentOS CESA-2013:1173 kernel 2013-08-28
Red Hat RHSA-2013:1173-01 kernel 2013-08-27
Oracle ELSA-2013-1173 kernel 2013-08-27
Scientific Linux SLSA-2013:1173-1 kernel 2013-08-28
Red Hat RHSA-2013:1264-01 kernel-rt 2013-09-16
Mandriva MDVSA-2013:176 kernel 2013-06-24
Red Hat RHSA-2013:1195-01 kernel 2013-09-03
Ubuntu USN-1882-1 linux-ti-omap4 2013-06-14
Ubuntu USN-1881-1 linux 2013-06-14
Ubuntu USN-1880-1 linux-lts-quantal 2013-06-14
Ubuntu USN-1879-1 linux-ti-omap4 2013-06-14
Ubuntu USN-1878-1 linux 2013-06-14

Comments (none posted)

kernel: information disclosure

Package(s):linux-lts-quantal CVE #(s):CVE-2013-2141
Created:June 14, 2013 Updated:June 19, 2013
Description:

From the Ubuntu advisory:

An information leak was discovered in the Linux kernel's tkill and tgkill system calls when used from compat processes. A local user could exploit this flaw to examine potentially sensitive kernel memory.

Alerts:
Oracle ELSA-2014-1392 kernel 2014-10-21
SUSE SUSE-SU-2014:0536-1 Linux kernel 2014-04-16
openSUSE openSUSE-SU-2013:1971-1 kernel 2013-12-30
Scientific Linux SLSA-2013:1801-1 kernel 2013-12-16
Oracle ELSA-2013-1801 kernel 2013-12-12
CentOS CESA-2013:1801 kernel 2013-12-13
Red Hat RHSA-2013:1801-01 kernel 2013-12-12
Oracle ELSA-2013-1292 kernel 2013-09-27
Oracle ELSA-2013-1292 kernel 2013-09-27
Debian DSA-2766-1 linux-2.6 2013-09-27
Scientific Linux SLSA-2013:1292-1 kernel 2013-09-27
Red Hat RHSA-2013:1292-01 kernel 2013-09-26
CentOS CESA-2013:1292 kernel 2013-09-27
Ubuntu USN-1900-1 linux-ec2 2013-07-04
Ubuntu USN-1899-1 linux 2013-07-04
Red Hat RHSA-2013:1264-01 kernel-rt 2013-09-16
Mandriva MDVSA-2013:176 kernel 2013-06-24
Ubuntu USN-1882-1 linux-ti-omap4 2013-06-14
Ubuntu USN-1881-1 linux 2013-06-14
Ubuntu USN-1880-1 linux-lts-quantal 2013-06-14

Comments (none posted)

nfs-utils: information disclosure

Package(s):nfs-utils CVE #(s):CVE-2013-1923
Created:June 14, 2013 Updated:December 9, 2014
Description:

From the Novell bug report:

It was reported [1],[2] that rpc.gssd in nfs-utils is vulnerable to DNS spoofing due to its dependence on PTR resolution for GSSAPI authentication. Because of this, if a user were able to poison DNS on a victim's computer, they would be able to trick rpc.gssd into talking to another server (perhaps with less security) rather than the intended server (with stricter security). If the victim has write access to the second (less secure) server, and the attacker has read access (when they normally might not on the secure server), the victim could write files to that server, which the attacker could obtain (when normally they would not be able to). To the victim this is transparent because the victim's computer asks the KDC for a ticket to the second server due to reverse DNS resolution; in this case Krb5 authentication does not fail because the victim is talking to the "correct" server.

Alerts:
Gentoo 201412-02 nfs-utils 2014-12-08
Mandriva MDVSA-2013:178 nfs-utils 2013-06-25
openSUSE openSUSE-SU-2013:1048-1 nfs-utils 2013-06-19
Mageia MGASA-2013-0178 nfs-utils 2013-06-19
openSUSE openSUSE-SU-2013:1016-1 nfs-utils 2013-06-14
openSUSE openSUSE-SU-2013:1012-1 nfs-utils 2013-06-14

Comments (none posted)

owncloud: cross-site scripting

Package(s):owncloud CVE #(s):CVE-2013-2150 CVE-2013-2149
Created:June 17, 2013 Updated:June 24, 2013
Description: From the Mandriva advisory:

Cross-site scripting (XSS) vulnerabilities in js/viewer.js inside the files_videoviewer application via multiple unspecified vectors in all ownCloud versions prior to 5.0.7 and 4.5.12 allows authenticated remote attackers to inject arbitrary web script or HTML via shared files (CVE-2013-2150).

Cross-site scripting (XSS) vulnerabilities in core/js/oc-dialogs.js via multiple unspecified vectors in all ownCloud versions prior to 5.0.7 and other versions before 4.0.16 allows authenticated remote attackers to inject arbitrary web script or HTML via shared files (CVE-2013-2149).

Alerts:
Fedora FEDORA-2013-10440 owncloud 2013-06-24
Mageia MGASA-2013-0171 owncloud 2013-06-18
Mandriva MDVSA-2013:175 owncloud 2013-06-17

Comments (none posted)

perl-Dancer: header injection

Package(s):perl-Dancer CVE #(s):CVE-2012-5572
Created:June 13, 2013 Updated:June 28, 2013
Description:

From the Red Hat Bugzilla entry:

A security flaw was found in the way Dancer.pm, a lightweight yet powerful web application framework / Perl language module, performed sanitization of values to be used for the cookie() and cookies() methods. A remote attacker could use this flaw to inject arbitrary headers into responses from (Perl) applications that use Dancer.pm.

Alerts:
Mandriva MDVSA-2013:184 perl-Dancer 2013-06-27
Mageia MGASA-2013-0183 perl-Dancer 2013-06-26
Fedora FEDORA-2013-9950 perl-Dancer 2013-06-13
Fedora FEDORA-2013-9961 perl-Dancer 2013-06-13

Comments (none posted)

perl-Module-Signature: code execution

Package(s):perl-Module-Signature CVE #(s):CVE-2013-2145
Created:June 18, 2013 Updated:October 4, 2013
Description: From the Red Hat bugzilla:

The perl Module::Signature module adds signing capabilities to CPAN modules. The 'cpansign verify' command will automatically download keys and use them to check the signature of CPAN packages via the SIGNATURE file.

The format of the SIGNATURE file includes the cipher to use to match the provided hash; for instance:

SHA1 955ba924e9cd1bafccb4d6d7bd3be25c3ce8bf75 README

If an attacker were to replace this (SHA1) with a special unknown cipher (e.g. 'Special') and were to include in the distribution a 'Digest/Special.pm', the code in this perl module would be executed when 'cpansign -verify' is run. This will execute arbitrary code with the privileges of the user running cpansign.

Because cpansign will download public keys from a public key repository, the GPG key used to sign the SIGNATURE file may also be suspect; an attacker able to modify a CPAN module distribution file and sign the SIGNATURE file with their own key only has to make their key public. cpansign will download the attacker's key, validate the SIGNATURE file as being correctly signed, but will then execute code as noted above, if the SIGNATURE file is crafted in this way.

Alerts:
Gentoo 201310-01 perl-Module-Signature 2013-10-04
openSUSE openSUSE-SU-2013:1178-1 perl-Module-Signature 2013-07-11
openSUSE openSUSE-SU-2013:1185-1 perl-Module-Signature 2013-07-12
Ubuntu USN-1896-1 libmodule-signature-perl 2013-07-03
Mandriva MDVSA-2013:185 perl-Module-Signature 2013-06-27
Mageia MGASA-2013-0184 perl-Module-Signature 2013-06-26
Fedora FEDORA-2013-10415 perl-Module-Signature 2013-06-18
Fedora FEDORA-2013-10430 perl-Module-Signature 2013-06-18

Comments (none posted)

puppet: code execution

Package(s):puppet CVE #(s):CVE-2013-3567
Created:June 19, 2013 Updated:August 22, 2013
Description: From the Ubuntu advisory:

It was discovered that Puppet incorrectly handled YAML payloads. An attacker on an untrusted client could use this issue to execute arbitrary code on the master.

Alerts:
Red Hat RHSA-2013:1284-01 ruby193-puppet 2013-09-24
Red Hat RHSA-2013:1283-01 puppet 2013-09-24
Gentoo 201308-04 puppet 2013-08-23
openSUSE openSUSE-SU-2013:1370-1 puppet 2013-08-22
SUSE SUSE-SU-2013:1304-1 puppet 2013-08-06
Mandriva MDVSA-2013:186 puppet 2013-06-28
Mageia MGASA-2013-0187 puppet 2013-06-26
Debian DSA-2715-1 puppet 2013-06-26
Ubuntu USN-1886-1 puppet 2013-06-18

Comments (none posted)

rrdtool: denial of service

Package(s):rrdtool CVE #(s):CVE-2013-2131
Created:June 18, 2013 Updated:December 15, 2014
Description: From the Fedora advisory:

This is an update that adds an explicit check of the imginfo format string. It may prevent crashes or exploits of user-space applications which pass a user-supplied format to the library call without checking.

Alerts:
openSUSE openSUSE-SU-2014:1646-1 rrdtool 2014-12-15
Fedora FEDORA-2013-10309 rrdtool 2013-06-18

Comments (none posted)

subversion: code execution

Package(s):subversion CVE #(s):CVE-2013-2088
Created:June 14, 2013 Updated:June 19, 2013
Description:

From the Novell bug report:

Subversion releases up to 1.6.22 (inclusive), and 1.7.x tags up to 1.7.10 (inclusive, but excepting 1.7.x releases made from those tags), include a contrib/ script prone to shell injection by authenticated users, which could result in arbitrary code execution.

Alerts:
Gentoo 201309-11 subversion 2013-09-23
Fedora FEDORA-2013-13672 subversion 2013-08-15
openSUSE openSUSE-SU-2013:1139-1 subversion 2013-07-04
openSUSE openSUSE-SU-2013:1006-1 subversion 2013-06-14

Comments (none posted)

wireshark: multiple vulnerabilities

Package(s):wireshark CVE #(s):CVE-2013-4075 CVE-2013-4076 CVE-2013-4077 CVE-2013-4078 CVE-2013-4082
Created:June 18, 2013 Updated:September 30, 2013
Description: From the CVE entries:

epan/dissectors/packet-gmr1_bcch.c in the GMR-1 BCCH dissector in Wireshark 1.8.x before 1.8.8 does not properly initialize memory, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-4075)

Buffer overflow in the dissect_iphc_crtp_fh function in epan/dissectors/packet-ppp.c in the PPP dissector in Wireshark 1.8.x before 1.8.8 allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-4076)

Array index error in the NBAP dissector in Wireshark 1.8.x before 1.8.8 allows remote attackers to cause a denial of service (application crash) via a crafted packet, related to nbap.cnf and packet-nbap.c. (CVE-2013-4077)

epan/dissectors/packet-rdp.c in the RDP dissector in Wireshark 1.8.x before 1.8.8 does not validate return values during checks for data availability, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2013-4078)

The vwr_read function in wiretap/vwr.c in the Ixia IxVeriWave file parser in Wireshark 1.8.x before 1.8.8 does not validate the relationship between a record length and a trailer length, which allows remote attackers to cause a denial of service (heap-based buffer overflow and application crash) via a crafted packet. (CVE-2013-4082)

Alerts:
Fedora FEDORA-2013-17635 wireshark 2013-12-19
Fedora FEDORA-2013-17661 wireshark 2013-09-28
Gentoo GLSA 201308-05:02 wireshark 2013-08-30
Gentoo 201308-05 wireshark 2013-08-28
Mageia MGASA-2013-0181 wireshark 2013-06-26
Debian DSA-2709-1 wireshark 2013-06-17

Comments (none posted)

xen: multiple vulnerabilities

Package(s):xen CVE #(s):CVE-2013-2076 CVE-2013-2077 CVE-2013-2078
Created:June 14, 2013 Updated:June 19, 2013
Description:

From the Fedora bugzilla:

On AMD processors supporting XSAVE/XRSTOR (family 15h and up), when an exception is pending, these instructions save/restore only the FOP, FIP, and FDP x87 registers in FXSAVE/FXRSTOR. This allows one domain to determine portions of the state of floating point instructions of other domains.

A malicious domain may be able to leverage this to obtain sensitive information such as cryptographic keys from another domain. (CVE-2013-2076)

Processors do certain validity checks on the data passed to XRSTOR. While the hypervisor controls the placement of that memory block, it doesn't restrict the contents in any way. Thus the hypervisor exposes itself to a fault occurring on XRSTOR. Other than for FXRSTOR, which behaves similarly, there was no exception recovery code attached to XRSTOR.

Malicious or buggy unprivileged user space can cause the entire host to crash. (CVE-2013-2077)

Processors do certain validity checks on the register values passed to XSETBV. For the PV emulation path for that instruction the hypervisor code didn't check for certain invalid bit combinations, thus exposing itself to a fault occurring when invoking that instruction on behalf of the guest.

Malicious or buggy unprivileged user space can cause the entire host to crash. (CVE-2013-2078)

Alerts:
Debian DSA-3006-1 xen 2014-08-18
SUSE SUSE-SU-2014:0446-1 Xen 2014-03-25
Gentoo 201309-24 xen 2013-09-27
openSUSE openSUSE-SU-2013:1404-1 xen 2013-09-04
SUSE SUSE-SU-2013:1314-1 Xen 2013-08-09
Mageia MGASA-2013-0197 xen 2013-07-01
openSUSE openSUSE-SU-2013:1392-1 xen 2013-08-30
SUSE SUSE-SU-2013:1075-1 Xen 2013-06-25
Fedora FEDORA-2013-10136 xen 2013-06-14
Mageia MGASA-2017-0012 xen 2017-01-09

Comments (none posted)

xml-security-c: multiple vulnerabilities

Package(s):xml-security-c CVE #(s):CVE-2013-2153 CVE-2013-2154 CVE-2013-2155 CVE-2013-2156
Created:June 19, 2013 Updated:June 28, 2013
Description: From the Debian advisory:

CVE-2013-2153: The implementation of XML digital signatures in the Santuario-C++ library is vulnerable to a spoofing issue allowing an attacker to reuse existing signatures with arbitrary content.

CVE-2013-2154: A stack overflow, possibly leading to arbitrary code execution, exists in the processing of malformed XPointer expressions in the XML Signature Reference processing code.

CVE-2013-2155: A bug in the processing of the output length of an HMAC-based XML Signature would cause a denial of service when processing specially chosen input.

CVE-2013-2156: A heap overflow exists in the processing of the PrefixList attribute optionally used in conjunction with Exclusive Canonicalization, potentially allowing arbitrary code execution.

Alerts:
Mageia MGASA-2013-0193 xml-security-c 2013-07-01
Debian DSA-2717-1 xml-security-c 2013-06-28
Debian DSA-2710-1 xml-security-c 2013-06-18

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.10-rc6, which was released on June 15. In the announcement, Linus Torvalds noted that the patch rate (226 changes since -rc5) seems to be slowing a little bit. "But even if you're a luddite, and haven't yet learnt the guilty pleasures of a git workflow, you do want to run the latest kernel, I'm sure. So go out and test that you can't find any regressions. Because we have fixes all over..."

Stable updates: The 3.9.6, 3.4.49, and 3.0.82 stable kernels were released by Greg Kroah-Hartman on June 13. The 3.2.47 stable kernel was released by Ben Hutchings on June 19.

The 3.9.7, 3.4.50, and 3.0.83 kernels are in the review process and should be expected June 20 or shortly after that.

Comments (none posted)

Quotes of the week

OK, I haven't found a issue here yet, but youss are being trickssy! We don't like trickssy, and we must find precccciouss!!!

This code is starting to make me look like Gollum.

Steven Rostedt (Your editor will tactfully refrain from comment on how he looked before).

As far as I'm concerned, everything NetWare-related is best dealt by fine folks from Miskatonic University, with all the precautions due when working with spawn of the Old Ones...
Al Viro

Besides, hamsters really are evil creatures.

Sure, you may love your fluffy little Flopsy the dwarf hamster, but behind that cute and unassuming exterior lies a calculating and black little heart.

So hamster-cursing pretty much doesn't need any excuses. They have it coming to them.

Linus Torvalds

Sure, I'll gladly accept "I can do it later" from anyone, as long as you don't mind my, "I will merge it later" as well :)
Greg Kroah-Hartman

Comments (3 posted)

Kernel development news

A power-aware scheduling update

By Jonathan Corbet
June 19, 2013
Earlier this month, LWN reported on the "line in the sand" drawn by Ingo Molnar with regard to power-aware scheduling. The fragmentation of CPU power management responsibilities between the scheduler, CPU frequency governors, and CPUidle subsystem had to be replaced, he said, by an integrated solution that put power management decisions where the most information existed: in the scheduler itself. An energetic conversation followed from that decree, and a possible way forward is beginning to emerge. But the problem remains difficult.

Putting the CPU scheduler in charge of CPU power management decisions has a certain elegance; the scheduler is arguably in the best position to know what the system's needs for processing power will be in the near future. But this idea immediately runs afoul of another trend in the kernel: actual power management decisions are moving away from the scheduler toward low-level hardware driver code. As Arjan van de Ven noted in a May Google+ discussion, power management policies for Intel CPUs are being handled by CPU-specific code in recent kernels:

We also, and I realize this might be controversial, combine the control algorithm with the cpu driver in one. The reality is that such control algorithms are CPU specific, the notion of a generic "for all cpus" governors is just outright flawed; hardware behavior is key to the algorithm in the first place.

Arjan suggests that any discussion that is based on control of CPU frequencies and voltages misses an important point: current processors have a more complex notion of power management, and they vary considerably from one hardware generation to the next. The scheduler is not the right place for all that low-level information; instead, it belongs in low-level, hardware-specific code.

There is, however, fairly widespread agreement that passing more information between the scheduler and the low-level power management code would be helpful. In particular, there is a fair amount of interest in better integration of the scheduler's load-balancing code (which decides how to distribute processes across the available CPUs) and the power management logic. The load balancer knows what the current needs are and can make some guesses about the near future; it makes sense that the same code could take part in deciding which CPU resources should be available to handle that load.

Based on these thoughts and more, Morten Rasmussen has posted a design proposal for a reworked, power-aware scheduler. The current scheduler would be split into two separate modules:

  1. The CPU scheduler, which is charged with making the best use of the CPU resources that are currently available to it.

  2. The "power scheduler," which takes the responsibility of adjusting the currently available CPU resources to match the load seen by the CPU scheduler.

The CPU scheduler will handle scheduling as it is done now. The power scheduler, instead, takes load information from the CPU scheduler and, if necessary, makes changes to the system's power configuration to better suit that load. These changes can include moving CPUs from one power state to another or idling (or waking) CPUs. The power scheduler would talk with the current frequency and idle drivers, but those drivers would remain as separate, hardware-dependent code. In this design, load balancing would remain with the CPU scheduler; it would not move to the power scheduler.
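No code has been posted yet, so any concrete interface is guesswork; purely as an illustration of the proposed division of labor, the boundary between the two modules might look something like this (every name below is invented and none of it comes from Morten's proposal):

    /* Hypothetical sketch; no names here come from the actual proposal */

    struct cpu_load_report {
        unsigned int cpu;
        unsigned int load;      /* runnable load seen by the CPU scheduler */
    };

    /* CPU scheduler -> power scheduler: describe the load being seen */
    void power_sched_report_load(const struct cpu_load_report *reports,
                                 int nr_cpus);

    /*
     * Power scheduler -> hardware: adjust the resources made available
     * to the CPU scheduler, by way of the existing frequency and idle
     * drivers.
     */
    void power_sched_set_capacity(unsigned int cpu, unsigned int capacity);
    void power_sched_idle_cpu(unsigned int cpu);  /* remove a CPU from service */
    void power_sched_wake_cpu(unsigned int cpu);  /* bring a CPU back online */

The point of the sketch is the direction of the arrows: load information flows out of the CPU scheduler, while only changes in available resources flow back in.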

Of course, there are plenty of problems to be solved beyond the simple implementation of the power scheduler and the definition of the interface with the CPU scheduler. The CPU scheduler still needs to learn how to deal with processors with varying computing capacities; the big.LITTLE architecture requires this, but more flexible power state management does too. Currently, processes are charged by the amount of time they spend executing on a CPU; that is clearly unfair to processes that are scheduled onto a slower processor. So charging will eventually have to change to a unit other than time; instructions executed, for example. The CPU scheduler will need to become more aware of the power management policies in force. Scheduling processes to enable the use of "turbo boost" mode (where a single CPU can be overclocked if all other CPUs are idle) remains an open problem. Thermal limits will throw more variables into the equation. And so on.

It is also possible that the separation of CPU and power scheduling will not work out; as Morten put it:

I'm aware that the scheduler and power scheduler decisions may be inextricably linked so we may decide to merge them. However, I think it is worth trying to keep the power scheduling decisions out of the scheduler until we have proven it infeasible.

Even with these uncertainties, the "power scheduler" approach should prove to be a useful starting point; Morten and his colleagues plan to post a preliminary power scheduler implementation in the near future. At that point we may hear how Ingo feels about this design relative to the requirements he put forward; he (along with the other core scheduler developers) has been notably absent from the recent discussion. Regardless, it seems clear that the development community will be working on power-aware scheduling for quite some time.

Comments (1 posted)

Tags and IDs

By Jonathan Corbet
June 19, 2013
Our recent coverage of the multiqueue block layer work touched on a number of the changes needed to enable the kernel to support devices capable of handling millions of I/O operations per second. But, needless to say, there are plenty of additional details that must be handled. One of them, the allocation of integer tags to identify I/O requests, seems like a relatively small issue, but it has led to an extensive discussion that, in many ways, typifies how kernel developers look at proposed additions.

Solid-state storage devices will only achieve their claimed I/O rates if the kernel issues many I/O operations in parallel. That allows the device to execute the requests in an optimal order and to exploit the parallelism inherent in having multiple banks of flash storage. If the kernel is not to get confused, though, there must be a way for the device to report the status of specific operations to the kernel; that is done by assigning a tag (a small integer value) to each request. Once that is done, the device can report that, say, request #42 completed, and the kernel will know which operation is done.

If the device is handling vast numbers of operations per second, the kernel will have to allocate (and free) tags at the same rate. That suggests that tag allocation must be a fast operation; even a small amount of overhead starts to really hurt when it is repeated millions of times every second. To that end, Kent Overstreet has proposed the merging of a per-CPU tag allocator, a new module with a simple task: allocate unique integers within a given range as quickly as possible.

The interface is relatively straightforward. A "tag pool," from which tags will be allocated, can be declared this way:

    #include <linux/percpu-tags.h>

    struct percpu_tag_pool pool;

Initialization is then done with:

    int percpu_tag_pool_init(struct percpu_tag_pool *pool, unsigned long nr_tags);

where nr_tags is the number of tags to be contained within the pool. Upon successful initialization, zero will be returned to the caller.

The actual allocation and freeing of tags is managed with:

    unsigned percpu_tag_alloc(struct percpu_tag_pool *pool, gfp_t gfp);
    void percpu_tag_free(struct percpu_tag_pool *pool, unsigned tag);

A call to percpu_tag_alloc() will allocate a tag from the given pool. The gfp argument is used only to check for the __GFP_WAIT flag; if (and only if) that flag is present, the function will wait for a tag to become available if need be. The return value is the allocated tag, or TAG_FAIL if no allocation is possible.

The implementation works by maintaining a set of per-CPU lists of available tags; whenever possible, percpu_tag_alloc() will simply take the first available entry from the local list, avoiding contention with other CPUs. Failing that, it will fall back to a global list of tags, moving a batch of tags to the appropriate per-CPU list. Should the global list be empty, percpu_tag_alloc() will attempt to steal some tags from another CPU or, in the worst case, either wait for an available tag or return TAG_FAIL. Most of the time, with luck, tag allocation and freeing operations can be handled entirely locally, with no contention or cache line bouncing issues.
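To make the proposed interface concrete, here is a minimal usage sketch for a hypothetical driver; the percpu-tags calls are those described above, while the request structure, array, and function names are invented for illustration:

    #include <linux/percpu-tags.h>

    #define MY_NR_TAGS 256

    struct my_request {
        void *data;             /* driver-specific per-request state */
    };

    static struct percpu_tag_pool my_pool;
    static struct my_request *outstanding[MY_NR_TAGS];

    static int my_driver_init(void)
    {
        /* create a pool of 256 tags; returns zero on success */
        return percpu_tag_pool_init(&my_pool, MY_NR_TAGS);
    }

    static int my_submit(struct my_request *req)
    {
        /* __GFP_WAIT: sleep until a tag is free rather than fail */
        unsigned tag = percpu_tag_alloc(&my_pool, __GFP_WAIT);

        if (tag == TAG_FAIL)
            return -EBUSY;
        outstanding[tag] = req;
        /* ... hand the tag to the device along with the request ... */
        return 0;
    }

    static void my_complete(unsigned tag)
    {
        /* the device has reported completion for this tag */
        struct my_request *req = outstanding[tag];

        /* ... finish processing req ... */
        outstanding[tag] = NULL;
        percpu_tag_free(&my_pool, tag);
    }

Because the tag is a small integer, a simple array suffices to find the request that a completion report refers to; that lookup is the whole reason the tag exists.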

The attentive reader might well be thinking that the API proposed here looks an awful lot like the IDR subsystem, which also exists to allocate unique integer identifiers. That is where the bulk of the complaints came from; Andrew Morton, in particular, was unhappy that no apparent attempt had been made to adapt IDR before launching into a new implementation:

The worst outcome here is that idr.c remains unimproved and we merge a new allocator which does basically the same thing.

The best outcome is that idr.c gets improved and we don't have to merge duplicative code.

So please, let's put aside the shiny new thing for now and work out how we can use the existing tag allocator for these applications. If we make a genuine effort to do this and decide that it's fundamentally hopeless then this is the time to start looking at new implementations.

The responses from Kent (and from Tejun Heo as well) conveyed their belief that IDR is, indeed, fundamentally hopeless for this use case. The IDR code is designed for the allocation of identifiers, so it works a little differently: the lowest available number is always returned and the number range is expanded as needed. The lowest-number guarantee, in particular, forces a certain amount of cross-CPU data sharing, putting a limit on how scalable the IDR code can be. The IDR API also supports storing (and quickly looking up) a pointer value associated with each ID, a functionality not needed by users of tags. As Tejun put it, even if the two allocators were somehow combined, there would still need to be two distinct ways of using it, one with allocation ordering guarantees, and one for scalability.

Andrew proved hard to convince, though; he suggested that, perhaps, tag allocation could be implemented as some sort of caching layer on top of IDR. His position appeared to soften a bit, though, when Tejun pointed out that the I/O stack already has several tag-allocation implementations, "and most, if not all, suck." The per-CPU tag allocator could replace those implementations with common code, reducing the amount of duplication rather than increasing it. Improvements of that sort can work wonders when it comes to getting patches accepted.

Things then took another twist when Kent posted a rewrite of the IDA module as the basis for a new attempt. "IDA" is a variant of IDR that lacks the ability to store pointers associated with IDs; it uses many of the IDR data structures but does so in a way that is more space-efficient. Kent's rewrite turns IDA into a separate layer, with the eventual plan of rewriting IDR to sit on top. Before doing that, though, he implemented a new per-CPU ID allocator implementing the API described above on top of the new IDA code. The end result should be what Andrew was asking for: a single subsystem for the allocation of integer IDs that accommodates all of the known use cases.

All this may seem like an excessive amount of discussion around the merging of a small bit of clearly-useful code that cannot possibly cause bugs elsewhere in the kernel. But if there is one thing that the community has learned over the years, it's that kernel developers are far less scalable than the kernel itself. Duplicated code leads to inferior APIs, more bugs, and more work for developers. So it's worth putting some effort into avoiding the merging of duplicated functionality; it is work that will pay off in the long term — and the kernel community is expecting to be around and maintaining the code for a long time.

Comments (none posted)

Merging Allwinner support

By Jake Edge
June 19, 2013

Getting support for their ARM system-on-chip (SoC) families into the mainline kernel has generally been a goal for the various SoC vendors, but there are exceptions. One of those, perhaps, is Allwinner Technology, which makes an SoC popular in tablets. Allwinner seems to have been uninterested in the switch to Device Tree (DT) in the mainline ARM kernel (and the requirement to use it for new SoCs added to the kernel tree). But the story becomes a bit murkier because it turns out that developers in the community have been doing the work to get fully DT-ready support for the company's A1X SoCs into the mainline. While Allwinner is not fully participating in that effort, at least yet, a recent call to action with regard to support for the hardware seems to be somewhat off-kilter.

The topic came up in response to a note from Ben Hutchings on the debian-release mailing list (among others) that was not specifically about Allwinner SoCs at all; it was, instead, about his disappointment with the progress in the Debian ARM tree. Luke Leighton, who is acting as a, perhaps self-appointed, "go-between" for the kernel and Allwinner, replied at length, noting that the company would likely not be pushing its code upstream:

well, the point is: the expectation of the linux kernel developers is that Everyone Must Convert To DT. implicitly behind that is, i believe, an expectation that if you *don't* convert to Device Tree, you can kiss upstream submission goodbye. and, in allwinner's case, that's simply not going to happen.

As might be guessed, that didn't sit well with the Linux ARM crowd. ARM maintainer Russell King had a sharply worded response that attributed the problem directly to Allwinner. He suggested that, instead of going off and doing its own thing with "fex" (which serves many of the same roles that DT does in the mainline), the company could have pitched in and helped fix any deficiencies in DT. In addition, he is skeptical of the argument that DT was not ready when Allwinner needed it:

DT has been well defined for many many years before we started using it on ARM. It has been used for years on both PowerPC and Sparc architectures to describe their hardware, and all of the DT infrastructure was already present in the kernel.

Leighton, though, points to the success of the Allwinner SoCs, as well as the ability for less-technical customers to easily reconfigure the kernel using fex as reasons behind the decision. There are, evidently, a lot of tablet vendors who have limited technical know-how, so not having to understand DT or how to transform it for the bootloader is a major plus:

the ODMs can take virtually any device, from any customer, regardless of the design, put *one* [unmodified, precompiled] boot0, boot1, u-boot and kernel onto it, prepare the script.fex easily when the customer has been struggling on how to start that DOS editor he heard about 20 years ago, and boot the device up, put it into a special mode where the SD/MMC card becomes a JTAG+RS232 and see what's up... all without even removing any screws.

The discussion continued in that vein, with ARM kernel developers stating that the way forward was to support DT while Leighton insisted that Allwinner would just continue to carry its patches in its own tree and that Linux (and its users) would ultimately lose out because of it. Except for one small problem: as Thomas Petazzoni pointed out, Maxime Ripard has been working on support for the Allwinner A1X SoCs—merged into the 3.8 kernel in arch/arm/mach-sunxi.

In fact, it turns out that Ripard has been in contact with Allwinner and gotten data sheets and evaluation boards from it. He pointed Leighton to a wiki that is tracking the progress of the effort. That work has evidently been done on a volunteer basis, as Ripard is interested in seeing mainline support for those SoCs.

In the end, Leighton's messages start to degenerate into what might seem like an elaborate troll evidencing a serious misunderstanding of how Linux kernel development happens. In any case, he seems to think he is in a position to influence Allwinner's management to pursue an upstream course, rather than its current development path. But his demands and his suggestion that he apologize on behalf of the Linux kernel community for "not consulting with you (allwinner) on the decision to only accept device tree" elicited both amazement and anger—for obvious reasons.

Leighton appears to start with the assumption that the Linux kernel and its community need to support Allwinner SoCs, and that they need to beg Allwinner to come inside the tent. It is a common assumption for successful silicon vendors to make, but it has been shown time and again not to be the case. In fact, Allwinner's customers are probably already putting pressure on the company to get its code upstream so that they aren't tied to whichever devices and peripherals are supported in the Allwinner tree.

As far as fex goes, several in the thread suggested that some kind of translator could be written to produce DT from fex input; a toy sketch of the idea appears below. That way, customers who want to use a Windows editor to configure their device would just need to run the tool, which could put the resulting flattened DT file into the proper place in the firmware. Very little would change for the customers, but they would immediately have access to the latest Linux kernel with its associated drivers and core kernel improvements.
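Fex files are INI-style text: "[section]" headers followed by "key = value" lines. A toy translator in the suggested direction might begin like the sketch below, which simply maps sections to device tree nodes and keys to string properties; everything here is invented for illustration, and real fex semantics (pin descriptions, typed values, and the meanings of the various sections) would need real handling:

    #include <stdio.h>

    int main(void)
    {
        char line[256], name[128], value[128];
        int in_node = 0;

        printf("/dts-v1/;\n/ {\n");
        while (fgets(line, sizeof(line), stdin)) {
            if (sscanf(line, "[%127[^]]]", name) == 1) {
                /* a "[section]" header becomes a new DT node */
                if (in_node)
                    printf("\t};\n");
                printf("\t%s {\n", name);
                in_node = 1;
            } else if (sscanf(line, " %127[^= \t\n] = %127[^\n]",
                              name, value) == 2) {
                /* a "key = value" line becomes a string property */
                printf("\t\t%s = \"%s\";\n", name, value);
            }
        }
        if (in_node)
            printf("\t};\n");
        printf("};\n");
        return 0;
    }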

Alternatively, Allwinner could try to make a technical case for the superiority of fex over DT, as Russell King suggested. It seems unlikely to be successful, as several developers in the thread indicated that it was a less-general solution than DT, but it could be tried. Lastly, there is nothing stopping Allwinner from continuing down its current path. If its customers are happy with the kernels it provides, and it is happy to carry its code out of tree, there is no "Linux cabal" that will try to force a change.

Evidently, though, that may not actually be what Allwinner wants. Its efforts to support Ripard's work, along with contacts made by Olof Johansson, Ripard, and others, indicate that Allwinner is interested in heading toward mainline. It essentially started out where many vendors do, but, again like many SoC makers before it, decided that it makes sense to start working with upstream.

We have seen this particular story play out numerous times before—though typically with fewer comedic interludes. In a lot of ways, it is the vendors who benefit most from collaborating with the mainline. It may take a while to actually see that, but most SoC makers end up there eventually—just as with other hardware vendors. There are simply too many benefits to being in the mainline to stay out of tree forever.

Comments (23 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jake Edge

Distributions

A Debian GNU/Hurd snapshot

By Nathan Willis
June 19, 2013

Debian has released the first usable snapshot of its port to the GNU Hurd kernel (or, technically speaking, a microkernel and its servers). The snapshot is still very much a work-in-progress, and the announcement makes it clear that the system is not to be taken as an "official Debian release," but it still makes for an interesting look at the microkernel-based Hurd. A significant portion of the Debian archive runs on the snapshot, which provides a convenient way to test drive a Hurd system, although those using it should be ready for a few problems.

The release was announced on May 22. Officially dubbed Debian GNU/Hurd 2013, the system is based on a snapshot of Debian Sid at the time when the stable Debian 7.0 ("Wheezy") release was made, so most of the software packages are based on the same source as their Wheezy counterparts. Three forms of installation media are available for downloading from Debian Ports: CD images, DVD images, and Debian network install (netinst) images. There are also disk images available with the release pre-installed, which can be run on compatible hardware or in virtual machines.

It's Debian, but not as we know it

"Compatible hardware" is a bit of a tricky subject. The Hurd port of Debian is in most ways the same operating system as Debian GNU/Linux, albeit without the Linux kernel underneath. But Hurd itself is not as mature as Linux, nor does it support as wide a range of hardware, so there are limitations. Debian GNU/Hurd 2013 is available for the 32-bit x86 architecture only, and Hurd can currently only make use of one CPU (or CPU core). That is to say, it will still run on multi-core and SMP machines, but only utilizing a single processor. There are plans in the works for a 64-bit Hurd system layer that would support a 32-bit user space, but that appears to be a ways off. Addressing the single-processor limitation is also on the roadmap, but considerably further out.

Apart from the processor support, it is also important to note that Hurd generally uses device drivers ported from Linux 2.0, plus network interface drivers from Linux 2.6.32. So the latest and greatest shiny hardware might cause trouble. On the plus side, SATA disks are supported (again, generally), so getting a basic working system together is not likely to be all that problematic.

I tested the new release using the pre-installed images in QEMU, following the YES_REALLY_README instructions, and had no trouble getting things up and running. Out of the box, this pre-installed image comes with a minimal environment; basic utilities and applications are there, but one must perform an apt-get update to pull in repository information for the full list of available packages. Nevertheless, X.org does work out of the box (with IceWM) and, as one might expect, Emacs is pre-installed. Should you need something other than Emacs, there is also Mutt, Python, the w3m browser, a collection of shells, and quite a few general X utilities available.

But there is little reason to limit yourself to the pre-installed packages. A hefty percentage of the Debian archive compiles for GNU/Hurd and is available through Apt. The Debian wiki estimates that 76% of Debian packages work on the Hurd port. Naturally, most GNU projects are available, and the list of high-profile desktop applications seems diverse as well (for example, Iceweasel and Icedove, Debian's re-brandings of Mozilla Firefox and Thunderbird). You can also browse the more recent snapshots to find additional packages.

From Debian's perspective, there is a lot of work remaining to bring the Hurd port to a completed state. Some packages will not be ported, either because they are Linux-specific or because the Hurd otherwise covers the same territory in different ways. But, reading through the Debian GNU/Hurd porting page and the Hurd project's own porting guide, there are clearly some common problems with upstream packages that require attention. Some programs that are otherwise cross-platform make use of Linux-specific features like ALSA, or mistakenly include a dependency on a Linux-specific version of a library. But there are also a lot of programs that mostly just require fixes to the configuration and build system. As the Debian wiki page pointed out about the 76% number, the other main non-Linux port, Debian GNU/kFreeBSD, was only at 85% when it was accepted for Wheezy.

Have you heard the news?

Of course, simply swapping out the kernel and using the release like a run-of-the-mill Debian machine is not all that much fun; the interesting part is seeing what the Hurd looks like up close and personal. The core idea of Hurd is that all of the tasks normally handled by the kernel in a "monolithic" design run as separate processes—networking, filesystems, process accounting, etc. With each "server" (as these components are known, although "translator" appears to be the term for filesystem servers) running as a separate entity, they are independent in some important ways; if you want to reimplement a particular server you can do so without touching the rest of the servers, and when your experiment fails, it will not crash the entire system.

[Debian GNU/Hurd]

Hurd uses the Mach microkernel for basic interprocess communication, and supplies an assortment of servers. A look at ps -ef|grep hurd showed 328 processes running on the Debian GNU/Hurd image in QEMU, everything from /hurd/term processes providing virtual terminals to /hurd/storeio processes running for each storage device (and, of course, /hurd/null providing its valuable service).

It is an interesting experiment to play around with, even if the general experience is not very different from any other Unix-like OS. Perhaps the biggest difference at the moment comes from the Hurd's take on filesystems, which is a bit like Filesystem in Userspace (FUSE) taken to its extreme. Essentially, any translator can be bound to any node in the filesystem hierarchy, and the implementations can serve up whatever they choose in response to file access calls.

The Debian release includes many of the usual suspects, such as translators for ext2 and NFS, but it also includes ftp: and http: translators, which allow programs to treat remote sites exactly like they would local files. Thus, the dpkg utility can query .deb packages on the live Debian FTP server, and one can discover exactly what lives at a web site's top level by running (for example) ls /http:/www.google.com/.
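Bindings of that sort are created with the settrans utility. As a concrete illustration, the canonical example from the Hurd documentation attaches an FTP filesystem through the host multiplexer (the Debian image sets up its ftp: and http: nodes for you):

    settrans -c /ftp: /hurd/hostmux /hurd/ftpfs /

After that, a command like ls /ftp:/ftp.gnu.org/ lists the remote server's files as though they were local.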

This is a post-Unix approach, extending the "everything is a file" mantra in a new way, and it is fun to experiment with. Certainly there is more to translators than simply treating remote sites like mount points; the Hurd's documentation notes that every user can alter his or her own view of the filesystem using translators, without affecting any other users on the machine. The Hurd, however, is still far from complete. But that does not mean it is not a worthwhile project to look at, and Debian GNU/Hurd 2013 offers one of the most painless opportunities to do so that most users are likely to find.

So, should we all stop working on Linux now?

In years past, Richard Stallman was quoted as saying that Linux was a good option to use while the Hurd was still incomplete, a sentiment that was met with much derision as the Hurd continued to develop slowly. But despite its slow growth, the Hurd seems to be here to stay and, like the Debian GNU/kFreeBSD port, offers an intriguing look at alternative perspectives in the free software community. Given how complete the Hurd port is already, perhaps by the next stable Debian release we will also get the chance to play with Plan 9. Although we wouldn't want that to distract the project from completing its work on Hurd.

Comments (14 posted)

Brief items

Distribution quote of the week

In my personal opinion, it's worthwhile to cooperate and put effort into Gentoo to improve the distribution as a whole. It is NOT worthwhile to focus on isolated solutions, refuse to cooperate with others, and waste energy on turf wars.
-- Andreas K. Huettel

Comments (none posted)

Debian 7.1 released

The Debian project has announced the first update of its stable distribution Debian 7 "wheezy". "This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available."

Full Story (comments: 4)

openSUSE 12.1 EOL, 13.1 Milestone 2 released

openSUSE has announced the end of life for openSUSE 12.1, along with the release of 13.1 Milestone 2. "For those of you waiting for (or working on) openSUSE 13.1, we have good news: milestone 2 is now out for you to download. As to be [expected], the inclusion of newer software versions is the highlight of this release. Broken in M1 and fixed now are automake, boost, and webyast. But first, let’s talk openSUSE 12.1: it is no longer maintained."

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Red Hat discloses RHEL roadmap (TechTarget)

TechTarget has an interview with Denise Dumas, Red Hat's director of software engineering, about RHEL 6.5 and 7. In it, Dumas outlines some changes coming in those releases, particularly in the areas of storage, networking, in-place upgrades from RHEL 6, and the default desktop:
We think that people who are accustomed to Gnome 2 will use classic mode until they're ready to experiment with modern mode. Classic mode is going to be the default for RHEL 7, and we're in the final stages now. We're tweaking it and having people experiment with it. The last thing we want to do is disrupt our customers' workflows.

I think it's been hard for the Gnome guys, because they really, really love modern mode, because that's where their hearts are. But they've done a great job putting together classic mode for us, and I think it's going to keep people working on RHEL 5, 6 and 7 who don't want to retrain their fingers each time they switch operating systems -- I think classic mode's going to be really helpful for them.

Comments (245 posted)

Page editor: Rebecca Sobol

Development

Mozilla testing new Places API

By Nathan Willis
June 19, 2013

Mozilla's Jordan Santell has posted a proposal for a new API slated to arrive in Firefox, through which extensions can query and manipulate the browser history and bookmarks. Bookmark management is already a popular target for extension authors, of course, but it has not had a formal API. This proposal changes that situation, and also rolls in access to the Firefox History system, which could eventually offer users new functionality.

The new API is called Places, which is the name for Firefox's unified bookmark and history service. The Places feature is perhaps best known as the underpinnings of the Firefox "Awesome Bar", which can provide both history and bookmark URL pattern matching as the user types in the location bar. In January of 2013, developers began replacing synchronous Places methods with asynchronous ones, which is a precursor to exposing the services to arbitrary extensions. The initial implementation of the new API is scheduled to arrive in Firefox 24, due in the "Aurora" testing channel in late June. Santell asked readers for feedback on the proposal, which should land in the Firefox Add-on SDK around the same time. Based on the Mozilla rapid release schedule, both the updated SDK and the new feature would be available to end users about twelve weeks later.

Santell's post deals strictly with the bookmarks side of the Places API, with the history functionality due to arrive later. However, the full API is described in the GitHub documentation. The history functionality seems to be limited to querying the browser history; options are available to sort the returned list of history items by title, URL, date, visit-count, modification date, and keyword, and to limit the number of results to return.

Bookmarks are a bit more complicated; while there is no method to (for example) remove a history entry, bookmarks can be created, deleted, modified, rearranged, grouped, and searched. There are three primitives: Bookmark, Group (i.e., folder), and Separator. A separator is simply a marker (displayed as a horizontal line in Firefox's user interface). A bookmark's properties include the title and URL, plus the group it belongs to, its integer position in the group, and a list of any attached tags. Groups have a name and an optional parent group, plus an index indicating their position in the parent group.

Are you save()d?

Both creating a new bookmark and simply updating a bookmark's properties are done with the save() function, for example:

     let thebookmark = {
         type: 'bookmark',
         title: 'LWN',
         tags: new Set(['news', 'mustread']),
         url: 'http://lwn.net'
     }

     save(thebookmark);

The trick is that Firefox does not track state for the user's bookmarks; there is no overseer that watches the set of bookmarks to monitor for changes coming from other extensions or direct user input. Providing this type of oversight would have required observers caching and synchronizing changes to items coming from other add-ons (or even, speaking of synchronization, the Firefox Sync service). That means a lot of additional overhead; consequently, Santell notes, bookmark items in the Places API are "snapshots" of an item's state at a particular point in time. One can manipulate them, but there is no guarantee that they will not change between the initial read and any save operation the extension wishes to perform.

Extensions that decide to run roughshod over changes from outside can simply do so, but the API also provides an optional resolve() callback that save() can invoke, with mine and theirs objects, to politely handle disagreements. The mine argument is the item (e.g., bookmark or group) being saved, while theirs is the current version of the same item saved on disk. The resolve() function is only called when the version of the object on disk has a more recent modification date than the one held by the extension. The theory is that the extension can use resolve() to either ask for the user's help or fail non-destructively, but that is ultimately up to the extension author: it can return mine to override everyone else, or return theirs to back down.
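A minimal sketch of how an extension might use it, assuming the resolve() callback is passed as an option to save() as described in the Add-on SDK documentation (treat the module path and option name as illustrative):

    let { save } = require('sdk/places/bookmarks');

    save(thebookmark, {
        // Called only when the copy on disk is newer than our snapshot
        resolve: function (mine, theirs) {
            // Accept a title changed elsewhere, but keep our other edits
            mine.title = theirs.title;
            return mine;
        }
    });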

The save() function returns an EventEmitter that pushes the save or update onto the stack for the browser to handle—again, because there is no guarantee that the item will be updated, since another extension or the user could delete it first. In fact, bookmark deletion is performed by an extension setting the removed property on the bookmark to "true" and saving it. The deletion happens when the queued save() is handled by the browser.

The save() function also helps out by implicitly saving larger structures. For example, if an extension tries to save bookmarks with parent groups that do not yet exist, Firefox will create (and save) the missing groups first. It is also important to note that a save() can be called on a single bookmark, an array of bookmarks, a group, or an array of groups. Santell also points out that bookmark and group objects passed to save() as arguments are new objects, not what is returned from the original constructors. This, again, goes back to the notion that each Places object is a snapshot at a particular point in time, not the canonical object itself.
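A sketch combining those behaviors (the group here does not exist yet, so it is created implicitly; the 'data' event name follows the emitter conventions in the SDK documentation, and all other names are invented):

    let newsGroup = { type: 'group', title: 'News' };

    save({
        type: 'bookmark',
        title: 'LWN Weekly Edition',
        url: 'http://lwn.net/current/',
        group: newsGroup
    }).on('data', function (saved) {
        // Each saved item arrives as a fresh snapshot object,
        // not the literal object passed in above
        console.log('saved: ' + saved.title);
    });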

Data everywhere

Bookmarks can be searched using the same query types as history items, plus the addition of tags. Firefox's bookmark tagging feature may be the least-used option of the service (at least, compared to simply dumping bookmarks into folders). As Santell points out, the tag database is internally a separate data source, nsITaggingService. That service, together with nsINavBookmarksService and nsINavHistoryService, makes up the three services that extension authors have had access to in the past. Unifying them with a single API ought to result in less messy bookmark and history functionality.

At first glance, it is easy to think of bookmarks as trivial objects. They are URLs that sit in a menu, waiting to be clicked. But as Firefox has evolved, bookmarks have added more and more functionality. As I found last week when looking at extension-based RSS readers, we increasingly count on our bookmarks to store "to read later" information and we expect them to be available and up to date from multiple machines.

To my dismay during the RSS extension search, I discovered that Firefox Sync had been regularly duplicating several dozen of my bookmarks for reasons I can still only guess at, evidently for quite some time. The upshot is that many bookmarks were repeated 154 times (and one, for some reason, 484 times), which adds up to a lot of hassle to sort through and clean up. Evidently bookmark de-duplication is a popular subject for extension authors, but the options were discouraging. Some did not find every copy of a duplicate; others did not allow you to see which duplicate lived where.

Attempting to move RSS feed subscriptions into the bookmark system only adds more responsibility to this portion of Firefox; it is, in a very real way, a "cloud service" in the most buzzword-compliant sense of the term. The current version of the Places API, while it does not address Live Bookmarks explicitly, will hopefully bring some more sanity to handling these important bits of data. And, by doing so, open up bookmarking to a wide array of new possibilities—annotation, sharing, temporary bookmarking; who knows?

Comments (1 posted)

Brief items

Quotes of the week

Identi.ca is converting to pump.io June 1.
Identi.ca is converting to pump.io June 8.
Identi.ca is converting to pump.io June 15.
Identi.ca is converting to pump.io sometime this week.
— The evolving notice posted at Identi.ca about the service's upcoming swap-over to the new pump.io platform.

<dizzylizzy> has anybody else noticed that Apple's new headquarters are LITERALLY a walled garden?
<dizzylizzy> http://cdn.mactrast.com/wp-content/uploads/2011/12/Apple-Spaceship-Render.jpg
<Tekk_> I never knew that the apple higher-ups were such big fans of tron
— From a discussion in the Free As In Freedom oggcast IRC channel. (Thanks to Paul Wise.)

In light of the recent leaks about the NSA's illegal spying, I've decided to go back to using M-x spook output in my email signatures.

cypherpunk anthrax John Kerry rail gun security plutonium Guantanamo wire transfer JPL number key military MD5 SRI FIPS140 Uzbekistan

John Sullivan.

Comments (8 posted)

Subversion 1.8.0 released

The Apache Software Foundation has announced a new release of "the most popular and widely-used Open Source version control system" — Subversion 1.8.0. "Since their introduction in prior releases, Subversion’s merge tracking and tree conflict detection features have been critical to its ability to serve projects where branching and merging happens often. The 1.8.0 version improves these features, further automating the client-side merge functionality and improving both tree conflict detection during merge operations and tree conflict resolution during update operations. Additionally, the Subversion client now tracks moves of working copy items as first-class operations, which brings immediate benefit to users today and is a key step toward more thorough system-wide support for moved and renamed objects in a future release."

Comments (65 posted)

LLVM 3.3 released

Version 3.3 of the LLVM compiler suite is out. It adds support for a number of new architectures, features a number of performance improvements, and more. "3.3 is also a major milestone for the Clang frontend: it is now fully C++'11 feature complete. At this point, Clang is the only compiler to support the full C++'11 standard, including important C++'11 library features like std::regex. Clang now supports Unicode characters in identifiers, the Clang Static Analyzer supports several new checkers and can perform interprocedural analysis across C++ constructor/destructor boundaries, and Clang even has a nice 'C++'11 Migrator' tool to help upgrade code to use C++'11 features and a 'Clang Format' tool that plugs into vim and emacs (among others) to auto-format your code." See the release notes for details.

Full Story (comments: 15)

Zato 1.1 available

Dariusz Suchojad has released version 1.1 of Zato, the LGPL-licensed Enterprise Service Bus (ESB). It provides a framework for writing loosely-coupled software services running on the same network, and is one of the few such projects written in Python. This release adds a unified installer for OS X, Ubuntu, Linux Mint, and Fedora. RHEL and SLES are expected to follow.

Comments (none posted)

WebRTC Test Day, June 21

Mozilla's Aaron Train announces a "test day" for Firefox's WebRTC implementation, scheduled for Friday, June 21. "We would like for you to use the new version of Firefox on your Android phone and desktop or laptop machine, and take a close look at the latest Nightly builds in order to assist us in identifying any noticeably major issues found with our WebRTC implementation, and ensure that all feature functionality that is included in this upcoming release is on its way to a feature and testing complete state." More details are available at the post, along with a link to specific instructions on getting started.

Comments (none posted)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Meeks: LibreOffice's under-the-hood progress in 4.1.0 (beta)

On his blog, Michael Meeks has a look at some of the less visible (to the user) changes to LibreOffice for 4.1. He describes changes like the completion of the switch to GNU make, code cleanup (including more German comment translation), eliminating bugs that result in crashes, refactoring the Calc spreadsheet core, and more. "One of the tasks that most irritates and has distracted new developers from doing interesting feature work on the code-base over many years has been our build system. At the start of LibreOffice, there was an incomplete transition to using GNU make, which required us to use both the horrible old dmake tool as well as gnumake, with configure using a Perl script to generate a shell script configuring a set of environment variables that had to be sourced into your shell in order to compile (making it impossible to re-configure from that shell), with a Perl build script that batched compilation with two layers of parallelism, forcing you to over- or undercommit on any modern builder."

Comments (115 posted)

Ardour 3.2 adds video support (The H)

The H looks at the recent 3.2 release of the Ardour digital audio workstation, highlighting the addition of video support. Specifically, Ardour does not edit video, but allows users to import it for synchronizing with audio content. "The new video feature can display imported video tracks with frame-by-frame granularity in a timeline and allows users to lock audio tracks to individual video frames. After the editing work is done, users can then export the mixed audio track into a new video file."

Comments (none posted)

Castro: The Watercooler Reboot, progress report

Canonical's Jorge Castro has written an update on his quest to revamp the Ubuntu project's online discussion forums into something that developers do not loathe to use. The setup is based on Discourse, which as Castro points out, allows discussion threads to be integrated with WordPress blogs, in addition to connecting the main Ubuntu site itself.

Comments (none posted)

Page editor: Nathan Willis

Announcements

Brief items

SCO v. IBM reopened

Groklaw reports that the SCO lawsuit against IBM has officially been reopened. "The thing that makes predictions a bit murky is that there are some other motions, aside from the summary judgment motions, that were also not officially decided before SCO filed for bankruptcy that could, in SCO's perfect world, reopen certain matters. I believe they would have been denied, if the prior judge had had time to rule on them. Now? I don't know. There was a SCO motion for reconsideration pending and one objection to an earlier ruling, and a motion to supplement its list of allegedly misused materials. How any of this survives the Novell victory is unknown to me, but SCO are a clever, clever bunch."

Comments (52 posted)

The Document Foundation welcomes France's MIMO in the Advisory Board

The Document Foundation has announced that MIMO (Inter-Ministry Mutualisation for an Open Productivity Suite) has joined TDF's advisory board. "MIMO has standardised on LibreOffice, developed by the Document Foundation, and is contributing to the development of the office suite through a commercial support agreement provided by certified developers. The role of MIMO is to validate successive versions of LibreOffice and make them compatible with the IT infrastructure and processes of member ministries. A single, standard LibreOffice version is validated and approved every year, according to the roadmap planned by MIMO members."

Full Story (comments: none)

OSI Individual Members Election 2013

The Open Source Initiative has announced that nominations are open for the Individual Members Election. "This election is open to all individual members of the OSI, both as candidates and as voters. Any individual member can nominate themselves as a candidate, and when the election itself starts, all individual members will get an email explaining the voting process." Nominations close July 5.

Comments (none posted)

Articles of interest

MySQL man pages silently relicensed away from GPL (MariaDB blog)

In what might be seen as a harbinger of license changes to come for MySQL, the MariaDB blog is reporting that the man pages for MySQL 5.5.31 have changed licenses. Formerly covered by the GPLv2, the man pages are now under a more restrictive license, the crux of which seems to be: "You may create a printed copy of this documentation solely for your own personal use. Conversion to other formats is allowed as long as the actual content is not altered or edited in any way. You shall not publish or distribute this documentation in any form or on any media, except if you distribute the documentation in a manner similar to how Oracle disseminates it (that is, electronically for download on a Web site with the software) or on a CD-ROM or similar medium, provided however that the documentation is disseminated together with the software on the same medium. Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Oracle."

[Update: A MySQL bug report indicates that a build system problem led to the relicensing, which will presumably be fixed in the next release.]

Comments (39 posted)

New Books

Realm of Racket--New from No Starch Press

No Starch Press has released "Realm of Racket" by Matthias Felleisen, Conrad Barski and David Van Horn, assisted by Forrest Bice, Rose DeMaio, Spencer Florence, Feng-Yun Mimi Lin, Scott Lindeman, Nicole Nussbaum, Eric Peterson and Ryan Plessner.

Full Story (comments: none)

Education and Certification

The Linux Foundation announces Linux Training Scholarship Program

The Linux Foundation has announced the annual Linux Training Scholarship Program is open for applications. "The Linux Foundation’s Linux Training Scholarship Program in 2013 will award five scholarships to individuals who demonstrate the greatest need and who have already demonstrated some knowledge of Linux and open source software. In addition, winners this year will receive a 30-minute, one-on-one mentoring session with one of The Linux Foundation’s Linux training instructors."

Full Story (comments: none)

Calls for Presentations

hack.lu 2013 call for papers

Hack.lu will take place October 22-24 in Luxembourg. The call for papers deadline is July 15. "We would like to announce the opportunity to submit papers, and/or lightning talk proposals for selection by the hack.lu technical review committee. This year we will be doing workshops on the first day PM and talks of 1 hour or 30 minutes in the main track for the three days."

Full Story (comments: none)

Upcoming Events

Events: June 20, 2013 to August 19, 2013

The following event listing is taken from the LWN.net Calendar.

June 18–20: Velocity Conference, Santa Clara, CA, USA
June 18–21: Open Source Bridge: The conference for open source citizens, Portland, Oregon, USA
June 20–21: 7th Conferenza Italiana sul Software Libero, Como, Italy
June 22–23: RubyConf India, Pune, India
June 26–28: USENIX Annual Technical Conference, San Jose, CA, USA
June 27–30: Linux Vacation / Eastern Europe 2013, Grodno, Belarus
June 29–July 3: Workshop on Essential Abstractions in GCC, 2013, Bombay, India
July 1–5: Workshop on Dynamic Languages and Applications, Montpellier, France
July 1–7: EuroPython 2013, Florence, Italy
July 2–4: OSSConf 2013, Žilina, Slovakia
July 3–6: FISL 14, Porto Alegre, Brazil
July 5–7: PyCon Australia 2013, Hobart, Tasmania
July 6–11: Libre Software Meeting, Brussels, Belgium
July 8–12: Linaro Connect Europe 2013, Dublin, Ireland
July 12: PGDay UK 2013, near Milton Keynes, England, UK
July 12–14: GNU Tools Cauldron 2013, Mountain View, CA, USA
July 12–14: 5th Encuentro Centroamerica de Software Libre, San Ignacio, Cayo, Belize
July 13–19: Akademy 2013, Bilbao, Spain
July 15–16: QtCS 2013, Bilbao, Spain
July 18–22: openSUSE Conference 2013, Thessaloniki, Greece
July 22–26: OSCON 2013, Portland, OR, USA
July 27: OpenShift Origin Community Day, Mountain View, CA, USA
July 27–28: PyOhio 2013, Columbus, OH, USA
July 31–August 4: OHM2013: Observe Hack Make, Geestmerambacht, the Netherlands
August 1–8: GUADEC 2013, Brno, Czech Republic
August 3–4: COSCUP 2013, Taipei, Taiwan
August 6–8: Military Open Source Summit, Charleston, SC, USA
August 7–11: Wikimania, Hong Kong, China
August 9–12: Flock - Fedora Contributor Conference, Charleston, SC, USA
August 9–11: XDA:DevCon 2013, Miami, FL, USA
August 9–13: PyCon Canada, Toronto, Canada
August 11–18: DebConf13, Vaumarcus, Switzerland
August 12–14: YAPC::Europe 2013 “Future Perl”, Kiev, Ukraine
August 16–18: PyTexas 2013, College Station, TX, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds