Content-centric networking (CCN) is a novel approach to networking that abstracts away the specifics of the connection, and focuses on disseminating the content efficiently. This is in contrast to the connection-oriented approach used in most IP applications, which requires establishing a channel between two nodes with known addresses. CCN excels at the comparatively common task of fetching static documents for multiple end users, a task that puts significant strain on the network when implemented over one-to-one, connection-oriented TCP. The concept has been discussed for decades, but Palo Alto Research Center (PARC, formerly a subsidiary of Xerox) is actively developing a real-life implementation called CCNx, which is usable on Linux and other UNIX-like systems today.
CCNx is the brainchild of PARC's Van Jacobson, and if anyone is qualified to rethink core Internet protocols, Jacobson is. Among other things, he fixed TCP flow control and designed the IP multicast backbone. CCNx clearly draws on the lessons Jacobson has learned about network congestion over the years; in a 2006 talk at Google, he described how the NBC television network was slowed to a crawl during the Olympics by thousands of web users requesting copies of the same video clip. The data was identical and there was no secrecy required; if the backbone of the network could only recognize that the requests were identical, it could dispense with retransmitting it from the originating server — and make use of the existing copies closer to the final hop.
That said, CCNx (and CCN in general) is not a replacement for existing transport protocols; it is designed to run on top of them, and in fact to be oblivious as to which mechanisms are used underneath: TCP, UDP, IP multicast, link-level broadcast, or even point-to-point wireless. The goal is that a party sends a request for a document out into the open — with no destination address — and anyone who hears the request and has a copy of the document can respond to it. It is irrelevant whether the copy that is eventually returned originates from disk storage on a server, memory in a gateway router, or any other source. Naturally, making the network efficient means that the closest party who both hears the request and has the document should return it. In practice, CCN expects nodes to intelligently cache the documents that they route to the end-user nodes; doing so (and keeping popular documents close to the final hop of the route) is what prevents congestion.
For the scheme to work, of course, the authenticity of the content must be verifiable from the data itself. If that property holds, the most noticeable benefit is that, when popular content is requested by numerous end-users, there is far less congestion on the network — ideally no additional congestion, as routers at the edges of the network retransmit their existing copies of the content, without even needing to propagate the requests upstream. There are other benefits as well, such as the fact that participating nodes do not need static or globally-unique names. This allows low-power sensors to respond to requests (e.g., "what is the current temperature") without needing a complete multilayer network stack, and it allows clients to send such requests without knowing the topology of the network.
On the flip side, CCN does pack more information into the names and metadata of documents, incorporating things like versioning and timestamps. This is necessary because, once a server publishes a document over CCN, it no longer has control over it; the document propagates across the network. Consequently, all updates to a document must be issued as superseding publications that can be identified as updates referring to the original, and that can be verified as authentic.
CCNx tackles both the document-updating question and the authentication question in its messaging scheme. Nodes ask for content with an Interest message, in which the only required field is the name of the desired content (although time-limits, maximum number of hops, and other fields are available). Such sending nodes could be either end-user applications making the original request, or network infrastructure nodes passing along requests they cannot answer.
A Data message that can be authenticated as consistent with the original publisher is required to complete the puzzle; however, the original publisher never needs to be made aware of the request. The Data message includes the requested data plus a cryptographic signature. The signature is generated against the data and an information block containing a time stamp and the digest of the publisher's public key (which nodes need in order to verify the signature); the block may optionally include other information such as the data type. Nodes are supposed to check the signatures and discard any content that fails verification; this "lazy" invalidation is intended to cut down on spoofing attacks without introducing significant overhead.
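As a rough sketch of that signing scheme, the Python below uses an HMAC as a stand-in for the real public-key signature, JSON in place of the binary wire format, and made-up field names; only the shape of the scheme (content plus a signed info block carrying a timestamp and a key digest) follows the description above:

```python
import hashlib
import hmac
import json

def make_data_message(name, content, publisher_key):
    """Build a toy Data message: content plus a signed info block."""
    info = {
        "timestamp": 1351900800,  # fixed value, for reproducibility
        "key_digest": hashlib.sha256(publisher_key).hexdigest(),
    }
    signed = content + json.dumps(info, sort_keys=True).encode()
    signature = hmac.new(publisher_key, signed, hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "info": info,
            "signature": signature}

def verify(msg, publisher_key):
    """'Lazy' check: any node can verify and discard spoofed content."""
    if hashlib.sha256(publisher_key).hexdigest() != msg["info"]["key_digest"]:
        return False  # wrong publisher key for this message
    signed = msg["content"] + json.dumps(msg["info"], sort_keys=True).encode()
    expected = hmac.new(publisher_key, signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])

key = b"publisher-secret"
msg = make_data_message("ccnx:/PARC/doc", b"hello", key)
assert verify(msg, key)                             # authentic copy passes
assert not verify(dict(msg, content=b"evil"), key)  # tampered copy fails
```

The important property is that any intermediate cache can return the Data message and any recipient can verify it; the publisher is not consulted.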
That is essentially all there is to CCNx; there are just two message types. Additional features like encryption and application state management are left entirely up to the layer above CCNx. Participating nodes are allowed to shape traffic as they see fit. On the application side, that could mean interleaving requests for chunks of large file downloads with higher-priority requests to check mail. Because CCNx does not keep persistent connections open between nodes, quality of service (QoS) is in the hands of the end user.
Interestingly enough, CCNx does not impose any restrictions on the formatting of the actual document name, other than that it be a sequence of bytes and be hierarchical. The hierarchical dimension exists to allow publishers to publish related content using the same prefix. That could be interpreted as a given prefix representing a directory, or as a given prefix representing small chunks of a single file that needs to be reassembled further up in the application stack. The documentation describes a URL-like syntax for CCNx names of the form ccnx:/PARC/%00%01%02 and includes some recommended naming conventions, but they are advisory only. For example, it suggests using a DNS name for the first component in order to ease the transition, and it recommends encoding the timestamp as another component. Although optional, these conventions should allow nodes to perform efficient matching of content names by comparing prefixes, without examining the data itself.
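That component-wise matching is easy to sketch. The helpers below are hypothetical (CCNx's actual wire format encodes name components in binary), but they show how a node can match an Interest against prefixes without looking at the content itself:

```python
# Hypothetical helpers (not part of CCNx) demonstrating component-wise
# prefix matching on hierarchical names.

def name_components(name):
    """Split a ccnx: URL-style name into its hierarchical components."""
    return [c for c in name.removeprefix("ccnx:").split("/") if c]

def matches_prefix(prefix, name):
    """True if every component of `prefix` leads `name`, in order."""
    p, n = name_components(prefix), name_components(name)
    return n[:len(p)] == p

# All content published under ccnx:/PARC shares the prefix, whether the
# components below it name directory entries or chunks of a single file.
assert matches_prefix("ccnx:/PARC", "ccnx:/PARC/%00%01%02")
assert not matches_prefix("ccnx:/PARC/docs", "ccnx:/PARC/%00%01%02")
```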
The strategy for running an efficient CCNx node is also left up to the implementer, although here again the project's documentation includes recommendations (under the "CCNx Node Model" sub-heading). The recommendation includes maintaining a content store (CS) indexed by document name, a table of unsatisfied Interest requests, and a table of outbound interfaces on which unsatisfied Interests have been forwarded. It is anticipated that a node will have multiple options at its disposal for forwarding Interest messages it cannot fulfill; choosing which links or routes are best at any given moment allows the node to be opportunistic.
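The recommended node model can be sketched in miniature. The toy Python class below is illustrative only (the real ccnd is written in C and also maintains a forwarding table, per-Interest timeouts, and staleness information); it shows how a content store and a pending-Interest table let a node aggregate identical requests and answer repeats from its cache:

```python
class Node:
    """Toy CCN node: a content store (CS) and pending-Interest table (PIT).

    Illustrative sketch only; faces and links are modeled as callables.
    """
    def __init__(self, links):
        self.cs = {}        # content store: name -> cached data
        self.pit = {}       # pending Interests: name -> faces awaiting data
        self.links = links  # outbound faces for forwarding, as callables

    def on_interest(self, name, reply_face):
        if name in self.cs:                    # cache hit: answer locally,
            reply_face(name, self.cs[name])    # no upstream traffic at all
        elif name in self.pit:                 # identical request pending:
            self.pit[name].add(reply_face)     # aggregate, don't re-forward
        else:
            self.pit[name] = {reply_face}
            for link in self.links:            # a real node would choose
                link(name)                     # links opportunistically

    def on_data(self, name, data):
        self.cs[name] = data                   # cache for future requests
        for face in self.pit.pop(name, ()):
            face(name, data)

# Two identical Interests produce a single upstream forward; once the
# Data arrives, both requesters are satisfied from one copy.
forwarded, received = [], []
node = Node(links=[forwarded.append])
node.on_interest("ccnx:/doc", lambda n, d: received.append((n, d)))
node.on_interest("ccnx:/doc", lambda n, d: received.append((n, d)))
assert forwarded == ["ccnx:/doc"]
node.on_data("ccnx:/doc", b"payload")
assert received == [("ccnx:/doc", b"payload"), ("ccnx:/doc", b"payload")]
```

This is exactly the behavior that would have helped in Jacobson's Olympics example: thousands of identical requests collapse into one upstream fetch, with the cached copy answering the rest.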
The CCNx distribution contains a handful of utilities that allow one to test CCNx on a single machine or on the local network. The latest release is 0.6.2, from October 3. It includes C source for both a simple CCNx forwarder and a content repository, a simple CCNx chat application written in Java, CCNx plugins for the VLC media player and Wireshark packet sniffer, and Android versions of the repository and chat applications. Ubuntu is the only Linux distribution tested, but the dependencies are lightweight: libcrypto, expat, libpcap, and libxml2.
With the software built, the first step is to start the CCNx daemon with bin/ccndstart. This is a script that launches the ccnd daemon and directs output messages to the terminal, although you can also monitor its status from http://localhost:9695 in a web browser. The ccnd daemon is what passes CCNx messages to other nodes; how it does so depends on the network transports defined in its configuration. For testing on one machine, ccnd does not require any configuration; however, editing the ~/.ccnx/ccnd.conf file is required to forward CCNx requests between machines. The example configuration file is light on detail; its only example entry is the line add ccnx:/ccnx.org udp 224.0.23.170 59695, which tells ccnd to route all ccnx: URL requests that begin with ccnx.org to UDP port 59695, over the 224.0.23.170 multicast address. This address is reserved for CCNx with IANA.
The content repository can be started with the bin/ccnr binary. It defaults to running the repository in the current directory, but another location can be specified by setting the CCNR_DIRECTORY environment variable. Similarly, a name prefix for the available files can be set using the CCNR_GLOBAL_PREFIX variable. The repository's other key settings are configured in the data/policy.xml file, the most important setting being which prefixes the repository should answer for. By default, however, this prefix is empty, so the repository will answer all requests — good for testing, but not terribly practical for deployment.
The file utilities include the command-line tools ccnls, ccnputfile, and ccngetfile, as well as the graphical file browser ccnexplore. Dropping files in and rearranging them gets old after a few hours, but the chat application and VLC plugin offer more amusement. Both make it clearer how CCNx's network abstraction simplifies things from the user's perspective. To join a chat room, for example, one needs only the name of the room (e.g., ccnchat ccnx:/testroom1); the underlying transport and the network addresses of the participants never factor in.
In that sense, working in CCN is reminiscent of Zeroconf service discovery, except that there is no discrete discovery step involved. The long hierarchical document names suggest the route-embedding features of IPv6 addresses as well; similarly, the ability to retrieve a valid chunk of data from any source reminds one of BitTorrent. Of course, it is difficult to assess the congestion-prevention capabilities of CCN with just one or two machines, but the same would be true for most traffic-shaping or QoS techniques.
There are still aspects of CCNx that have yet to be finalized, such as how to avoid content naming collisions or spoofing. Perhaps the advisory naming conventions will be formalized, or perhaps, if CCNx becomes an IETF standard, other techniques will arise. CCN also offers better aggregate throughput on the network by answering content requests with a nearby copy of the document, rather than fetching the original again. The downside is that publishers generally want page-view statistics, which a request satisfied from a cache never reports back to them, so some form of reporting may need to be devised.
In his Google talk, Jacobson described CCN as a different perspective on how to use the network, rather than as a new suite of protocols. He compared it to the difference between telephone companies' circuit-switched networks and the first packet-switching data networks. The wires and the nodes were the same — the difference is in how the conversations and connections are expressed. Pessimists are understandably unhappy with the glacial pace of the IETF or of widespread IPv6 adoption, and the same people might argue that CCN will never replace the entrenched protocols like HTTP that dominate today. Perhaps it will not; it is still intriguing to experiment with, however, and one should certainly never discount the commercial Internet players' drive to adopt a new technology when it offers the prospect of saving them money — which CCN certainly could.
While GNOME is used predominantly on Linux desktops, it has historically supported other platforms as well. But the project is again debating whether or not it should add hard dependencies on low-level system components when those dependencies could have the side effect of making the GNOME desktop unusable on non-Linux operating systems and a large portion of Linux distributions. In this instance, workarounds for other systems are not too onerous, but the topic invariably leads to a larger discussion about how the project operates and establishes its direction.
On October 19, Bastien Nocera announced to the GNOME desktop-devel-list that he would make systemd a hard requirement for gnome-settings-daemon's power plugin. Doing so, he said, would allow the plugin to better handle suspending the system, and it would simplify the power management codebase. The patch set also drops support for ConsoleKit and UPower. Nocera enumerated several benefits of using systemd for suspends, including providing better information to applications about suspending.
Perhaps predictably, several developers replied that adding a systemd dependency would make GNOME harder to deploy on systems that do not use systemd: in particular Ubuntu, Gentoo, the BSDs, and Solaris. At the very least, these downstream projects would have to patch gnome-settings-daemon, which makes maintenance more difficult for everyone. The distributions will have to dedicate time to patching and testing their branches, and a number of bug reports will invariably make their way back to Nocera and the other GNOME developers, too — bug reports that GNOME can do little about, since they involve downstream work.
Antoine Jacoutot said that the change would impact OpenBSD's decision to ship GNOME in future releases; Sebastien Bacher echoed the sentiment for Ubuntu, as did Brian Cameron for Solaris, and Alexandre Rostovtsev for Gentoo. For the present, however, the non-systemd projects are looking for a workaround, starting with a way to re-implement the systemd features expected by the power plugin. Although many seemed to initially interpret Nocera's "hard requirement" message to mean that gnome-settings-daemon would have a build-time dependency on systemd, he later clarified that the change relied on systemd's D-Bus interface, and that through run-time detection, the system would simply disable the power plugin if systemd was unavailable. According to another message, the power plugin only uses two systemd interfaces, inhibitor locks and session tracking.
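The run-time-detection approach can be illustrated with a small sketch. The logind bus name (org.freedesktop.login1) is real, but everything else below is hypothetical; the point is only that the plugin probes for systemd over D-Bus when it starts, and disables itself rather than failing at build time when systemd is absent:

```python
# Sketch of run-time detection (not gnome-settings-daemon's actual code).
# The bus-name check is injected as a callable so the decision logic can
# be shown without a live D-Bus session; on a real system it would be a
# D-Bus NameHasOwner query for systemd-logind's well-known name.

LOGIND_BUS_NAME = "org.freedesktop.login1"

def power_plugin_active(name_has_owner):
    """Enable the systemd-based power plugin only if logind is on the bus."""
    return name_has_owner(LOGIND_BUS_NAME)

# systemd present: the inhibitor-lock and session-tracking code paths run
assert power_plugin_active(lambda name: name == LOGIND_BUS_NAME)
# no systemd (e.g. a ConsoleKit system): the plugin is simply disabled
assert not power_plugin_active(lambda name: False)
```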
But if the plugin that introduces the dependency only needs to access two well-known interfaces over D-Bus, the question becomes whether or not systemd is actually required at all. In the email cited above, Bacher asked if Nocera had considered defining the interfaces as a standard, so that GNOME would work on any system that implemented them. Nocera responded that the question should be directed at the systemd developers, as he was not interested in taking on the task. That suggestion garnered little support; although Rostovtsev expressed some interest, Bacher replied that he did not have time to undertake the task.
Complicating matters further is the prospect that if systemd becomes a dependency for one GNOME module, it is increasingly likely that additional modules will start expecting it as well. That could lead to the point where the non-systemd OSes conclude that the GNOME desktop environment is more work to support than it is worth, and the OS projects simply drop it. Jacoutot, who maintains the GNOME packages for OpenBSD, expressed concern over that possibility:
Similarly, in his message, Cameron noted that Solaris already has its own power management features, several of which support enterprise and cluster-management features that are not addressed by systemd. As a result, although Solaris is interested in making GNOME run if possible, the potential loss of features expected by customers is a likely deal-breaker.
For his part, Nocera argued that as module maintainer, the decision was his, and ultimately he needed to make it on the basis of how it affected his ability to maintain the code — not on the needs of downstream projects. As he told Jacoutot:
But there were critics of adding the dependency within the GNOME project, too. Florian Müllner pointed out that other packages would be affected by the patches, including GNOME Shell, and Colin Walters argued that because the full GNOME desktop depends on gnome-settings-daemon, the move did affect other packages and other maintainers.
In particular, Walters took issue with dropping support for ConsoleKit and UPower systems, because doing so would cause a major regression for GNOME on systems that used them. He offered to take over maintainership of the relevant bits of code. Nocera objected to that idea, characterizing it as a "we 'support' it, kind of, but not really" approach that would result in numerous bugs. Walters replied that bugs of that sort are an ongoing burden already, and that his objection was one based on general principle:
I'm all for making GNOME+systemd kick ass. But not at the cost of giving up the "rough consensus and working code" aspect that forms GNOME and other FOSS projects.
Your process here was to post on a Friday that you were going to delete the code, have a lot of feedback even over the weekend, then on Monday *do it anyways*. That's just not the way we should approach problems in GNOME.
Walters also pointed out another issue, which was that maintaining ConsoleKit support is unnecessarily complex because of how little code is shared between the various GNOME modules. Matthias Clasen agreed, saying "in some cases (such as power or randr), we have dbus interfaces, in others we share code in libraries (randr again, xkb, backgrounds), and we also copy some glue code around (user accounts come to mind)."
Certainly, no one would argue that Nocera and other module maintainers are not free to make the decisions that they see as the best path forward; in fact GNOME has long held "maintainers rule" as a mantra. As this case illustrates, though, that philosophy is far trickier to live by in the real world than in the abstract. Maintaining systemd and ConsoleKit support in parallel is a considerable amount of work, and it does not seem fair to impose that burden on Nocera. But as Walters and others pointed out, GNOME's modules do not live in isolation — introducing an external dependency in one module pulls it in for others (which can become chaotic), while dropping support for existing configurations harms users (and should not be done cavalierly).
Systemd is also something of a special case because its availability is a decision historically made by the distributions (and other downstream OS projects), based on system-wide factors that GNOME does not control. As Jacoutot explained, implementing workarounds for the gnome-settings-daemon power plugin is not likely to be a Herculean ordeal, but adding such a low-level dependency suggests that the number of workarounds required will begin to increase. Systemd's maintainers have no problem with this; they are intent on making the tool useful for more and more tasks.
But GNOME as a cross-platform project has different considerations. In this case, perhaps the long-term impact of the decision means "maintainers rule" is insufficient. Vincent Untz thought so, saying "this is exactly the kind of stuff that, at least from my perspective, was raised during the [Annual General Meeting] at GUADEC." At the meeting, several suggested that GNOME needed a "Technical Board" of some sort to set long-term strategy and make broader decisions that would affect multiple modules.
There has not been significant movement on that point since GUADEC; at the time the GNOME Release Team told the audience that it had been serving as sort of an ad hoc decision-making board in recent years, but that it was not entirely comfortable with the role. Nevertheless, it is still functioning in that capacity; Nocera pushed the changes through on October 22, in response to which Frederic Peters from the Release Team commented:
Of course this is still just 3.7.1, but anyway. I'd suggest we do *not* ship gnome-settings-daemon 3.7.1 in GNOME 3.7.1 and wait for a project-wide decision on how support of ConsoleKit systems should be (dis)continued.
Thus, GNOME users should know in a few days whether or not GNOME 3.8 is likely to require systemd or to drop support for ConsoleKit. But whichever happens, the debate is likely to continue.
The Boxee Box is based on the Boxee software which, in turn, is based on the XBMC media player. It gained an enthusiastic early following as the result of its open-source roots and the device's plugin infrastructure. The Boxee Box handled a wide variety of media types from the outset; in places where it fell short, others could easily provide plugins to fill in the gaps. So the Boxee Box became known as a device that could play almost anything.
Boxee Box users lost some of their enthusiasm over time. Early versions could be "unboxed" and made to run arbitrary software, but the company closed that hole in 2010 and it does not appear that anybody has figured out how to break newer versions. Bug fixes and improvements from Boxee slowed down over time, leading to user frustration. And now those users, the people who have supported Boxee to this point, have been informed that Boxee is abandoning the device in favor of its upcoming, USA-only "Boxee TV" product.
One can maybe understand a company that feels the need to declare end-of-life for a two-year-old consumer electronics product; such offerings often don't last anywhere near that long. But Boxee has not just left a product behind; it also left the entire community that had embraced that product. The new "Boxee TV" is a clear step backward in a number of regards: no plugin support, no support for arbitrary file formats, and a highly proprietary architecture throughout. It features a new deal with US cable provider Comcast (ensuring that Boxee will not be blocked by the just-allowed encryption of basic cable content in the US) and includes features designed to warm the entertainment industry's heart. This article in The Verge describes the situation clearly:
What has happened here is clear: Boxee has gone from trying to make its customers happy to making the entertainment industry happy instead. If that meant dumping its old customers and the development community that had built itself around the older product, then so be it. As XBMC developer Nathan Betzen put it, Boxee has moved from trying to expand its users' rights under copyright law to actively restricting those rights. In a sense, Boxee is telling us that we cannot have a box with plugin support and the ability to play "weird video files" — much less a truly open system — under the current copyright regime.
Boxee has also driven home a lesson we've heard many times before: just putting free software onto a device does not make the device free. Most of what is in the Boxee Box is freely licensed, but, without the ability to replace the software, the Boxee Box itself is not under its owner's control. It can have features taken away, contain evil software, or be turned into an obsolete, unsupported paperweight at a corporation's whim. Purchasing such a device may or may not be a rational decision, depending on what the purchaser's goals are. Developing for this kind of device seems like a mistake; one is working to improve an edifice whose foundation can be yanked out at any time.
Suitably skilled users who are aware of these issues will, of course, have avoided a device like the Boxee Box from the outset. It is certainly possible to put XBMC onto a properly equipped computer and have a truly free device to feed one's video consumption habits. That option has not gone away, but the world has still gotten a little worse; from Nathan's post again:
Without an off-the-shelf open system, most viewers are going to be stuck with whatever the entertainment industry is willing to let them have. Those who want something more flexible will need to build their own systems, run into all kinds of issues trying to access content that is rightfully available to them, and live under the assumption that their primary motivation is piracy.
Version 3 of the GPL will not save us here; manufacturers have shown every sign of being willing to dump software when its licensing gets in the way of their business objectives. Boxee went from being "passionate about open source software" to embracing a fully proprietary solution even without the extra requirements found in GPLv3 to worry about. Solutions to this problem, if they exist, will have to come from elsewhere.
What is needed is a combination of truly free alternatives, a willingness among buyers to insist on free devices, and copyright reform. In the handset market, buyers have begun to understand how nice it is to have alternatives like CyanogenMod — and to not have to go through a scary "jailbreaking" process to install it. As the content industry tries to tighten its grip on what our systems can do, awareness of the value of freedom may grow in this market as well. But it will be too late for Boxee Box owners who are now discovering that they lack the freedom to improve a device after its manufacturer has lost interest.
As even moderately sophisticated users of the web are aware by now, the great majority of web sites that we visit have a keen interest in tracking their users. At the simplest end of the scale, visitor tracking takes the form of web server logs that record the source IP address of an HTTP request, the HTTP request itself, and the browser's user agent string. Further along the scale are simple cookie-based systems used to track the number of unique visitors to a site or to track each user's navigation around a site. Going further still are the cookie-based systems and widget-based systems (Facebook's "like" buttons, Google's "+1" buttons, and the like) that an increasing number of companies are using to track users' surfing habits across web sites, typically to gather a picture of their browsing habits in order to target them with more "personalized" advertising.
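As a toy illustration of that cross-site tracking (this is not how Privacyfix itself works), the sketch below classifies each visited page's cookie domains as first- or third-party; a third-party domain that recurs across unrelated sites is in a position to correlate the visits:

```python
from urllib.parse import urlsplit

# Illustrative only: cookie domains and URLs here are made up.

def third_party_cookies(page_url, cookie_domains):
    """Return cookie domains that don't match the visited page's host."""
    site = urlsplit(page_url).hostname
    return {d for d in cookie_domains
            if d != site and not site.endswith("." + d)}

visits = {
    "https://news.example/story": {"news.example", "tracker.example"},
    "https://shop.example/cart":  {"shop.example", "tracker.example"},
}
seen = {}
for url, domains in visits.items():
    for dom in third_party_cookies(url, domains):
        seen.setdefault(dom, set()).add(url)

# tracker.example shows up as a third party on both unrelated sites,
# so it can link the two visits to the same browser.
assert seen == {"tracker.example": {"https://news.example/story",
                                    "https://shop.example/cart"}}
```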
Furthermore, many of the free web services to which we provide any kind of personal information have a keen interest in monetizing that information as far as their stated privacy policies allow. And in some cases those companies are prepared to be flexible about their policies when it suits their business goals. To take just one of the most noted examples, Facebook's constantly morphing range of privacy settings, and their defaults, appear to be designed more to suit the requirements of Facebook's paying advertisers than those of its users. As has been pithily observed, "if you are not paying for it, you're not the customer; you're the product being sold."
However, even for sophisticated users, preventing tracking and controlling the privacy of personal data can be challenging. Less sophisticated users can have trouble even finding which part of a web service's interface is used to control the privacy settings that determine how a company uses their data. While many users may be aware of cookies, probably only a minority actively try to control their use. And few of us have any idea how much the information that we provide to free web services might be worth to the companies providing the services.
Privacyfix, a plug-in for the Firefox and Chrome browsers released earlier this month, aims to educate users on how they are tracked and how their personal data is used; it also assists them with the task of locking down the privacy of their personal data on some web services. And perhaps most eye-catchingly, it attempts to give the user an estimate of the value of their web surfing habits for a couple of the web service giants. The plug-in is free as in beer, but while the web site mentions some collaborations with open source projects, no mention is made of the plug-in itself being under a free license; one assumes that it is not.
Installation of the plug-in is accomplished by clicking a link on the Privacyfix home page. The actual installation takes just a few seconds, but is followed by a set-up phase whose duration depends on the speed of the user's Internet link. During this phase the plug-in is downloading a data set containing information about a large number of commonly used web sites. The Privacyfix FAQ emphasizes that the data exchange that is going on at this point is almost entirely one way. No browser information (such as cookies, history, or bookmarks) is sent to the Privacyfix site. The only information that goes to the site is unavoidable technical information such as the user's IP address, which PrivacyChoice, the company that produces the plug-in, claims to delete immediately.
Once the download is complete, the plug-in analyzes your browser's cookies and browser history, and—if you are logged in—your Facebook privacy settings and Google account settings to give you a picture of just how tracked your life on the web is. The plug-in then presents its results in a tabbed browser display of the form shown to the right.
The first two tabs provide information relating to the two web giants, Facebook and Google. In the lower right portion of each tab, the plug-in gives an indication of the extent to which your browsing is tracked or analyzed, and, based on the last 60 days of browser activity, estimates the annual monetary value of your browsing habits to the service. Based on the database of web sites that Privacyfix checks, the plug-in provides an indication of just how pervasive Facebook tracking is: an astonishing 83% of the sites that I visited are tracked by Facebook. In addition, I was informed that Facebook makes just a few US cents per year at my level of activity. Although my usage of Facebook is so low as to almost put me in the non-user category, this does seem like an underestimate, especially given the fact that Privacyfix tells me that Facebook tracks so many of the sites I visit. The developers note that these monetary estimates are based on the work of TREFIS, a company that estimates the monetization of users' interaction with major web services; the estimates shown by Privacyfix are necessarily imprecise.
The right-hand side of the browser display is more practically interesting. A series of horizontal bands provides visual feedback on how locked down your Facebook privacy settings are; hovering the mouse over each indicator provides further explanation about the setting. In this display, a green band indicates that Privacyfix considers your current setting to be good from a privacy point of view. An orange band indicates a setting that needs attention; the display shown above indicates what one unsophisticated Facebook user in this editor's household sees when using the "Facebook" tab. (And yes, there will be a talk at home tonight about Facebook privacy settings.)
Simply reading the pop-up explanation on each privacy indicator is informative; I didn't previously know that Facebook may automatically share my profile information when I visit certain web sites. One of the nice features of the plug-in is that each of the indicators can be clicked to change the privacy setting, typically by navigating the user to the appropriate part of the Facebook web interface that controls the setting—a boon to those who have, like your editor, struggled to navigate around Facebook's privacy settings. Once the settings have been changed (in any way), Privacyfix sets the corresponding indicator green.
Privacyfix takes a policy-neutral approach to your privacy settings. It will indicate privacy settings that may need attention, but won't automatically change any settings for you. The rationale for that approach is that you may have some quite practical reasons for surrendering some level of privacy; for example, disabling Facebook's "like" button may interfere with the rendering of some web pages. Similarly, disabling Google's recording of your web search history means that future searches may lead to less personalized results. Privacyfix leaves the user to make those choices.
The display in the "Google" tab is similar to the Facebook tab. The lower right portion tells me that Google collects data on 60% of the pages I visited in the last 60 days. The big surprise here is the monetary value of my browsing habits for Google: Privacyfix estimates these at US$1179 per year. Although I spend a lot of the day on the web, this number does seem implausibly high, especially when compared to the Facebook number. However, the point is made: our browsing habits are worth a lot of money to Google. Again, a set of clickable indicators on the right-hand side of the display provides a basic education on how Google uses data about the user and allows privacy settings to be changed.
The "Websites" tab displays the favicons of web sites that the user has visited that Privacyfix has rated as having some privacy issues, based on the sites' privacy policies. Sites that share data outside the parent company and its affiliates are placed in a special section at the top of the display. (I was surprised to find that the Deutsche Bahn, the German railway company, reserves the right to share the personal data that I've given to them with third parties.) A "fix" button in this part of the display allows you to automatically generate an email requesting removal of personal data on these sites; of course, in many cases there is no guarantee that such a request will be honored. Clicking each favicon drills down to a page displaying further information about the corresponding web site's policies and which other companies track your visits to the site and what their tracking policies are.
Privacyfix's "Tracking" tab provides a visual overview of which companies are currently using tracking cookies to monitor user visits. This sort of visual display provides an impressive reminder of just how tracked we are: most frequent web users are likely to see that they are tracked by at least a couple of hundred web sites. Again, each icon is clickable, leading to further information about the site's tracking policies, and there are "fix" buttons to disable tracking cookies and ad tracking.
The final tab, "Healthbar", places a "privacy health" button at the far right of the browser address bar. While browsing the web, you can click this button to obtain a pop-up privacy assessment of the current site, if it is in the Privacyfix database. To the right is Privacyfix's health display for Google.com. Again, this sort of display is an effective tool for educating users about web privacy. Most of the sites I visited that Privacyfix knows about showed at least some orange indicators warning of potential privacy issues; notably, Wikipedia had a clean green bill of health.
When it comes to understanding and controlling how our private data is used on the web, Privacyfix seems useful along several dimensions. First and foremost is its role as an educational tool for web users of all levels of sophistication, helping them gain a better understanding of how they are tracked on the web and learn about the privacy policies of the companies doing the tracking. Increasing user understanding in this area can only be a good thing, inasmuch as it may lead to greater public pressure on companies to adopt more ethical privacy and tracking policies.
|Package(s):||chromium||CVE #(s):||CVE-2012-2874 CVE-2012-2876 CVE-2012-2877 CVE-2012-2878 CVE-2012-2879 CVE-2012-2880 CVE-2012-2881 CVE-2012-2882 CVE-2012-2883 CVE-2012-2884 CVE-2012-2885 CVE-2012-2886 CVE-2012-2887 CVE-2012-2888 CVE-2012-2889 CVE-2012-2891 CVE-2012-2892 CVE-2012-2894 CVE-2012-2896 CVE-2012-2900 CVE-2012-5108 CVE-2012-5110 CVE-2012-5111 CVE-2012-5112 CVE-2012-5376|
|Created:||October 22, 2012||Updated:||October 24, 2012|
|Description:||There are multiple vulnerabilities in versions of Chromium before 22.0.1229.94. See the CVE entries for more information.|
|Created:||October 22, 2012||Updated:||November 6, 2012|
|Description:||From the CVE entry:
The strchr function in procmime.c in Claws Mail (aka claws-mail) 3.8.1 and earlier allows remote attackers to cause a denial of service (NULL pointer dereference and crash) via a crafted email.
|Created:||October 24, 2012||Updated:||April 9, 2013|
|Description:||From the Debian advisory:
cups-pk-helper, a PolicyKit helper to configure cups with fine-grained privileges, wraps CUPS function calls in an insecure way. This could lead to uploading sensitive data to a cups resource, or overwriting specific files with the content of a cups resource. The user would have to explicitly approve the action.
|Created:||October 24, 2012||Updated:||October 24, 2012|
|Description:||From the CVE: Directory traversal vulnerability in gitolite 3.x before 3.1, when wild card repositories and a pattern matching "../" are enabled, allows remote authenticated users to create arbitrary repositories and possibly perform other actions via a .. (dot dot) in a repository name.|
|Created:||October 23, 2012||Updated:||January 9, 2013|
|Description:||From the CVE entry:
Buffer overflow in the trash buffer in the header capture functionality in HAProxy before 1.4.21, when global.tune.bufsize is set to a value greater than the default and header rewriting is enabled, allows remote attackers to cause a denial of service and possibly execute arbitrary code via unspecified vectors.
|Package(s):||java-1.7.0-oracle||CVE #(s):||CVE-2012-1531 CVE-2012-1532 CVE-2012-1533 CVE-2012-3143 CVE-2012-3159 CVE-2012-5067 CVE-2012-5083|
|Created:||October 19, 2012||Updated:||December 3, 2012|
From the Red Hat advisory:
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier, 6 Update 35 and earlier, 5.0 Update 36 and earlier, and 1.4.2_38 and earlier; and JavaFX 2.2 and earlier; allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to 2D. (CVE-2012-1531)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier and 6 Update 35 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment. (CVE-2012-1532)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier, and 6 Update 35 and earlier, allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment. (CVE-2012-1533)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier, 6 Update 35 and earlier, and 5.0 Update 36 and earlier allows remote attackers to affect confidentiality, integrity, and availability, related to JMX. (CVE-2012-3143)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier, and 6 Update 35 and earlier, allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment. (CVE-2012-3159)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier allows remote attackers to affect confidentiality via unknown vectors related to Deployment. (CVE-2012-3167)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 7 and earlier, 6 Update 35 and earlier, 5.0 Update 36 and earlier, 1.4.2_38 and earlier, and JavaFX 2.2 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to 2D. (CVE-2012-3183)
|Package(s):||libproxy||CVE #(s):||CVE-2012-4504 CVE-2012-4505|
|Created:||October 22, 2012||Updated:||April 8, 2014|
|Description:||From the Ubuntu advisory:
Tomas Mraz discovered that libproxy incorrectly handled certain PAC files. A remote attacker could use this issue to cause libproxy to crash, or to possibly execute arbitrary code.
|Created:||October 18, 2012||Updated:||October 24, 2012|
From the Fedora advisory:
Florian Weimer of the Red Hat Product Security Team found that mom created PID files in /var/run as world-writable. This could allow a malicious local attacker to edit the PID file so that, when mom shuts down or restarts, it kills some process other than mom, one that the attacker would not normally have access to terminate.
This is fixed upstream.
|Created:||October 22, 2012||Updated:||January 17, 2013|
|Description:||From the Red Hat bugzilla:
An upstream Ruby security notice indicated that Ruby suffered from a flaw whereby unintended files could be created if the file path or name contained a NUL character. Certain methods like IO#open did not check the filename passed to them, and simply passed those strings to lower-layer routines, which could lead to unintended files being created.
|Created:||October 22, 2012||Updated:||December 31, 2012|
|Description:||From the Debian advisory:
It was discovered that a buffer overflow in libtiff's parsing of files using PixarLog compression could lead to the execution of arbitrary code.
|Created:||October 24, 2012||Updated:||December 24, 2013|
|Description:||From the Debian advisory:
gpernot discovered that Tinyproxy, an HTTP proxy, is vulnerable to a denial of service from remote attackers sending crafted request headers.
|Created:||October 24, 2012||Updated:||November 6, 2012|
|Description:||From the Debian advisory:
"function name" lines returned by diff are not properly escaped, allowing attackers with commit access to perform cross site scripting.
Page editor: Jake Edge
Brief items

released on October 20. Linus comments:
Stable updates: 3.0.47, 3.4.15, and 3.6.3 were released on October 21; each contains another set of important fixes. Note that 3.4.15 and 3.6.3 also contain an ext4 data corruption bug (as do their immediate predecessors and 3.5.7) so waiting for the next update might be advisable. 3.0.47, instead, contains a block subsystem patch that "could cause problems"; 3.0.48 was released on October 22 with a revert.
Meanwhile, 3.2.32 was released on October 18.
The problem, as explained in this note from Ted Ts'o, has to do with how the ext4 journal is managed. In some situations, unmounting the filesystem fails to truncate the journal, leaving stale (but seemingly valid) data there. After a single unmount/remount (or reboot) cycle little harm is done; some old transactions just get replayed unnecessarily. If the filesystem is quickly unmounted again, though, the journal can be left in a corrupted state; that corruption will be helpfully replayed onto the filesystem at the next mount.
Fixes are in the works. The ext4 developers are taking some time, though, to be sure that the problem has been fully understood and completely fixed; there are signs that the bug may have roots far older than the patch that actually caused it to bite people. Once that process is complete, there should be a new round of stable updates (possibly even for 3.5, which is otherwise at end-of-life) and the world will be safe for ext4 users again.
(Thanks are due to LWN reader "nix" who alerted readers in the comments and reported the bug to the ext4 developers).
Update: Ted now thinks that his initial diagnosis was incomplete at best; the problem is not as well understood as it seemed. Stay tuned.

The Raspberry Pi Foundation has announced that the source code for its video driver is now available under the BSD license. "If you’re not familiar with the status of open source drivers on ARM SoCs this announcement may not seem like such a big deal, but it does actually mean that the BCM2835 used in the Raspberry Pi is the first ARM-based multimedia SoC with fully-functional, vendor-provided (as opposed to partial, reverse engineered) fully open-source drivers, and that Broadcom is the first vendor to open their mobile GPU drivers up in this way."
Kernel development news

Scheduling with an eye toward power efficiency remains an open problem; Vincent Guittot's small-task packing patch set may be a step in the right direction.
A "small task" in this context is one that uses a relatively small amount of CPU time; in particular, small tasks are runnable less than 25% of the time. Such tasks, if they are spread out across a multi-CPU system, can cause processors to stay awake (and powered up) without actually using those processors to any great extent. Rather than keeping all those CPUs running, it clearly makes sense to coalesce those small tasks onto a smaller number of processors, allowing the remaining processors to be powered down.
The first step toward this goal is, naturally, to be able to identify those small tasks. That can be a challenge: the scheduler in current kernels does not collect the information needed to make that determination. The good news is that this problem has already been solved by Paul Turner's per-entity load tracking patch set, which allows for proper tracking of the load added to the system by every "entity" (being either a process or a control group full of processes) in the system. This patch set has been out-of-tree for some time, but the clear plan is to merge it sometime in the near future.
The kernel's scheduling domains mechanism represents the topology of the underlying system; among other things, it is intended to help the scheduler decide when it makes sense to move a process from one CPU to another. Vincent's patch set starts by adding a new flag bit to indicate when two CPUs (or CPU groups, at the higher levels) share the same power line. In the shared case, the two CPUs cannot be powered down independently of each other. So, when two CPUs live in the same power domain, moving a process from one to the other will not significantly change the system's power consumption. By default, the "shared power line" bit is set for all CPUs; that preserves the scheduler's current behavior.
The real goal, from the power management point of view, is to vacate all CPUs on a given power line so the whole set can be powered down. So the scheduler clearly wants to use the new information to move small tasks out of CPU power domains. As we have recently seen, though, process-migration code needs to be written carefully lest it impair the performance of the scheduler as a whole. So, in particular, it is important that the scheduler not have to scan through a (potentially long) list of CPUs when contemplating whether a small task should be moved or not. To that end, Vincent's patch assigns a "buddy" to each CPU at system initialization time. Arguably "buddy" is the wrong term to use, since the relationship is a one-way affair; a CPU can dump small tasks onto its buddy (and only onto the buddy), but said buddy cannot reciprocate.
Imagine, for a moment, a simple two-socket, four-CPU system that looks (within the constraints of your editor's severely limited artistic capabilities) like this:
For each CPU, the scheduler tries to find the nearest suitable CPU on a different power line to buddy it with. The most "suitable" CPU is typically the lowest-numbered one in each group, but, on heterogeneous systems, the code will pick the CPU with the lowest power consumption on the assumption that it is the most power-efficient choice. So, if each CPU and each socket in the above system could be powered down independently, the buddy assignments would look like this:
Note that CPU 0 has no buddy, since it is the lowest-numbered processor in the system. If CPUs 2 and 3 shared a power line, the buddy assignments would be a little different:
In each case, the purpose is to define an easy path by which an alternative, power-independent CPU can be chosen as the new home for a small task.
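One simplified reading of that buddy-assignment scheme can be sketched as follows; the real patch operates on scheduling-domain structures, so the flat-array representation and the names here are purely illustrative. Each CPU is paired with the nearest lower-numbered CPU on a different power line, and CPU 0 ends up with no buddy:

```c
/* Illustrative sketch, not the actual kernel code.  CPUs sharing a
 * power line have the same power_line id; each CPU's buddy is the
 * closest lower-numbered CPU on a different power line. */
#define NO_BUDDY (-1)

static void assign_buddies(const int *power_line, int *buddy, int ncpus)
{
	for (int cpu = 0; cpu < ncpus; cpu++) {
		buddy[cpu] = NO_BUDDY;
		/* scan downward for the nearest CPU on another line */
		for (int other = cpu - 1; other >= 0; other--) {
			if (power_line[other] != power_line[cpu]) {
				buddy[cpu] = other;
				break;
			}
		}
	}
}
```

For a four-CPU system where every CPU has its own power line, this yields the chain 3→2→1→0; if CPUs 2 and 3 share a power line, CPU 3's buddy becomes CPU 1 instead.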
With that structure in place, the actual changes to the scheduler are quite small. The normal load-balancing code is unaffected for the simple reason that small tasks, since they are more likely than not to be sleeping when the load balancer runs, tend not to be moved in the balancing process. Instead, the scheduler will, whenever a known small task is awakened, consider whether that task should be moved from its current CPU to the buddy CPU. If the buddy is sufficiently idle, the task will be moved; otherwise the normal wakeup logic runs as always. Over time, small tasks will tend to migrate toward the far end of the buddy chain as long as the load on those processors does not get too high. They should, thus, end up "packed" on a relatively small number of power-efficient processors.
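The wakeup-time decision just described might be sketched like this; again, the types, field names, and the idleness threshold are illustrative assumptions rather than the patch's actual code:

```c
#include <stdbool.h>

/* Illustrative sketch of the wakeup-time packing decision: a small
 * task is moved to its CPU's buddy only if that buddy is
 * sufficiently idle. */
struct cpu_state {
	int buddy;		/* buddy CPU, or -1 if none       */
	unsigned int load;	/* current load, 0..100 (percent) */
};

/* Return the CPU the waking task should run on. */
static int select_wakeup_cpu(const struct cpu_state *cpus,
			     int cur, bool task_is_small,
			     unsigned int idle_threshold)
{
	int buddy = cpus[cur].buddy;

	if (task_is_small && buddy >= 0 &&
	    cpus[buddy].load <= idle_threshold)
		return buddy;	/* pack the task onto the buddy */
	return cur;		/* fall back to normal wakeup logic */
}
```

If the buddy is too busy, the function simply reports the current CPU, leaving the choice to the scheduler's normal wakeup path.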
Vincent's patch set included some benchmark results showing that throughput with the modified scheduler is essentially unchanged. Power consumption is a different story, though; using "cyclictest" as a benchmark, he showed power consumption at about ⅓ its previous level. The benefits are sure to be smaller with a real-world workload, but it seems clear that pushing small tasks toward a small number of CPUs can be a good move. Expect discussion of approaches like this one to pick up once the per-entity load tracking patches have found their way into the mainline.
In an article last week, we saw that the EPOLL_CTL_DISABLE operation proposed by Paton Lewis provides a way for multithreaded applications that cache information about file descriptors to safely delete those file descriptors from an epoll interest list. For the sake of brevity, in the remainder of this article we'll use the term "the EPOLL_CTL_DISABLE problem" to label the underlying problem that EPOLL_CTL_DISABLE solves.
This article revisits the EPOLL_CTL_DISABLE story from a different angle, with the aim of drawing some lessons about the design of the APIs that the kernel presents to user space. The initial motivation for pursuing this angle arises from the observation that the EPOLL_CTL_DISABLE solution has some difficulties of its own. It is neither intuitive (it relies on some non-obvious details of the epoll implementation) nor easy to use. Furthermore, the solution is somewhat limiting, since it forces the programmer to employ the EPOLLONESHOT flag. Of course, these difficulties arise at least in part because EPOLL_CTL_DISABLE is designed so as to satisfy one of the cardinal rules of Linux development: interface changes must not break existing user-space applications.
If there had been an awareness of the EPOLL_CTL_DISABLE problem when the epoll API was originally designed, it seems likely that a better solution would have been built, rather than bolting on EPOLL_CTL_DISABLE after the fact. Leaving aside the question of what that solution might have been, there's another interesting question: could the problem have been foreseen?
One might suppose that predicting the EPOLL_CTL_DISABLE problem would have been quite difficult. However, the synchronized-state problem is well known and the epoll API was designed to be thread friendly. Furthermore, the notion of employing a user-space cache of the ready list to prevent file descriptor starvation was documented in the epoll(7) man page (see the sections "Example for Suggested Usage" and "Possible Pitfalls and Ways to Avoid Them") that was supplied as part of the original implementation.
In other words, almost all of the pieces of the puzzle were known when the epoll API was designed. The one fact whose implications might not have been clear was the presence of a blocking interface (epoll_wait()) in the API. One wonders if more review (and building of test applications) as the epoll API was being designed might have uncovered the interaction of epoll_wait() with the remaining well-known pieces of the puzzle, and resulted in a better initial design that addressed the EPOLL_CTL_DISABLE problem.
So, the first lesson from the EPOLL_CTL_DISABLE story is that more review is necessary in order to create better API designs (and we'll see further evidence supporting that claim in a moment). Of course, the need for more review is a general problem in all aspects of Linux development. However, the effects of insufficient review can be especially painful when it comes to API design. The problem is that once an API has been released, applications come to depend on it, and it becomes at the very least difficult, or, more likely, impossible to later change the aspects of the API's behavior that applications depend upon. As a consequence, a mistake in API design by one kernel developer can create problems that thousands of user-space developers must live with for many years.
A second lesson about API design can be found in a comment that Paton made when responding to a question from Andrew Morton about the design of EPOLL_CTL_DISABLE. Paton was speculating about whether a call of the form:
epoll_ctl(epfd, EPOLL_CTL_DEL, fd, &epoll_event);
could be used to provide the required functionality. The EPOLL_CTL_DEL operation does not currently use the fourth argument of epoll_ctl(), and applications should specify it as NULL (but more on that point in a moment). The idea would be that "epoll_ctl [EPOLL_CTL_DEL] could set a bit in epoll_event.events (perhaps called EPOLLNOTREADY)" to notify the caller that the file descriptor was in use by another thread.
But Paton noted a shortcoming of this approach:
In other words, although the EPOLL_CTL_DEL operation doesn't use the epoll_event argument, the caller is not required to specify it as NULL. Consequently, existing applications are free to pass random addresses in epoll_event. If the kernel now started using the epoll_event argument for EPOLL_CTL_DEL, it seems likely that some of those applications would break. Even though those applications might be considered poorly written, that's no justification for breaking them. Quoting Linus Torvalds:
The lesson here is that when an API doesn't use an argument, usually the right thing to do is for the implementation to include a check that requires the argument to have a suitable "empty" value, such as NULL or zero. Failure to do that means that we may later be prevented from making the kind of API extensions that Paton was talking about. (We can leave aside the question of whether this particular extension to the API was the right approach. The point is that the option to pursue this approach was unavailable.) The kernel-user-space API provides numerous examples of failure to do this sort of checking.
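The principle is easy to demonstrate with a user-space model. This hypothetical dispatcher (not the real epoll_ctl(); the operation names and types are made up) insists that an operation's unused pointer argument be NULL, preserving the freedom to give it a meaning later:

```c
#include <stddef.h>
#include <errno.h>

/* Hypothetical user-space model of an epoll_ctl()-like API that
 * validates its unused argument from day one. */
enum ctl_op { CTL_ADD, CTL_DEL };

struct event { unsigned int events; };

static int ctl(enum ctl_op op, int fd, struct event *ev)
{
	(void)fd;	/* this sketch tracks no real state */

	switch (op) {
	case CTL_ADD:
		if (ev == NULL)
			return -EINVAL;	/* ADD genuinely needs it */
		/* ... register fd with ev->events ... */
		return 0;
	case CTL_DEL:
		/* DEL does not use ev: insist on NULL now, so a
		 * future extension can use the argument without
		 * breaking any existing caller. */
		if (ev != NULL)
			return -EINVAL;
		/* ... remove fd ... */
		return 0;
	}
	return -EINVAL;
}
```

Had EPOLL_CTL_DEL been implemented this way from the beginning, no working application could have been passing random pointers, and extensions along the lines of the EPOLLNOTREADY idea would have remained on the table.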
However, there is yet more life in this story. Although there have been many examples of system calls that failed to check that "empty" values were passed for unused arguments, it turns out that epoll_ctl(EPOLL_CTL_DEL) fails to include the check for another reason. Quoting the BUGS section of the epoll_ctl() man page:
In other words, applications that use EPOLL_CTL_DEL are not only permitted to pass random values in the epoll_event argument: if they want to be portable to Linux kernels before 2.6.9 (which fixed the problem), they are required to pass a pointer to some random, but valid user-space address. (Of course, most such applications would simply allocate an unused epoll_event structure and pass a pointer to that structure.) Here, we're back to the first lesson: more review of the initial epoll API design would almost certainly have uncovered this fairly basic design error. (It's this writer's contention that one of the best ways to conduct that sort of review is by thoroughly documenting the API, but he admits to a certain bias on this point.)
Failing to check that unused arguments (or unused pieces of arguments) have "empty" values can cause subtle problems long after the fact. Anyone looking for further evidence on that point does not need to go far: the epoll_ctl() system call provides another example.
Linux 3.5 added a new epoll flag, EPOLLWAKEUP, that can be specified in the epoll_event.events field passed to epoll_ctl(). The effect of this flag is to prevent the system from being suspended while epoll readiness events are pending for the corresponding file descriptor. Since this flag has a system-wide effect, the caller must have a capability, CAP_BLOCK_SUSPEND (initially misnamed CAP_EPOLLWAKEUP).
In the initial EPOLLWAKEUP implementation, if the caller did not have the CAP_BLOCK_SUSPEND capability, then epoll_ctl() returned an error so that the caller was informed of the problem. However, Jiri Slaby reported that the new flag caused a regression: an existing program failed because it was setting formerly unused bits in epoll_event.events when calling epoll_ctl(). When one of those bits acquired a meaning (as EPOLLWAKEUP), the call failed because the program lacked the required capability. The problem of course is that epoll_ctl() has never checked the flags in epoll_event.events to ensure that the caller has specified only flag bits that are actually implemented in the kernel. Consequently, applications were free to pass random garbage in the unused bits.
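The missing check is simple to state in code. This user-space sketch (with made-up flag values, not the real EPOLL* constants) rejects any bits that the implementation does not define:

```c
#include <errno.h>

/* Illustrative flag values; not the real EPOLL* constants. */
#define F_IN		0x001
#define F_OUT		0x004
#define F_WAKEUP	0x200	/* a later addition, like EPOLLWAKEUP */

#define KNOWN_FLAGS	(F_IN | F_OUT | F_WAKEUP)

static int validate_flags(unsigned int flags)
{
	if (flags & ~KNOWN_FLAGS)
		return -EINVAL;	/* unknown bits: fail loudly */
	return 0;
}
```

With such a check in place from day one, applications passing garbage bits would have failed immediately, and later giving a meaning to a new bit such as EPOLLWAKEUP could not have broken them.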
When one of those random bits suddenly caused the application to fail, what should be done? Following the logic outlined above, of course the answer is that the kernel must change. And that is exactly what happened in this case. A patch was applied so that if the EPOLLWAKEUP flag was specified in a call to epoll_ctl() and the caller did not have the CAP_BLOCK_SUSPEND capability, then epoll_ctl() silently ignored the flag instead of returning an error. Of course, in this case, the calling application might easily carry on, unaware that the request for EPOLLWAKEUP semantics had been ignored.
One might observe that there is a certain arbitrariness about the approach taken to dealing with the EPOLLWAKEUP breakage. Taken to the extreme, this type of logic would say that the kernel can never add new flags to APIs that didn't hitherto check their bit-mask arguments—and there is a long list of such system calls (mmap(), splice(), and timer_settime(), to name just a few). Nevertheless, new flags are added. So, for example, Linux 2.6.17 added the epoll event flag EPOLLRDHUP, and since no one complained about a broken application, the flag remained. It seems likely that the same would have happened for the original implementation of EPOLLWAKEUP that returned an error when CAP_BLOCK_SUSPEND was lacking, if someone hadn't chanced to make an error report.
As an aside to the previous point, in cases where someone reports a regression after an API change has been officially released, there is a conundrum. On the one hand, there may be old applications that depend on the previous behavior; on the other hand, newer applications may already depend on the newly implemented change. At that point, there is no simple remedy: to fix things almost certainly means that some applications must break.
We can conclude with two observations, one specific, and the other more general. The specific observation is that, ironically, EPOLL_CTL_DISABLE itself seems to have had surprisingly little review before being accepted into the 3.7 merge window. And in fact, now that more attention has been focused on it, it looks as though the proposed API will see some changes. So, we have a further, very current, piece of evidence that there is still insufficient review of kernel-user-space APIs.
More generally, the problem seems to be that—while the kernel code gets reviewed on many dimensions—it is relatively uncommon for kernel-user-space APIs to be reviewed on their own merits. The kernel has maintainers for many subsystems. By now, the time seems ripe for there to be a kernel-user-space API maintainer—someone whose job it is to actively review and ack every kernel-user-space API change, and to ensure that test cases and sufficient documentation are supplied with the implementation of those changes. Lacking such a maintainer, it seems likely that we'll see many more cases where kernel developers add badly designed APIs that cause years of pain [PDF] for user-space developers.
As is generally the case when realtime Linux developers get together, the discussion soon turns to how (and when) to get the remaining pieces of the realtime patch set into the mainline. That was definitely the case at the 2012 realtime minisummit, which was held October 18 in conjunction with the 14th Real Time Linux Workshop (RTLWS) in Chapel Hill, North Carolina. Some other topics were addressed as well, of course, and a lively discussion, which Thomas Gleixner characterized as "twelve people sitting around a table not agreeing on anything", ensued. Gleixner's joke was just that, as there was actually a great deal of agreement around that table.
I unfortunately missed the first hour or so of the minisummit, so I am using Darren Hart's notes, Gleixner's recap for the entire workshop on October 19, and some conversations with attendees as the basis for the report on that part of the meeting.
The first topic was the use of Bugzilla to track bugs in the realtime patches. Hart and Clark Williams have agreed to shepherd the Bugzilla entries, helping to ensure that bug reports carry useful information and provide the pieces the developers need to track the problems down. Bugs can now be reported to the kernel Bugzilla using PREEMPT_RT for the "Tree" field. Doing so will send an email to developers who have registered their interest with Hart.
Gleixner has "mixed feelings" about it because it involves "web browsers, mouse clicks and other things developers hate". Previously, the normal way to report a bug was via the realtime or kernel mailing lists, but Bugzilla does provide a way to attach large files (e.g. log files) to bugs, which may prove helpful. The realtime hackers will know better in a year how well Bugzilla is working out and will report on it then, he said.
There was general agreement that the development process for realtime is working well. Currently, Gleixner is maintaining a patch set based on 3.6, which will be turned over to Steven Rostedt when it stabilizes. Rostedt then follows the mainline stable releases and is, in effect, the stable "team" for realtime. Those stable kernels are the ones that users and distributions generally base their efforts on. In the future, Gleixner has plans to update his 3.6-rt tree with incremental patches that have already been merged into other stable realtime kernels (3.0, 3.2, 3.4) to keep it closer to the mainline 3.6 stable release.
There was some discussion of the long-term support initiative (LTSI) kernels and what relationship those kernels have with the realtime stable kernels. The answer is: not much. LTSI plans to have realtime versions of its kernels, but when Hart suggested aligning the realtime kernel versions with those of LTSI, it was not met with much agreement. Gleixner said that the LTSI kernels would likely be supported for years, "probably decades", which is well beyond the scope of what the realtime developers are interested in doing.
One of the topics that came up frequently as part of both the workshop/minisummit and the extensive hallway/micro-brewery track was Gleixner's softirq processing changes released in 3.6-rt1. The locks for the ten different softirq types have been separated so that the softirqs raised in the context of a thread can be handled in that thread—without having to handle unrelated softirqs. This solves a number of problems with softirq handling (victimizing unrelated threads to process softirqs, configuring separate softirq thread priorities to get the desired behavior, etc.), but is a big change from the existing mainline implementation—as well as from previous realtime patch sets.
In the minisummit, Gleixner emphasized that more testing of the patches is needed. Networking, which is the most extensive user of softirqs in the kernel, needs more testing in particular. But the larger issue is the possibility of eventually eliminating softirqs in the kernel completely. To that end, each of the specific kernel softirq-using subsystems was discussed, with an eye toward eliminating the softirq dependency for both realtime and mainline.
The use of softirqs in the network subsystem is "massive" and even the network developers are not quite sure why it all works, according to Gleixner. But, softirqs seem to work fine for Linux networking, though the definition of "working" is not necessarily realtime friendly. If the kernel can pass the network throughput tests and fill the links on high-speed test hardware, then it is considered to be working. Any alternate solution will have to meet or exceed the current performance, which may be difficult.
The block subsystem's use of softirqs is mostly legacy code. Something like 90% of the deferred work has been shifted to workqueues over the years. Eliminating the rest won't be too difficult, Gleixner said.
The story with tasklets is similar. They should be "easy to get rid of", he said, it will just be a lot of work. Tasklets are typically used by legacy drivers and are not on a performance-critical path. Tasklet handling could be moved to its own thread, Rostedt suggested, but Gleixner thought it would be better to eliminate them entirely.
The timer softirq, which is used for the timer wheel (described and diagrammed in this LWN article), is more problematic. The timer wheel is mostly used for timeouts in the network stack and elsewhere, so it is pretty low priority. It can't run with interrupts disabled in either the mainline or the realtime kernel, but it has to run somewhere, so pushing it off to ksoftirqd is a possibility.
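The timer wheel's appeal is that arming and canceling a timeout are constant-time operations, which matters because most network timeouts are canceled before they ever fire. A minimal sketch of the idea (a single-level wheel; the kernel's real implementation is hierarchical, and all names here are mine):

```python
import collections

class TimerWheel:
    """A deliberately simplified, single-level timer wheel: a timeout
    is hashed into a bucket by its expiry tick, so insertion and
    cancellation are O(1)."""

    def __init__(self, size=256):
        self.size = size
        self.buckets = [collections.deque() for _ in range(size)]
        self.current_tick = 0

    def add(self, ticks_from_now, callback):
        # Hash the expiry time into a bucket.
        slot = (self.current_tick + ticks_from_now) % self.size
        self.buckets[slot].append(callback)

    def tick(self):
        """Advance one tick and fire everything in the current bucket."""
        self.current_tick = (self.current_tick + 1) % self.size
        bucket = self.buckets[self.current_tick]
        self.buckets[self.current_tick] = collections.deque()
        for cb in bucket:
            cb()

fired = []
wheel = TimerWheel()
wheel.add(3, lambda: fired.append("timeout A"))
wheel.add(1, lambda: fired.append("timeout B"))
for _ in range(3):
    wheel.tick()
print(fired)   # B fires at tick 1, A at tick 3
```

The open question discussed at the minisummit is not the data structure itself but *where* the `tick()` work runs when it cannot run with interrupts disabled.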
The high-resolution timers softirq is mostly problematic because of POSIX timers and their signal-delivery semantics. Determining which thread should be the "victim" to deliver the signal to can be a lengthy process, so it is not done in the softirq handler in the realtime patches as it is in mainline. One solution that may be acceptable to mainline developers is to set a flag in the thread which requested the timer, and allow it to do all of the messy victim-finding and signal delivery. That would mean that the thread which requests a POSIX timer pays the price for its semantics.
Williams asked if users were not being advised to avoid signal-based timers. Gleixner said that he tells users to "use pthreads". But, "customers aren't always reasonable", Frank Rowand observed. He pointed out that some he knows of are using floating point in the kernel, and now that they have hardware floating point want to add that context to what is saved during context switches. Paul McKenney noted that many processors have lots of floating point registers, which can add "multiple hundreds of microseconds" to save or restore. Similar problems exist for the auto-vectorization code that is being added to GCC, which will result in many more registers needing to be saved.
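The "use pthreads" advice amounts to: take expiry in a dedicated thread rather than via signal delivery, so the thread that wants the timer pays its cost and no "victim" thread has to be found. A user-space sketch of that pattern (using Python's threading primitives as a stand-in for the pthreads equivalent):

```python
import threading

# Expiry runs in its own thread; nothing has to hunt for a thread
# to deliver a signal to.
expired = threading.Event()

def on_expiry():
    # Whatever work the timer triggers happens in this thread's
    # context, not in signal-delivery context.
    expired.set()

t = threading.Timer(0.05, on_expiry)   # fire in 50ms
t.start()
expired.wait(timeout=2)
print("timer fired:", expired.is_set())
```

The C equivalent is `timer_create()` with `SIGEV_THREAD` (or a plain thread calling `clock_nanosleep()`) instead of `SIGEV_SIGNAL`.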
Back to the softirqs, McKenney said that the read-copy-update (RCU) work had largely moved to threads in 3.6, but that not all of the processing moved out of the softirq. He had tried to completely move out of softirq in a patch a ways back, but Linus Torvalds "kicked it out immediately". He has some ideas of ways to address those complaints, though, so eliminating the RCU softirq should be possible.
Finally, the scheduler softirq does "nothing useful that I can see", Gleixner said. It mostly consists of heuristics to do load balancing, and Peter Zijlstra may be amenable to moving it elsewhere. Mike Galbraith pointed out that the NUMA scheduling work will make the problem worse, as will power management. ARM's big.LITTLE scheduling could also complicate things, Rowand said.
There is a great deal of interest in getting those changes into the 3.2 and 3.4 realtime kernels. Later in the meeting, Rostedt said that he would create an unstable branch of those kernels to facilitate that. The modifications are "pretty local", Gleixner said, so it should be fairly straightforward to backport the changes. In addition, it is unlikely that backports of other fixes into the mainline stable kernels (which are picked up by the realtime stable kernels) will touch the changed areas, so the ongoing maintenance should not be a big burden.
Gleixner said that he is "swamped" by a variety of tasks, including stabilizing the realtime tree, the softirq split, and a "huge backlog" of work that needs to be done for the CPU hotplug rework. Part of the latter was merged for 3.7, but there is lots more to do. Rusty Russell has offered to help once Gleixner gets the infrastructure in place, so he needs to "get that out the door". Beyond that, he also spends a lot of time tracking down bugs found by the Open Source Automation Development Lab (OSADL) testing and from Red Hat bug reports.
He needs some help from the other realtime kernel developers in order to move more of the patch set into the mainline. Those in the room seemed very willing to help. The first step is to go through all of the realtime patches and work on any that are "halfway reasonable to get upstream" first.
One of the top priorities for upstreaming is not a kernel change, but is a change needed in the GNU C library (glibc). Gleixner noted that the development process for glibc has gotten a "lot better" recently and that the new maintainers are doing a "great job". That means that a longstanding problem with condvars and priority inheritance may finally be able to be addressed.
When priority inheritance was added to the kernel, Ulrich Drepper wrote the user-space portion for glibc. He had a solution for the problem of condvars not being able to specify that they want to use a priority-inheriting mutex, but that solution was one that Gleixner and Ingo Molnar didn't like, so nothing was added to glibc.
Three years ago, Hart presented a solution at the RTLWS in Dresden, but he was unable to get it into glibc. It is a real problem for users according to Gleixner and Williams, so Hart's solution (or something derived from it) should be merged into glibc. Hart said he would put that at the top of his list.
Another area that should be fairly easy to get upstream is the set of changes to the SLUB allocator that make it work with the realtime code. SLUB developer Christoph Lameter has done some work to make the core allocator lockless and for it not to disable interrupts or preemption. Lameter's work was mostly to support enterprise users on large NUMA systems, but it should also help make SLUB work better with realtime.
If SLUB can be made to work relatively easily, Gleixner would be quite willing to drop support for SLAB. The SLOB allocator is targeted at smaller, embedded systems, including those without an MMU, so it is not an interesting target. Besides which, SLOB's "performance is terrible", Rostedt said. During the minisummit, Williams was able to build and boot SLUB on a realtime system, which "didn't explode right away", Gleixner reported in the recap. That, coupled with SLUB's better NUMA performance, may make it a much better target anyway, he said.
Switching to SLUB might also get rid of a whole pile of "intrusive changes" in the memory allocator code. The realtime memory management changes will be some of the hardest to sell to the upstream developers, so any reduction in the size of those patches will be welcome.
There are a number of places where drivers call local_irq_save() and local_irq_enable() that have been changed in the realtime tree to call *_nort() variants. There are about 25 files that use those variants, mostly drivers designed for uniprocessor machines that have never been fixed for multiprocessor systems. No one really cares about those drivers any more, Gleixner said, so the _nort changes can either go into mainline or be trivially maintained out of it.
Bit spinlocks (i.e. single bits used as spinlocks) need to be changed to support realtime, and that can probably be sold because it would add lockdep coverage. Right now, bit spinlocks are not checked by lockdep, which is a debugging issue. In converting bit spinlocks to regular spinlocks, Gleixner said he found 3-4 locking bugs in the mainline, so it would be beneficial to have a way to check them.
The problem is that bit spinlocks are typically using flag bits in size-constrained structures (e.g. struct page). But, for debugging, it will be acceptable to grow those structures when lockdep is enabled. For realtime, there is a need to just "live with the fact that we are growing some structures", Gleixner said. There aren't that many bit spinlocks; two others that he mentioned were the buffer head lock and the journal head lock.
Hart brought up the sleeping spinlock conversion, but Gleixner said that part is the least of his worries. Most of the annotations needed have already been merged, as have the header file changes. The patches are "really unintrusive now", though it is still a big change.
The CPU hotplug rework should eliminate most of the changes required for realtime once it gets merged. The migrate enable and disable patches are self-contained. The high-resolution timers changes and softirq changes can be fairly self-contained as well. Overall, getting the realtime patches upstream is "not that far away", Gleixner said, though some thought is needed on good arguments to get around the "defensive list" of some mainline developers.
To try to ensure they hadn't skipped over anything, Williams put up a January email from Gleixner with a "to do" list for the realtime patches. There are some printk()-related issues that were on the list. Gleixner said those still linger, and it will be "messy" to deal with them.
Zijlstra was at one time opposed to the explicit migrate enable/disable calls, but that may not be true anymore, Gleixner said. The problem may be that there will be a question of who uses the code when trying to get the infrastructure merged. It is a "hen-egg problem", but there needs to be a way to ensure that processes do not move between CPUs, particularly when handling per-CPU variables.
In the mainline, spinlocks disable preemption (which disables migration), but that's not true in realtime. The current mainline behavior is somewhat "magic", and realtime adds an explicit way to disable migration if that's truly what's needed. As Paul Gortmaker put it, "making it explicit is an argument for it in its own right". Gleixner said he would talk to Zijlstra about a use case and get the code into shape for mainline.
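User space already has an explicit knob for the related problem of keeping a task on one CPU: CPU affinity. As a rough, Linux-only illustration (in-kernel `migrate_disable()` serves a similar purpose for much shorter critical sections, and is not exposed to user space):

```python
import os

# Pin this process to CPU 0 so the scheduler cannot migrate it.
# This is the heavyweight user-space analogue of the explicit
# migration control the realtime tree adds inside the kernel.
os.sched_setaffinity(0, {0})       # 0 means "the calling process"
print(os.sched_getaffinity(0))     # the allowed-CPU set is now {0}
```

The realtime argument is the same one Gortmaker made: an explicit call documents intent, where "spinlock held, therefore unmigratable" is implicit and easy to get wrong.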
Gortmaker asked if there were any softirq uses that could be completely eliminated. McKenney believes he can do so for the RCU softirq, but he does have the reservation that he has never successfully done so in the past. High-resolution timers and the timer wheel can both move out of softirqs, Gleixner said, though the former may be tricky. The block layer softirq work can be moved to workqueues, but the network stack is the big issue.
One possible solution for the networking softirqs is something Rostedt calls "ENAPI" (even newer API, after "NAPI", the new API). When using threaded interrupt handlers, the polling that is currently done in the softirq handler could be done directly in the interrupt handler thread. If that works, and shows a performance benefit, Gleixner said, the network driver writers will do much of the work on the conversion.
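The shape of that idea — the handler thread polls for more work, up to a budget, before yielding, rather than bouncing completion work through a softirq — can be sketched generically (all names here are mine; this is not network-driver code):

```python
import queue
import threading

def irq_thread(rx, handled, budget=4):
    """NAPI-style processing done directly in the handler thread:
    drain the device queue in batches, up to a budget per pass."""
    while True:
        pkt = rx.get()           # block until the "interrupt" fires
        if pkt is None:          # shutdown sentinel
            return
        batch = [pkt]
        while len(batch) < budget:   # poll for more work, bounded
            try:
                nxt = rx.get_nowait()
            except queue.Empty:
                break
            if nxt is None:
                handled.extend(batch)
                return
            batch.append(nxt)
        handled.extend(batch)

rx = queue.Queue()
handled = []
for i in range(6):
    rx.put(i)
rx.put(None)
t = threading.Thread(target=irq_thread, args=(rx, handled))
t.start()
t.join()
print(handled)   # all six packets, processed in two budgeted passes
```

The budget is what keeps one busy device from monopolizing the thread — the same fairness concern that NAPI's softirq-based polling addresses today.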
Wait queues are another problem area. While most are "pretty straightforward", there are some where the wait queue has a callback that is called on wakeup for every woken task. Those callbacks could do most anything, including sleep, which prevents those wait queues from being converted to use raw locks. Lots of places can be replaced, but not for "NFS and other places with massive callbacks", Gleixner said.
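The structural problem is easy to see in miniature: the wake-up path runs each waiter's callback while holding the queue's internal lock, so if any callback can sleep, that lock cannot become a raw (non-sleeping) spinlock. A hypothetical sketch:

```python
import threading

class WaitQueue:
    """Toy wait queue with per-waiter wake callbacks. The internal
    lock is held while the callbacks run; if a callback can sleep
    (as the NFS ones can), that lock must itself be sleepable."""

    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = []

    def add_waiter(self, callback):
        with self._lock:
            self._waiters.append(callback)

    def wake_all(self):
        with self._lock:               # callbacks run under the lock
            woken = [cb() for cb in self._waiters]
            self._waiters.clear()
        return woken

wq = WaitQueue()
wq.add_waiter(lambda: "task A woken")
wq.add_waiter(lambda: "task B woken")
print(wq.wake_all())
```

Simple wait queues, whose wake-up path only marks a task runnable, have no such constraint — which is why "most are pretty straightforward" to convert.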
There are a number of pieces that should be able to go into mainline largely uncontended. Code to shorten the time that locks are held and to reduce the interrupt and preempt disabled regions is probably non-controversial. The _nort annotations may also fall into that category as they don't hurt things in mainline.
The final item on the day's agenda is a feature that is not part of the realtime patches, but is of interest to many of the same users: CPU isolation. That feature, which is known by other names such as "adaptive NOHZ", would allow users to dedicate one or more cores to user-space processing by removing all kernel processing from those cores. Currently, nearly all processing can be moved to other cores using CPU affinity, but there is still kernel housekeeping (notably CPU time accounting and RCU) that will run on those CPUs.
Frédéric Weisbecker has been working on CPU isolation, and he attended the minisummit at least partly to give an update on the status of the feature. Accounting for CPU time without the presence of the timer tick is one of the areas that needs work. Users still want to see things like load averages reflect the time being spent in user-space processing on an isolated CPU, but that information is normally updated in the timer tick interrupt.
In order to isolate the CPU, though, the timer tick needs to be turned off. In the recap, Gleixner noted that the high-performance computer users of the feature aren't so concerned about the time spent in the timer tick (which is minimal), but the cache effects from running the code. Knocking data and instructions out of the cache can result in a 3% performance hit, which is significant for those workloads.
To account for CPU time usage without the tick, adaptive NOHZ will use the same hooks that RCU uses to calculate the CPU usage. While the CPU is isolated, the CPU time will just be calculated, but won't be updated until the user-space process enters the kernel (e.g. via a system call). The tick might be restarted when system calls are made, which will eventually occur so that the CPU-bound process can report its results or get new data. Restarting the tick would allow the CPU accounting and RCU housekeeping to be done. Weisbecker felt that it should only be restarted if it was needed for RCU; even that might possibly be offloaded to a separate CPU.
That led to a discussion of what restrictions there are on using CPU isolation. There was talk of trying to determine which system calls will actually require restarting the tick, but that was deemed too kernel-version specific to be useful. The guideline will be that running anything beyond a single thread that makes no system calls on the CPU may result in less than 100% of the CPU being available. Gleixner suggested adding a tracepoint that would indicate when the CPU exited isolated mode and why. McKenney suggested a warning like "this code needs a more deterministic universe"—to some chuckles around the table. Weisbecker and Rostedt plan to work on CPU isolation in the near term, with an eye toward getting it upstream soon.
And that is pretty much where the realtime minisummit ended. While there is plenty of work still to do, it is clear that there is increasing interest in "finishing" the task of getting the realtime changes merged. Gleixner confessed to being tired of maintaining it in the recap session, and that feeling is probably shared by others. Given that the mainline has benefited from the realtime changes already merged, it seems likely that will continue as more goes upstream. The trick will be in convincing the other kernel hackers of that.
[ I would like to thank OSADL and RTLWS for supporting my travel to the minisummit and workshop. ]
Page editor: Jonathan Corbet
Back in May, we reported on a longstanding Debian technical committee "bug" that proposed replacing the Debian Python maintainer. The bug has existed since March 2010, and the problem of dysfunctional communication among the maintainers of various Python packages goes back further than that. It looked like things might be coming to a head in May, but the problem was finally resolved—or at least the technical committee rendered a judgment—on October 5.
It is not clear what precipitated the restart of the technical committee vote process, though a thread on hijacking packages in debian-devel at the same time may have been a partial trigger. In any case, Don Armstrong drafted a resolution for consideration on September 27. A few comments were made, which were then addressed by Armstrong, who called for a vote on October 4. The vote was quickly resolved in favor of not replacing Matthias Klose, the Debian Python maintainer.
While the vote was lopsided, it was hardly a ringing endorsement for Klose or his communication style. The other two options (besides the ever-present "further discussion") were to turn over the Python package to two different teams led by new maintainers: Sandro Tosi, who filed the original bug, or Jakub Wilk, who volunteered back in April. The choice for leaving Klose as the maintainer includes an additional clause that suggests a change:
The resolution also contained a bit of a recap of the problems, at least from the perspective of the committee. The communication difficulties reached a point some time ago that Klose has essentially stopped posting to the mailing list (and, for that matter, never posted any kind of response or clarification in the bug). Those problems stem from a feedback loop spelled out in the resolution. Essentially, flames posted about the motives of various participants led them to withdraw from communicating, which resulted in more flames (and, eventually, the resolution). The resolution notes: "Neither the inflammatory comments, nor the lack of response are acceptable outcomes."
It has been clear over the two and a half years this has gone on that the committee members are rather disappointed that they are faced with the issue. It is not just that they have to make a hard decision that "will appear to validate one problematic behavior or the other", but also that the parties involved couldn't resolve it on their own.
It is a bit unclear what the result of the decision will be. Wilk is still active on the debian-python mailing list, while Tosi hasn't posted in more than five months. Neither may be a long-term indicator of their plans regarding Python in Debian. After initially being resistant to a forcible change of maintainer, Wilk changed his mind on August 10, which is how his name ended up on the resolution ballot. That may also have helped spur a final tech committee vote on the issue.
But, moving forward, the committee did make a request to address one of the bigger problems that was cited when the bug was first filed. Because of the lack of communication, changes to Python packaging that affected other related packages were not being announced. In a clause that would have been present no matter how the vote went, the resolution covers that problem:
It is certainly a sticky situation when maintainers of related packages cannot seem to get along. Debian is famously maintainer-oriented, giving those maintainers wide latitude in how they handle "their" packages. That was likely a significant factor in the decision-making process. But, even with the vote, things may not really be resolved and the committee may need to get involved again. One hopes not, but the resolution doesn't necessarily change anything.
> I'm assuming that since we're now having dozen-post threads about
> whether or not to have periods in our descriptions that all the hard
> stuff has been solved?
Perhaps in planning QA, for now, but certainly not in executing it. Documentation is lacking, too, and then some people still wouldn't read it anyway, and with more EAPI changes we keep adding more ways to screw it up, so please stand still while we reinvent the wheels on your desk chair.
Ubuntu 12.10 "Quantal Quetzal" has been released. Desktop, Server, Cloud, and Core images are available for download now; the official variants (Kubuntu, Xubuntu, Edubuntu, Lubuntu, and Ubuntu Studio) are expected to follow later today. Among the changes in this edition, of course, is support for UEFI secure boot.
Page editor: Rebecca Sobol
This article looks at harmonySEQ and SoftWerk, an interesting pair of MIDI sequencers. These programs record and play only MIDI data — they have no integrated audio capabilities — and they favor interactive realtime composition by combining looping musical patterns. Both programs build their user interfaces with GTK, but we'll see how each program's interface is designed in accord with its fundamental approach to MIDI sequencing.
The standalone MIDI sequencer has been superseded by the audio/MIDI digital audio workstation, but there's life yet in the old design. Many composers work only with MIDI-based systems — MIDI favors discrete note-like events, what my composition teacher called the "pitches & rhythm" model — and despite its age MIDI is still a powerful force throughout the music industry. Thus, interest in the specification remains high and development continues.
Throughout this article I use terminology specific to MIDI. If you don't know anything about MIDI I suggest that you read the excellent Wikipedia summary for some relevant background. In brief, MIDI is a specification for a common messaging protocol and hardware implementation for devices manufactured by different companies. The messaging protocol includes various commands that instruct a receiving device — hardware or software — to perform actions such as sounding a note with a particular loudness ("velocity" in MIDI parlance) on a specific instrument ("program change").
The MIDI message format is flexible; for example, the note-on message format includes values for pitch and loudness, while the program change sends only a value for instrument selection. Most messages also include a channel value that assigns the message to one of sixteen MIDI channels. Thus, if the MIDI-compliant device (e.g. a synthesizer) receives a note-on message on its configured channel it should play a note with the specified pitch and loudness levels. The receiver stops playing when it receives a message to cease: either a note-off message or a note-on with a loudness of zero.
The note-on message described above is one of the common MIDI messages. The specification also provides message formats for data streams from pitch benders, keyboard aftertouch, and modulation wheels, devices known as "continuous controllers." System-exclusive (a.k.a. "sysex") messages have a special format with nothing defined beyond an initial identifier and a message end point. The identifier tells the receiver what kind of device it expects — a Yamaha SY99, for example. If the receiver is the intended target then it will process the data between the start and end markers. Typically a sysex message contains device-specific data not carried by standard MIDI messages. Often it is the only means of reaching certain parameters of a device, but sysex messages must be used with some caution. MIDI is a serial protocol, and sysex messages are variable in length. They should be placed carefully in the data stream to avoid delays in the timing of other messages.
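The message formats described above are small enough to build by hand. A minimal sketch (the helper names are mine; 0x43 is used here simply as an example manufacturer ID for the sysex frame):

```python
def note_on(channel, pitch, velocity):
    """Build a 3-byte note-on: status byte 0x9n (n = channel),
    then two 7-bit data bytes for pitch and velocity."""
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

def note_off(channel, pitch):
    # As described above, a note-on with velocity zero is a
    # commonly used way to end a note.
    return note_on(channel, pitch, 0)

def sysex(manufacturer_id, payload):
    """Frame device-specific data between the 0xF0 start marker
    and the 0xF7 end marker."""
    return bytes([0xF0, manufacturer_id]) + bytes(payload) + bytes([0xF7])

msg = note_on(0, 60, 100)            # middle C on channel 1
print(msg.hex())                     # 903c64
print(sysex(0x43, [1, 2, 3]).hex())  # f043010203f7
```

The variable length of the sysex frame is visible here too: the receiver only knows the message has ended when 0xF7 arrives, which is why long sysex dumps can delay the timing of other messages on the same serial stream.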
MIDI was designed originally to connect synthesizers from various manufacturers, but it quickly found its way to the personal computers of the late 1980s. In turn, the functions of the hardware synthesizers migrated to the computer, until at last we arrive at the present day's primacy of the software synthesizer. Nevertheless, MIDI retains its viability for internal as well as external connectivity, and it shows no signs of losing its place as an industry-wide standard.
It is important to understand that MIDI is not an audio format. As described above, the MIDI stream itself is a simple switching protocol, common MIDI messages merely instruct a sound-producing instrument to produce an audible output. Thanks to the nature of the message format MIDI can also be employed to control light displays, audio mixer elements, karaoke visuals, and other hardware and software targets that have little or nothing to do with making sound.
Neither sequencer includes a built-in sound source, so you'll need a software synthesizer and/or an external hardware synthesizer, and you'll need a way to connect the output from the sequencer to the synthesizer's input. Minimal requirements for external connections include an interface with MIDI ports, a MIDI-equipped synthesizer, and the cables to connect the two. Internal connections are considerably easier to make. Install one of the excellent native Linux synthesizers — QSynth is my default selection — then start JACK, preferably with QJackCtl. Start your synth, open your sequencer, then connect everything — audio and MIDI — in QJackCtl's connections panels. Now you can start having some fun with these sequencers.
A few more words about MIDI connections and JACK: QJackCtl's connections panel displays tabs for MIDI and for ALSA, a possibly confusing arrangement. Both are MIDI connection tabs, but the first is for clients written for connection via JackMIDI, a sample-accurate MIDI transfer mechanism. The MIDI panel is also where connections will appear if you've selected the raw or seq MIDI driver in the QJackCtl Setup dialog. However, those drivers have been deprecated in favor of the a2jmidi bridge software. The ALSA tab provides plain vanilla ALSA connectivity. I use JackMIDI whenever possible, but the ALSA tab is still helpful where there is no special need for absolute timing accuracy or when I'm working in a non-JACK environment.
Rafal Cieslak's harmonySEQ is a pattern-based MIDI sequencer with some unique features. Certain aspects of its design remind me of Dr. T's KCS, a keyboard-controlled sequencer that permits the creation of sequencers within sequencers. In harmonySEQ each sequencer has its own event pattern and each pattern has its own meter and duration. Transposition, mute/solo, pattern edits, and other aspects of a sequence can be changed during performance, and a flexible events-action system lets the user trigger sequencers through keypresses, MIDI notes, and other user-assigned operations. Of course, all your work can be saved and reloaded for editing and replay.
The current public release of harmonySEQ is version 0.16 with installable packages available for Ubuntu, OpenSUSE, and Fedora. Suitably motivated users can build the program from the release source package or from its Launchpad repository. After installation you can start harmonySEQ from a shell prompt or by clicking on its start-up icon.
HarmonySEQ opens first to an empty display. Click on the "Add Note Sequence" icon (identified by its tooltip, there is no corresponding menu item) to create the new sequence display seen to the right. The harmonySEQ UI is divided into five sections — the top menu bar, an icon bar, the sequencers list, the sequencer properties panel, and the event-action list. Tooltips will appear by default, and I suggest leaving them on until you're familiar with the program. Documentation is non-existent at this point in harmonySEQ's development, but the tooltips, the clarity of the GUI, and a little experimentation should provide enough information to get started.
Basic operation of the program is straightforward. MIDI events are entered into the pattern grid in the "Sequencer Properties" panel by recording from an external MIDI controller, a virtual keyboard, or by left-clicking on a box in the grid (right-clicking removes an entry). Sequences may be polyphonic, provided that your synthesizer(s) can handle the note density. A looping sequence can be toggled on or off, or you can play it through in a single pass. All sequence parameters can be edited in realtime, including timing and rhythmic elements.
HarmonySEQ also provides a GUI for creating control sequencers; these consist of graphic control curves for MIDI continuous controllers that are applied to synthesis parameters such as filters and low-frequency oscillators. This feature is a powerful aid when making music that employs filter sweeps and other dynamic effects that require precise placement. Most sequencers include some kind of GUI for making controller curves, but harmonySEQ's is the simplest and most effective implementation I've encountered.
You can play your sequences by clicking on the "Sequence On/Off" toggles in the main display, but to really tap into harmonySEQ's power you'll want to investigate the "Event/Action" dialog. This panel lets you assign keypresses or MIDI messages to various actions that control sequence playback. For example, in the example to the right I've assigned keys to toggle playback of each sequence. I can trigger sequences in any order or combination I want, and I can press multiple keys to trigger groups of sequences.
The "Event/Action" facility is a great tool for improvisation and performance, but it has its problems. You must be sure that no other panel is active, otherwise your keystrokes will be entered into number boxes or other places where their effect may be undesirably surprising. Not every key assignment behaves as advertised, and sometimes the program throws a stuck note into the works. (Thank goodness for QSynth's Panic button.)
As a MIDI-only sequencer, harmonySEQ includes no built-in instruments. As shown to the right, my usual setup employs QJackCtl to connect harmonySEQ to QSynth loaded with a soundfont compatible with the General MIDI Specification. QSynth's output is routed through the CAPS Versatile Plate Reverb LADSPA plugin in the JACK Rack — I prefer the plugin over QSynth's on-board effect — and the rack's output is sent to the system's audio out ports. A more experimental setup routes harmonySEQ into the Festige launcher for Windows VST/VSTi plugins, with or without the effects in the JACK Rack.
To get an idea of what can be done with harmonySEQ I recommend listening to some of the demos on its Web site. In particular, the examples by Louigi Verona show off what can be done with this little gem. HarmonySEQ has some features that blend nicely with Louigi's musical inclinations.
HarmonySEQ has great potential, and Rafal has indicated that he has plans for further versions. Unfortunately development is frozen until he finds the time and/or assistance to carry the project forward. Meanwhile, harmonySEQ is useful now, the source code remains available, and collaboration is welcome at all levels.
Many years ago the Doepfer Musikelektronik company manufactured a hardware MIDI sequencer called the Schaltwerk. The unit was a powerful tool for composing with looping patterns. The Doepfer information page describes the Schaltwerk as:
Predestined it may have been, but if you want one now the Schaltwerk is available only as vintage gear, rather hard to find, with reported prices ranging from US$400 to $1600. Considering the Schaltwerk's scarcity and cost, perhaps you should look instead at Paul Davis's SoftWerk, a software emulation of the Doepfer hardware's capabilities with some very fine features of its own. It's also much easier to find and costs $0, a considerable savings.
I say it's "easier" to find, but it's unlikely that you'll discover SoftWerk in your Linux distribution's official packages. I found a PKGBUILD for it in the Arch users' repository but not in the package lists for my Debian Squeeze and Ubuntu systems. Fortunately SoftWerk is light on dependencies and easy to build and install. A source tarball is available at http://ardour.org/files/softwerk-3.0.tar.bz2 and a recommended patch can be downloaded from the same location. Apparently the patch isn't absolutely necessary — an unpatched SoftWerk builds and runs without trouble on my Debian machine — but if you have problems building the program on an up-to-date system, apply the patch and rebuild.
After installation, enter softwerk at the shell prompt. If all goes well, the program will open to its default state with eight tracks ready and waiting for your input. Make your audio and MIDI connections in the same manner described above for harmonySEQ, and the fun will begin.
Like its hardware model, SoftWerk is styled after a pre-MIDI analog sequencer, with no piano-roll display. Instead, a series of values is entered into a track-like display, then rhythm rules are applied to the series — i.e. the sequence — to create a looping phrase. The loop may be directed to repeat itself in a variety of ways, such as playback straight through from start to finish, playing from beginning to end then reversing the series from end to beginning (also called ping-ponging), and playback in random order. Empty steps can be added to the pattern to emulate musical rests, track and sequence length can be adjusted as needed, and everything that can be controlled is controllable in realtime.
The unfortunately out-of-date SoftWerk home page includes a single page of documentation with some advice about using the mouse and keyboard to facilitate numerical entry. Beyond that useful information you're on your own. Fortunately SoftWerk's GUI is easy to learn. The "Mode Selector" defines the type of output message assigned to the sequenced events. By default, it's set to Relative Pitch, but the selection includes a variety of MIDI messages, including note on/off, pitch-bend, program change, and continuous controller values. Sequence event values may be entered by hand, or you can choose a scale from the "Mode Fill" menu (it includes a random fill). I have a lot of fun with one track defined to produce notes and another defined to create random program changes. Both tracks are assigned to the same MIDI channel, and each runs with its own timing. I can redefine the program change values — or any other value displayed — at will in realtime, singly or for the entire group.
Global tempo is set with the tempo slider. Each track regulates its events in ticks per beat rather than the conventional beats per measure. The method is simple to learn, yet it permits very complex rhythmic relationships between tracks. As its author writes, "SoftWerk is specifically designed to accommodate such structures."
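To see why per-track ticks-per-beat produces complex cross-rhythms, consider two tracks running against the same global tempo. This hypothetical sketch (my own, not SoftWerk code) computes each track's event times in seconds; a 3-tick track against a 4-tick track yields a classic 3-against-4 polyrhythm:

```python
def tick_times(tempo_bpm, ticks_per_beat, beats=1):
    """Return event times in seconds for a track that fires
    ticks_per_beat evenly spaced events in every beat."""
    beat_len = 60.0 / tempo_bpm
    return [beat * beat_len + i * beat_len / ticks_per_beat
            for beat in range(beats)
            for i in range(ticks_per_beat)]

# At 120 BPM a beat lasts 0.5 seconds.
print(tick_times(120, 4))  # [0.0, 0.125, 0.25, 0.375]
print(tick_times(120, 3))  # [0.0, 0.1666..., 0.3333...]
```

The two tracks coincide only on the downbeat; every other event falls at a different moment, which is exactly the kind of interlocking rhythm a beats-per-measure grid makes awkward to express.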
When you're ready to store your work, your sequence tracks are saved as a "pattern". All tracks are saved at once; there is no facility for saving or exporting a single track from the sequencer. However, you can arrange your patterns in sets with an autoplay feature that starts playback as soon as a new pattern is loaded into the sequencer. The pattern file manager has few features, but it suffices — barely — as a playlist for live performance or as a formal composition tool to create definite sequences of patterns.
SoftWerk can be used in a deterministic manner, but its true capability is as a software instrument for realtime improvisation and composition. Alas, there will be no further development of SoftWerk, but at version 3.03 it is a mature work. All it needs now is your input, so put on your creative hat and check it out.
HarmonySEQ and SoftWerk are only two of the sequencers listed on the applications pages at linuxaudio.org. If you're not happy with those two, check out OOM or the non-Sequencer. Even in the audio domain, if it's Linux it's about choice.
Mozilla has enabled its "Firefox Marketplace" web app store on the latest Aurora release of Firefox for Android. Aurora is the development branch of Firefox, so this is definitely a "soft" launch, but it does give developers and users a taste of Mozilla's recent work on Web APIs. Anyone can survey the goods in stock at the marketplace by visiting marketplace.mozilla.org, but installation requires Firefox for Android.
Git 1.8.0 is available. This release features numerous small improvements and bugfixes, but it is also noteworthy for being the last release to support the current default behavior of the git push command. Starting with the next major release, when git push is not supplied with an argument saying which branch to push to the remote, it will "use the "simple" semantics that pushes the current branch to the branch with the same name, only when the current branch is set to integrate with that remote branch." Fortunately, the next release will also provide warnings alerting users to this change.
Wayland 1.0.0 has been released. From the announcement: "We're entering a new, exciting and somewhat scary phase for Wayland. As of this 1.0.0 release, we're changing the development model and committing to the protocol and client side API we have now." The job is far from done, but the developers are getting closer to having a credible replacement for the X Window System.
Arduino has released version 1.5 of its microcontroller programming environment, which supports easier library installation and board selection. The new release also implements compiling "sketches" for multiple processor architectures, a feature designed to support the Arduino Due board, which is also available now. The Due hardware sports a 32-bit ARM Cortex-M3 CPU in place of the 8-bit microcontrollers of previous generations, as well as USB OTG and two digital-to-analog converters (DACs).
Dirk Hohndel has released version 2.1 of Subsurface, marking the first stable public release of the open source divelog program started in 2011 by Linus Torvalds. The application imports data from a wide range of dive computers, offers advanced visualizations, and provides both data-logging and statistical analysis.
Newsletters and articles
On his blog, Christian Heilmann examines the gap between making a source repository public and maintaining a "real" open source project. "Several times during the talk he explained that he does have a job and that he has no time to answer every email. After all, there should be no need to ask questions or get support as the code is available and on Github and people could help each other and be happy that the code was released as open source. How come there was no magical community appearing out of nowhere that took care of all this?" Short answer: if you aren't interested in other people's patches, it's not really an open source project.
An article on Linux.com introduces a spectrum of virtualization. "At XenSummit 2012 in San Diego, Mukesh Rathor from Oracle presented his work on a new virtualization mode, called "PVH". Adding this mode, there are now a rather dizzying array of different terms thrown about -- "HVM", "PV", "PVHVM", "PVH" -- what do they all mean? And why do we have so many? The reason we have all these terms is that virtualization is no longer binary; there is a spectrum of virtualization, and the different terms are different points along the spectrum."
The eighteenth installment of Lennart Poettering's "systemd for Administrators" series covers an interesting newish feature: integrated support for control group resource controllers. "When thinking about service management for systemd, we quickly realized that resource management must be core functionality of it. In a modern world -- regardless if server or embedded -- controlling CPU, Memory, and IO resources of the various services cannot be an afterthought, but must be built-in as first-class service settings. And it must be per-service and not per-process as the traditional nice values or POSIX Resource Limits were."
Followers of the series may also want to take a peek at Part XVII, covering management of the journal.
Page editor: Nathan Willis
The Free Software Foundation has opened nominations for its 15th annual Free Software Awards. There are two awards: the Award for the Advancement of Free Software (to an individual) and the Award for Projects of Social Benefit (to a project). Nominations will be accepted through November 15.
Articles of interest
An interview with Hugo Roy covers his work with FSFE. "At the moment I’m mainly working on setting up our Free Your Android campaign in France, with phone liberation workshops. I really believe in this project: I think mobile devices are becoming more and more important, and having control over them, and more importantly over the services that we run them with, is becoming more important too."
Calls for Presentations
One call for papers invites "submissions of papers addressing all areas of audio processing and media creation based on Linux. Papers can focus on technical, artistic and scientific issues and should target developers or users. In our call for music, we are looking for works that have been produced or composed entirely/mostly using Linux."
|PostgreSQL Conference Europe||Prague, Czech Republic|
|Droidcon London||London, UK|
|PyData NYC 2012||New York City, NY, USA|
|Firebird Conference 2012||Luxembourg, Luxembourg|
|October 27||Linux Day 2012||Hundreds of cities, Italy|
|October 27||Central PA Open Source Conference||Harrisburg, PA, USA|
|Technical Dutch Open Source Event||Eindhoven, Netherlands|
|October 27||pyArkansas 2012||Conway, AR, USA|
|Linaro Connect||Copenhagen, Denmark|
|Ubuntu Developer Summit - R||Copenhagen, Denmark|
|PyCon DE 2012||Leipzig, Germany|
|October 30||Ubuntu Enterprise Summit||Copenhagen, Denmark|
|MeetBSD California 2012||Sunnyvale, California, USA|
|OpenFest 2012||Sofia, Bulgaria|
|ApacheCon Europe 2012||Sinsheim, Germany|
|Apache OpenOffice Conference-Within-a-Conference||Sinsheim, Germany|
|Embedded Linux Conference Europe||Barcelona, Spain|
|LinuxCon Europe||Barcelona, Spain|
|KVM Forum and oVirt Workshop Europe 2012||Barcelona, Spain|
|LLVM Developers' Meeting||San Jose, CA, USA|
|November 8||NLUUG Fall Conference 2012||ReeHorst in Ede, Netherlands|
|Mozilla Festival||London, England|
|Free Society Conference and Nordic Summit||Göteborg, Sweden|
|Python Conference - Canada||Toronto, ON, Canada|
|SC12||Salt Lake City, UT, USA|
|PyCon Argentina 2012||Buenos Aires, Argentina|
|Qt Developers Days||Berlin, Germany|
|19th Annual Tcl/Tk Conference||Chicago, IL, USA|
|Linux Color Management Hackfest 2012||Brno, Czech Republic|
|November 16||PyHPC 2012||Salt Lake City, UT, USA|
|8th Brazilian Python Conference||Rio de Janeiro, Brazil|
|November 24||London Perl Workshop 2012||London, UK|
|Mini Debian Conference in Paris||Paris, France|
|Computer Art Congress 3||Paris, France|
|Lua Workshop 2012||Reston, VA, USA|
|Open Hard- and Software Workshop 2012||Garching bei München, Germany|
|CloudStack Collaboration Conference||Las Vegas, NV, USA|
|Konferensi BlankOn #4||Bogor, Indonesia|
|December 2||Foswiki Association General Assembly||online and Dublin, Ireland|
|December 5||4th UK Manycore Computing Conference||Bristol, UK|
|Open Source Developers Conference Sydney 2012||Sydney, Australia|
|Qt Developers Days 2012 North America||Santa Clara, CA, USA|
|CISSE 12||Everywhere, Internet|
|26th Large Installation System Administration Conference||San Diego, CA, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds