
LWN.net Weekly Edition for November 1, 2012

RTLWS: Realtime Linux for aircraft

By Jake Edge
October 31, 2012

While there was much discussion about realtime Linux at this year's Real Time Linux Workshop (RTLWS)—unsurprisingly—there were few presentations on actually using it. Karl Kufieta's talk on how the realtime kernel was used for aircraft control was one of the exceptions. Kufieta described both the hardware and software components used by him and his colleagues at the University of Braunschweig to develop a Linux-based unmanned aerial vehicle (UAV).

Several different aircraft designs have been used by the University over the years, Kufieta said, for applications ranging from agricultural surveying to boundary layer measurements in Antarctica. Their designs go beyond traditional aircraft to, for example, a helium balloon that won an international challenge to navigate into a building and out through its chimney.

As might be guessed, weight is an important factor. They chose a Gumstix board with an OMAP 3 for the main processor, with a separate data acquisition processor for sensor data. That setup weighs 60g including the GPS antenna. It currently draws less than 2.5 watts, but that should come down once they figure out how to use some of the OMAP power-saving features, he said.

The main processor handles the control algorithms, image processing, and navigation. The data acquisition processor talks to various sensors, actuators, gyroscopes, the GPS, and so on. The two processors communicate over an SPI bus. Depending on the application, there can be more than one data acquisition processor, all of which communicate via CAN bus. In that scenario, one of the data acquisition processors communicates all of the sensor data to the OMAP via SPI messages. This design allows switching to different main processors without changing the design of the rest of the system.

The system is programmed using Simulink, which is an object-oriented simulation and modeling language with a drag-and-drop graphical interface, Kufieta said. It is integrated with MATLAB, which was used to develop some of the navigation and control code. Simulink can generate C code, which can be targeted to run atop an operating system (i.e. on Linux on the main processor) or to run without an operating system for the data acquisition processors. GCC is then used to build the generated code for the ARM processors.

Their goal was to run the Simulink code, particularly the Kalman filter used for navigation, on realtime Linux. To do so, they used the Simulink Real-time Workshop combined with the 2.6.33.9 realtime kernel. Linux has lots of advantages for the project over various commercial alternatives, including drivers for many of the devices (e.g. webcam, WiFi, UMTS), libraries for image processing and other needed tasks, along with support for SSH and other network protocols. The disadvantage is that it "may be a bit complex", and that the delays in thread execution might be too large for their needs.

As it turned out, their testing showed an upper bound of 62µs for the thread execution delay. That was using a "super simple model", but even when running thousands of these simple realtime tasks, the WiFi was still working well, 500-byte SPI messages could be sent in less than a millisecond, and the non-realtime data logging component, which writes debugging data to an SD card, was able to keep up.

To control an airplane, there is a wide variety of complex code that needs to be run. There are equations to turn the various forces measured by the sensors into vectors. The navigation information then needs to be used to determine what kinds of changes to make based on the current position and attitude of the plane. Position and attitude information is derived from a number of different sources, including GPS, an inertial measurement unit (IMU), accelerometers, gyroscopes, and pressure sensors. Those measurements feed into a model that is used to control the actuators to fly the plane.

The Kalman filter is used to reconcile the high-frequency, lower-accuracy data from the IMU, accelerometers, gyroscopes, and so on, with the low-frequency, but much more accurate, GPS data. The IMU and other data can be sampled at 1kHz, though they are currently sampling at 100Hz, while the GPS data is available at roughly 1Hz rates. The Kalman filter then combines that data (along with an estimate of its accuracy) to determine position.
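To make the idea concrete, here is a deliberately tiny, one-dimensional sketch of that kind of fusion — not the aircraft's filter, and with all of the numbers invented — showing how frequent but noisy rate information can be blended with occasional, more accurate absolute fixes:

    # A toy 1-D Kalman filter: dead-reckon at a high rate from (IMU-like)
    # velocity data, then correct roughly once per second with a (GPS-like)
    # absolute position fix.  All values are illustrative.

    def predict(x, p, velocity, dt, q):
        """High-rate prediction step: integrate velocity, let uncertainty grow."""
        return x + velocity * dt, p + q

    def update(x, p, z, r):
        """Low-rate correction step: blend in an absolute measurement z."""
        k = p / (p + r)              # Kalman gain: how much to trust z
        return x + k * (z - x), (1.0 - k) * p

    x, p = 0.0, 1.0                  # initial position estimate and its variance
    for step in range(300):          # 100 Hz predictions for three seconds
        x, p = predict(x, p, velocity=1.0, dt=0.01, q=0.05)
        if step % 100 == 99:         # a GPS-style fix about once per second
            x, p = update(x, p, z=(step + 1) * 0.01, r=0.5)

    print("position estimate: %.2f (variance %.3f)" % (x, p))

A real navigation filter does the same thing with a much larger state vector and covariance matrices in place of the scalar variances used here.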

The processing is broken up into several realtime threads, each running at its own realtime priority, based on 10ms slots. The highest-priority task reads the UART (IMU and GPS) and SPI bus (other sensors), while the others implement processing with a granularity of 100Hz, 10Hz, and 1Hz. In addition, when the GPS reading is available, that wakes up another thread that may take more than 10ms, which means that it will span multiple slots. The time budget was tight, Kufieta said, until they turned on optimization; now the whole model can run with only 60% CPU utilization on the OMAP 3.
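As a rough illustration of that slot structure — a toy, non-realtime sketch with invented function names (and Python 3.3 or later for time.monotonic()), not the team's generated code — the different rates can all be derived from a single 10ms base period:

    import time

    def sample_sensors():   pass   # stand-in for the highest-rate UART/SPI reads
    def control_100hz():    pass   # fast control-loop work
    def navigate_10hz():    pass   # slower navigation update
    def housekeeping_1hz(): pass   # logging, telemetry, and the like

    SLOT = 0.010                   # the 10ms base slot
    next_wake = time.monotonic()
    for slot in range(1000):       # ten seconds' worth of slots
        sample_sensors()
        control_100hz()
        if slot % 10 == 0:
            navigate_10hz()
        if slot % 100 == 0:
            housekeeping_1hz()
        next_wake += SLOT
        time.sleep(max(0.0, next_wake - time.monotonic()))

The actual system instead runs each rate in its own realtime thread at its own priority, so that a slow, low-rate task cannot hold up the fast ones.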

So far, all of the testing has been ground-based, using Simulink's "external mode" to simulate the inputs, and observe the outputs. That allowed the team to "see it fly" in a simulated environment. The first flight tests are planned for later this year. But the question they set out with—"can we run this on realtime Linux?"—has clearly been answered in the affirmative.

[ I would like to thank OSADL and the RTLWS for supporting my travel to Chapel Hill for the conference. ]

Comments (12 posted)

Collaborative coding with Collide and friends

By Nathan Willis
October 31, 2012

Google became the latest entrant into the online code editor races in July, when it released Collide as an open source project. Collide is a lightweight integrated development environment (IDE) that is notable for its support of concurrent editing by multiple users. Although its feature set is slim in comparison to some of the more established offerings, it might still find a niche role to fill — if only because the competition is equally limited when it comes to key features like collaboration.

Originally, Collide was a project that Google was reported to be developing as yet another public service among its existing throngs of web applications. Thomas Claburn at InformationWeek speculated that it was the "Brightly" IDE for Google's Dart language, although Brightly was not announced officially; reports of its existence stem from mentions in a forwarded internal email. Whatever Collide was originally intended to become, however, it never will. Google closed down the branch office that was developing it in July 2012, and the code was released under the Apache 2.0 license by developer Scott Blum.

Blum has continued to post and respond to questions on the project's Google Groups mailing list, but development has slowed to a trickle. Nevertheless, the application is trivial to install. It requires Java 7 on the machine that will act as the server (i.e., hosting the files to be edited); clients access the editing session from their browser. Exactly which browsers are compatible remains a tad murky; some users have reported success with Firefox while others have not. Chrome and Chromium are the safe choices.

Edit-o-rama

The code is provided as a tar archive that can be unpacked and launched without system-wide installation. The launcher is a shell script named collide; although there is no documentation, reading the script reveals that you can specify a session password by appending -password=foo. That option matters because a password must be set in order for users on other machines to join in: the script starts up a web server on port 8080, but without password protection only connections from localhost are accepted.

[Collide]

Collide presents a sign-in screen to every browser, either asking for a name or for a name followed by the password, depending on whether a password was assigned at start-up (users can enter any identifier they want for the name, since it is only used to identify them to other participants in the session). From there, all users have a tree view into the directory from which the server was launched. A collapsible user list sits in the lower right-hand corner, showing the self-selected user names and (more importantly) the Collide-assigned colors of everyone in the session. The actual editing features provided are respectable: line numbering, syntax highlighting, auto-completion, and real-time file tree manipulation (meaning that added, deleted, and moved files are relayed to all of the clients immediately, without requiring a reload).

From a collaboration standpoint, the editor relays changes faster than one can type, which is about the only real requirement. Granted, Collide does not offer TLS/SSL encryption or strong authentication, so it is most likely to be used in LAN environments — where latency is not an issue in any case. The UI distinguishes between concurrent editors only by altering the cursor color, however. This makes it harder to keep track of where other users are in the document than it is in other collaborative editors (some of which, for example, use different text background colors for each user).

There is also a Collide extension available for Chrome that adds JavaScript debugging and real-time editing of "live" CSS. Currently JavaScript, Python, CSS, and HTML are the only languages supported by the syntax highlighter, although there are hooks for enterprising developers to add others — incomplete highlighting code for Dart and XML is buried in the source tree.

Reportedly, there were other features that had been working while Collide was still an active internal project, but just did not make it through the cleanup process before the code was released to the public — including showing other users' text selections. The team decided to forgo implementing persistent user identity and authentication in light of the fact that there are already hosted web services that offer that functionality.

Everybody, edit now

As it is, Collide provides a fast-and-dirty editing option for locally-connected teams or for pair-programming exercises. Developers using Collide will still have to have another means of discussing what they do — either out loud or through other chat applications. But if that list of restrictions sounds like it makes Collide a weak offering, consider that the other major collaborative code editors are equally limited in one way or another.

The most widely discussed in recent years was Mozilla's Bespin. As invariably happens to the browser-maker's projects these days, Bespin underwent Mozilla's spontaneous-renaming process, becoming Skywriter, before being shut down altogether. But the code was handed over to the ACE editor project, which sits at the heart of the popular Cloud9 collaborative editing service.

ACE offers a similar in-browser editing experience to Collide's, plus additional features like breakpoints and code folding (i.e., collapsing blocks of code for the sake of readability), but its collaboration features are an add-on implemented by Cloud9. Although real-time collaboration was a feature during the project's tenure at Mozilla, it is not supported in the downloadable version. ACE is also designed to be an embeddable component served by an existing web server; it is not built for rapid deployment or ad-hoc usage. Mozilla's Kevin Dangoor told one ACE user that interested parties would be welcome to contribute real-time collaboration code, but that was in early 2011, and there does not appear to have been any progress on that front since.

Of course, there are numerous other open source online IDE projects. Adobe has Brackets, which is focused on supporting HTML and JavaScript. The Eclipse project has Orion, which is arguably the closest in design to Collide, in that it can run as a local Java server without requiring a separate web server. Orion incorporates code validation, cross-file searching, and a number of other helpful features, but collaborative editing is not among them.

With the other projects outstripping Collide in terms of traditional IDE features, Collide's primary strength is its multi-user editing. But it does not look like the project has any forward momentum of its own, and in spite of Blum's early comments, there has not been substantial further development. Indeed, Collide developer Jaime Yap said on Google Plus that

We want the Collide opensourcing to serve as reference implementations for cool technologies, features, and interaction concepts we want to see exist in actual web-IDE services. These existing web-IDE services could leverage technology in the Collide stack. Longer term, some enterprising group could fork the Collide codebase and use it to bootstrap their own competing service.

But at this point, it simply does not look like the other web-based IDE projects are interested.

The reason why may be obvious to anyone active in an existing open source project — real-time collaborative editing's main competition is the simplicity offered by distributed version control. There may be a handful of situations where having multiple developers working on a single chunk of code all at once is advantageous, but it is far from the norm. Real-time collaboration requires that the participants work simultaneously and at the same pace, and it makes separating out individual contributions difficult or impossible. Apart from dedicated hack-a-thon events or tutorial sessions, this simply is not the most efficient way for individual developers to work.

Nevertheless, for hack-a-thons and tutorials, Collide could prove to be a valuable asset. There is little administrative overhead and a connection to the outside Internet is not required. It would be gratifying to see Collide take on a new life as an open source effort; some have noted that it builds on the real-time features of the "love it or hate it" Google Wave, which itself is now an Apache Incubator project in search of a killer application. But more likely is a future where Collide joins Wave among the ranks of "slick ideas that never took off" — an unfortunate fate, and one made worse by the fact that the other online IDE projects have yet to deliver an equivalent to Collide's key feature.

Comments (13 posted)

Roll your own fundraising drive with Selfstarter

By Nathan Willis
October 31, 2012

Free software projects in search of a fundraising model now have yet another alternative to consider: Selfstarter, a Ruby-on-Rails application built to emulate Kickstarter-style donation drives in a self-hosted environment. It provides a simple crowd-funding framework, but one that can be adapted to a number of fundraising scenarios. Although it is not the only free software option, it does have its advantages.

Fundraising for free software development has been a popular topic of late. In August, we covered Adam Dingle and Jim Nelson's survey of the available approaches at GUADEC, which centered on a comparison between the popular pre-funding donation model used at Kickstarter and the pay-what-you-like post-funding model used by the Humble Bundle. In September, we looked at Bradley Kuhn's report at LinuxCon North America about the Software Freedom Conservancy's (SFC) successful drives to raise funds tied to specific developer contracts.

MediaGoblin and constituent-relations-management

Since then, at least one well-known project has launched a large-scale campaign of its own: Chris Webber, lead developer of the MediaGoblin web publishing platform, is trying to raise funds to support himself for one year's worth of full-time development. As Webber explained on his blog, the campaign is run through the Free Software Foundation (FSF), but it required some engineering effort to support several features missing from the FSF's base fundraising platform. Specifically, Webber wanted a donor rewards system, the ability to send email updates to donors, and an automatically-updating "progress bar" that tracked the total as pledges came in. All three features are found on the Kickstarter platform. In addition, Webber wanted a custom theme to match the MediaGoblin site, which of course is not supported on Kickstarter.

The software powering the donation drive is CiviCRM, which the FSF adopted in 2010, after having promoted its development as a "high priority project" for several years. CiviCRM is used by several other organizations in the free software community, including the GNOME Foundation, Wikimedia Foundation, and Mozilla. In an email, Webber noted that some of the CiviCRM modifications used to deploy the MediaGoblin drive were trivial — such as the progress bar, which uses a simple AJAX query to the CiviCRM back-end.

Whether the remaining changes constitute an interesting enough module to make into a formal extension or patch set remains an open question. "I think we did a good job of making something that was feature-compatible with Kickstarter, but part of that was also working on things as in terms of looks," he said. "I'd be interested in someone building on that work to make something that people could much more easily click-and-go make a fundraising campaign. Would CiviCRM be a good fit for that? It seems like it's working well for us, but then again, I'm not the one running it."

CiviCRM is not a lightweight package; it is intended to serve as a "relationship management" system over the long term — which in human terms means multiple years. It also needs to be integrated with another content management system like Joomla! or Drupal; installing and maintaining it introduces considerable overhead if all one is interested in is a one-shot fundraising drive.

Self-starting start-ups

In contrast, the Selfstarter application is designed for rapid deployment. The company behind its development is Lockitron, a start-up created to build and sell a smartphone-based keyless door lock system. When the company's project proposal was rejected for inclusion on Kickstarter, the team wrote a work-alike and hosted the fundraising drive independently. The drive was evidently a success; the site reports that well over 1400% of its target amount was raised, and the entire first round of pre-orders is sold out.

Selfstarter is built for Ruby 1.9.2, and is designed to be deployed on production servers using Heroku (Heroku is optional, but recommended). The code is available from a GitHub repository; after cloning the repository, the basic setup can be installed with bundle install --without production and the necessary databases (by default, SQLite) created with rake db:migrate, both of which should be old hat to Rails developers.

Fresh out of the box, Selfstarter does require quite a bit of customization. The config/settings.yml file includes all of the campaign settings, from the project name and fundraising goal to the strings displayed for the various messages (for example, whether donors are called "donors" or "backers," and the note explaining what to expect when the campaign ends).

Selfstarter supports only Amazon payments, though the README file explains that this and several other limitations were choices made for simplicity when Lockitron wrote the application for its own campaign. Similarly, it supports embedding Vimeo videos and using Google Analytics, and it has configuration variables for Facebook and Twitter "share buttons."

Adding support for other payment systems, video-embedding options, and sharing services is among the first requests filed on the issue tracker. Those additions will probably not prove difficult, but the fact that the current values are hardcoded simply illustrates that Selfstarter is a young project that still requires manual configuration. The same is true of the theming and CSS styling; all of the media assets and stylesheets are simple to locate, but radically altering the layout of the pages will require a bit more work.

The area in which Selfstarter really needs further development is its support for different styles of campaign. The Lockitron campaign is built around a single product, offered at a single price. That may work well for the majority of product-driven drives, but it does not offer donors the choice of levels that many Kickstarter campaigns use to "up-sell" potential backers. Tracking of donor movements between levels and statistics on average donations are also unimplemented. Selfstarter is also built around the assumption that the campaign is an all-or-nothing funding proposition (that is, if the target amount is not met, no one will be charged). This is the Kickstarter model, but others may prefer to take a different approach.

Selfstarter also does not implement user accounts, strong authentication, or provisions for emailing donors. The value of features like emailing donors is debatable, although it is a popular way to keep buzz alive in Kickstarter drives. The developers do say that they welcome patches for these and other features, however.

Getting your kicks

With its current feature set, Selfstarter has the makings of a decent funding platform for independent projects. But independent projects are hard for potential donors to discover — a drive run by a well-known entity like the FSF is likely to garner discussion, but for new projects, waiting for visitors flush with cash to stumble across the campaign site is a worrisome proposition.

One intriguing possibility on this front was raised in a thread on the Selfstarter issue tracker. User elf-pavlik asked whether Selfstarter could be merged with the similar open source crowdfunding project Catarse. Diogo Biazus from the Catarse team replied that Selfstarter's identity as a self-hosted fundraising platform made it significantly different from Catarse's model, which involves maintaining a curated stable of multiple projects. But the two could still find a way to interoperate, he said:

So, why not create a channel in Catarse, where we could receive projects from the Selfstarter? In this fashion, Selfstarter could evolve to be the Catarse's entry point to new projects (as an Engine) without losing it's original identity of simple-project platform.

That concept is interesting because it would help any free-software-focused crowdfunding efforts balance the benefits and risks of going-it-alone. As Dingle observed in his GUADEC talk, a pool of multiple fundraising projects hosted at one site is easier for users to find, but it makes smaller and less well-known projects harder to find on the site itself, because the "cool" projects grab most of the attention. The possibility of a hybrid model, where individual projects could run their own campaigns but still be accessible from a central service, might help.

Of course, regardless of whether the campaign is self-hosted, run by the FSF, or promoted on a specialty Web service, the trickiest part of the fundraising drive is not the software component — it is marketing the campaign to donors and convincing them to open their wallets. Selfstarter and similar efforts cannot do that part, though they can take developers' minds off of the implementation problem, which is at least a start.

Comments (2 posted)

Page editor: Jonathan Corbet

Security

Holes discovered in SSL certificate validation

By Jake Edge
October 31, 2012

A stinging indictment of the state of SSL client applications was recently published in the proceedings of the ACM Computer and Communications Security (CCS) conference. The six authors of the paper (three each from Stanford and The University of Texas) describe testing that certainly should have been done as part of the application development process, but clearly wasn't. The problem stems partly from the libraries that provide SSL certificate verification and partly from developers not using them properly. By and large the low-level library cryptographic protocol implementations are fine, the authors found, but the APIs are designed so poorly that developers commonly misuse them.

The paper [PDF] is entitled "The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software" and lives up to that label. It is truly an eye-opening read. The researchers simulated man-in-the-middle attacks using a combination of self-signed and invalid SSL certificates along with a valid certificate for the amusing AllYourSSLAreBelongTo.us domain to observe the effects on various applications. The observations led to the overall conclusion: "SSL certificate validation is completely broken in many critical software applications and libraries".

The researchers used Windows and Ubuntu laptops, a Nexus One smartphone, and an iPad 2 as clients for their testing. A second Ubuntu system running custom Java and C proxies, as well as the Fiddler web debugging proxy, served as the man in the middle. As they noted, the kind of attack that can be launched from the proxies is precisely what SSL is meant to thwart—their testing did not go beyond what an active man in the middle could do:

This is exactly the attack that SSL is intended to protect against. It does not involve compromised or malicious certificate authorities, nor forged certificates, nor compromised private keys of legitimate servers. The only class of vulnerabilities we exploit are logic errors in client-side SSL certificate validation.

What they found was client applications that largely (or completely) accepted various kinds of invalid certificates and blindly sent private information (cookies, authentication information, credit card numbers) over an unsecured link. In many cases, the proxies could make the actual SSL connection to the remote server and funnel the data along—while surreptitiously storing the data away for (ab)use at a later time.

The paper gives plenty of examples of where the applications go wrong. Much of it can be blamed on bad library APIs, they said, in essentially all of the low-level SSL libraries (OpenSSL, GnuTLS, JSSE, CryptoAPI, etc.):

The root cause of most of these vulnerabilities is the terrible design of the APIs to the underlying SSL libraries. Instead of expressing high-level security properties of network tunnels such as confidentiality and authentication, these APIs expose low-level details of the SSL protocol to application developers. As a consequence, developers often use SSL APIs incorrectly, misinterpreting and misunderstanding their manifold parameters, options, side effects, and return values.

While that is certainly a problem, the application authors do bear some of the responsibility as well. It is not difficult to test an application against a few types of bad certificates. Because of the complexity of the APIs, developers should probably have been more wary that they might have made a mistake. Given that there is some kind of security requirement for the application (why use SSL otherwise?), it would seem prudent to verify proper functioning. Using a well-tested, industry-standard library is certainly an important piece of the puzzle, but there is more to it than just that.
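Such a check need not be elaborate. As a rough sketch — not code from the paper; the host name and CA-bundle path are placeholders, and ssl.match_hostname() needs Python 3.2 or later — explicitly validating both the certificate chain and the host name with Python's ssl module looks something like this:

    import socket
    import ssl

    host = "example.com"
    sock = socket.create_connection((host, 443))
    tls = ssl.wrap_socket(sock,
                          cert_reqs=ssl.CERT_REQUIRED,   # reject untrusted chains
                          ca_certs="/etc/ssl/certs/ca-certificates.crt")
                                                         # distribution-specific path
    # Chain validation alone is not enough: a valid certificate for some other
    # domain would still be accepted, so the peer certificate must also be
    # matched against the host that was meant to be reached.
    ssl.match_hostname(tls.getpeercert(), host)
    tls.close()

Pointing the same code (or the application under test) at a server presenting a self-signed or mismatched certificate, and confirming that the connection is refused, is an equally quick negative test.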

There is also the problem of higher-level libraries perched atop OpenSSL, et al. The researchers found that libraries like Apache HttpClient, cURL, and the PHP and Python SSL libraries all had flaws in certificate validation. In fact, Python's urllib, urllib2, and httplib documentation warns that no certificate checking is done by the libraries, which evidently doesn't stop "high security" applications from still using them. One of the examples shows how this can all go awry:

Amazon's Flexible Payments Service PHP library attempts to enable hostname verification by setting cURL's CURLOPT_SSL_VERIFYHOST parameter to true. Unfortunately, the correct, default value of this parameter is 2; setting it to true silently changes it to 1 and disables certificate validation. PayPal Payments Standard PHP library introduced the same bug when updating a previous, broken implementation.
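The same trap is easy to illustrate outside of PHP. In the libcurl releases of that era, the host check had three settings: 0 (no check), 1 (require that a name be present, but do not match it), and 2 (match the name against the host being contacted). Only 2 is a real host check, so passing a boolean — which becomes 1 — silently weakens validation. Here is a sketch (not taken from the paper; the URL and CA path are illustrative) using the Python cURL binding:

    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, "https://example.com/")
    c.setopt(pycurl.SSL_VERIFYPEER, 1)    # verify the certificate chain
    c.setopt(pycurl.SSL_VERIFYHOST, 2)    # 2, not True/1: match the hostname
    # c.setopt(pycurl.CAINFO, "/etc/ssl/certs/ca-certificates.crt")
    #                                     # only if the default CA bundle is wrong
    c.perform()
    c.close()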

These kinds of errors would almost be comical, if they weren't so serious. While it may sound difficult for attackers to position themselves "in the middle", the prevalence of wireless networking these days makes it easier. Taking over an access point (or just running a hostile one as "Free Airport WiFi", say) puts an attacker squarely in the middle. Users that are running the vulnerable applications over a link to a bad access point are susceptible to many kinds of attacks. Compromise of home routers is another possibility. Obviously, malware running upstream in the wired internet is equally dangerous (if harder to pull off).

The researchers conclude the article with recommendations for both application and SSL library developers. For the most part, they are pretty common-sense suggestions (do test your application, don't have multiple inconsistent error reporting mechanisms, etc.) that could certainly apply more widely than just SSL applications and libraries. Obviously, they could not test each and every non-browser SSL-using client (or even all of the SSL client libraries) out there, but the list of vulnerable components from the abstract is rather astonishing:

Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware—including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android—and all applications employing this middleware.

Perhaps the most distressing piece of this research is the appearance—at least—that developers are being pretty cavalier about checking the security baked into their supposedly "high security" applications. Beyond that, it is amazing that payment processors who are dealing with credit cards (and, importantly, credit card fraud) have seemingly not been very diligent about the code they distribute to their customers. All in all, this research shows that even "secure" applications and libraries often skimp on important verification testing—a seriously sad state of affairs.

Comments (28 posted)

Brief items

Security quotes of the week

It checks to see if the /mnt/ubi_boot/mfg_test/enable file exists, and if so, it fires up a telnet service (among other things). However, the mfg_test directory doesn't exist at all on the production system [...] But with the SSID command injection vulnerability, we can easily create it. The commands to create the file are too long to fit into the restricted 32-character SSID input field, so we'll echo them piecemeal into a shell script and then execute that script [...]

Rooted with nothing but the remote control it came with.

-- /dev/ttyS0 on jailbreaking the Netgear NTV300 "NeoTV"

The industry standard is most Social Security numbers are not encrypted. A lot of banks don't encrypt. It's very complicated. It's very cumbersome. There's a lot of numbers involved with it.
-- South Carolina governor Nikki Haley

If you're going to allow users to download all of their data with one command, you might want to double- and triple-check that command. Otherwise it's going to become an attack vector for identity theft and other malfeasance.
-- Bruce Schneier on "data portability" risks

I have X'd out any information that you could use to change my reservation. But it's all there, PNR, seat assignment, flight number, name, [etc.] But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps. 1 beep no Pre-Check, 3 beeps yes Pre-Check. On this trip as you can see I am eligible for Pre-Check. Also this information is not encrypted in any way.
-- John Butler shows how to change a boarding pass for less TSA screening

This iommu encrypts addresses on the device bus to avoid [divulging] information to hackers equipped with bus analyzers. Following 3DES, addresses are encrypted multiple times. A XOR cypher is employed for efficiency.
-- Avi Kivity (thanks to Michael S. Tsirkin.)

Comments (2 posted)

New vulnerabilities

cgit: code execution

Package(s): cgit  CVE #(s): CVE-2012-4465
Created: October 31, 2012  Updated: November 28, 2012
Description: From the CVE entry:

Heap-based buffer overflow in the substr function in parsing.c in cgit 0.9.0.3 and earlier allows remote authenticated users to cause a denial of service (crash) and possibly execute arbitrary code via an empty username in the "Author" field in a commit.

Alerts:
Fedora FEDORA-2012-18464 cgit 2012-11-28
Fedora FEDORA-2012-18462 cgit 2012-11-28
openSUSE openSUSE-SU-2012:1421-1 cgit 2012-10-31
openSUSE openSUSE-SU-2012:1422-1 cgit 2012-10-31

Comments (none posted)

dokuwiki: path disclosure

Package(s): dokuwiki  CVE #(s): CVE-2011-3727 CVE-2012-3354
Created: October 30, 2012  Updated: April 9, 2013
Description: From the Red Hat bugzilla [1, 2]:

A full path disclosure flaw was found in the way DokuWiki, a standards compliant, simple to use Wiki, performed sanitization of HTTP POST 'prefix' input value prior passing it to underlying PHP substr() routine, when the PHP error level has been enabled on the particular server. A remote attacker could use this flaw to obtain full path location of particular requested DokuWiki page by issuing a specially-crafted HTTP POST request. (CVE-2012-3354)

DokuWiki 2009-12-25c allows remote attackers to obtain sensitive information via a direct request to a .php file, which reveals the installation path in an error message, as demonstrated by lib/tpl/index.php and certain other files. (CVE-2011-3727)

Alerts:
Mandriva MDVSA-2013:073 dokuwiki 2013-04-08
Gentoo 201301-07 dokuwiki 2013-01-09
Mageia MGASA-2012-0362 dokuwiki 2012-12-11
Fedora FEDORA-2012-16605 dokuwiki 2012-10-30
Fedora FEDORA-2012-16614 dokuwiki 2012-10-30

Comments (none posted)

drupal7: code execution

Package(s): drupal7  CVE #(s):
Created: October 29, 2012  Updated: October 31, 2012
Description: From the Drupal advisory:

A bug in the installer code was identified that allows an attacker to re-install Drupal using an external database server under certain transient conditions. This could allow the attacker to execute arbitrary PHP code on the original server.

Alerts:
Fedora FEDORA-2012-16421 drupal7 2012-10-28
Fedora FEDORA-2012-16442 drupal7 2012-10-28

Comments (none posted)

exim4: arbitrary code execution

Package(s): exim4  CVE #(s): CVE-2012-5671
Created: October 26, 2012  Updated: November 1, 2012
Description:

From the Debian advisory:

It was discovered that Exim, a mail transport agent, is not properly handling the decoding of DNS records for DKIM. Specifically, crafted records can yield to a heap-based buffer overflow. An attacker can exploit this flaw to execute arbitrary code.

Alerts:
openSUSE openSUSE-SU-2014:0986-1 exim 2014-08-11
openSUSE openSUSE-SU-2014:0983-1 exim 2014-08-11
Gentoo 201401-32 exim 2014-01-27
Fedora FEDORA-2012-17085 exim 2012-10-31
openSUSE openSUSE-SU-2012:1404-1 exim 2012-10-27
Ubuntu USN-1618-1 exim4 2012-10-26
Debian DSA-2566-1 exim4 2012-10-26
Fedora FEDORA-2012-17044 exim 2012-10-30

Comments (1 posted)

java: multiple unspecified vulnerabilities

Package(s): OpenJDK  CVE #(s): CVE-2012-5078 CVE-2012-5080 CVE-2012-5082
Created: October 25, 2012  Updated: October 31, 2012
Description:

From the CVE entries:

CVE-2012-5078: Unspecified vulnerability in the JavaFX component in Oracle Java SE JavaFX 2.2 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors.

CVE-2012-5080: Unspecified vulnerability in the JavaFX component in Oracle Java SE JavaFX 2.2 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors.

CVE-2012-5082: Unspecified vulnerability in the JavaFX component in Oracle Java SE JavaFX 2.2 and earlier allows remote attackers to affect availability via unknown vectors.

Alerts:
SUSE SUSE-SU-2012:1398-1 OpenJDK 2012-10-24

Comments (none posted)

kdelibs: multiple vulnerabilities

Package(s): kdelibs  CVE #(s): CVE-2012-4512 CVE-2012-4513
Created: October 31, 2012  Updated: February 28, 2013
Description: From the Red Hat advisory:

A heap-based buffer overflow flaw was found in the way the CSS (Cascading Style Sheets) parser in kdelibs parsed the location of the source for font faces. A web page containing malicious content could cause an application using kdelibs (such as Konqueror) to crash or, potentially, execute arbitrary code with the privileges of the user running the application. (CVE-2012-4512)

A heap-based buffer over-read flaw was found in the way kdelibs calculated canvas dimensions for large images. A web page containing malicious content could cause an application using kdelibs to crash or disclose portions of its memory. (CVE-2012-4513)

Alerts:
Gentoo 201406-31 konqueror 2014-06-27
Oracle ELSA-2012-1418 kdelibs 2013-02-28
Oracle ELSA-2012-1416 kdelibs 2012-10-30
CentOS CESA-2012:1416 kdelibs 2012-10-30
CentOS CESA-2012:1418 kdelibs 2012-10-30
openSUSE openSUSE-SU-2012:1581-1 kdelibs4 2012-11-28
Scientific Linux SL-kdel-20121030 kdelibs 2012-10-30
Scientific Linux SL-kdel-20121030 kdelibs 2012-10-30
Red Hat RHSA-2012:1418-01 kdelibs 2012-10-30
Red Hat RHSA-2012:1416-01 kdelibs 2012-10-30

Comments (none posted)

kernel: information leak

Package(s): kernel  CVE #(s): CVE-2012-0957
Created: October 29, 2012  Updated: November 6, 2012
Description: From the Red Hat bugzilla:

The uname() syscall since 3.0 with the UNAME26 personality leaks kernel stack memory contents.

Alerts:
openSUSE openSUSE-SU-2013:0396-1 kernel 2013-03-05
Oracle ELSA-2013-2507 kernel 2013-02-28
Ubuntu USN-1704-2 Quantal kernel 2013-02-01
Mageia MGASA-2013-0016 kernel-rt 2013-01-24
Ubuntu USN-1704-1 kernel 2013-01-22
Mageia MGASA-2013-0011 kernel-tmb 2013-01-18
Mageia MGASA-2013-0010 kernel 2013-01-18
Mageia MGASA-2013-0012 kernel-vserver 2013-01-18
Mageia MGASA-2013-0009 kernel-linus 2013-01-18
Ubuntu USN-1649-1 linux-ti-omap4 2012-11-30
Ubuntu USN-1647-1 linux-ti-omap4 2012-11-30
Ubuntu USN-1646-1 linux 2012-11-30
Ubuntu USN-1644-1 linux 2012-11-30
Ubuntu USN-1652-1 linux-lts-backport-oneiric 2012-11-30
Fedora FEDORA-2012-17479 kernel 2012-11-06
Red Hat RHSA-2012:1491-01 kernel-rt 2012-12-04
Ubuntu USN-1645-1 linux-ti-omap4 2012-11-30
Ubuntu USN-1648-1 linux 2012-11-30
Fedora FEDORA-2012-16669 kernel 2012-10-28

Comments (none posted)

libqt4: CRIME attack

Package(s): libqt4  CVE #(s): CVE-2012-4929
Created: October 31, 2012  Updated: November 12, 2014
Description: From the CVE entry:

The TLS protocol 1.2 and earlier, as used in Mozilla Firefox, Google Chrome, and other products, can encrypt compressed data without properly obfuscating the length of the unencrypted data, which allows man-in-the-middle attackers to obtain plaintext HTTP headers by observing length differences during a series of guesses in which a string in an HTTP request potentially matches an unknown string in an HTTP header, aka a "CRIME" attack.

Alerts:
Debian DSA-3253-1 pound 2015-05-07
Fedora FEDORA-2014-13777 Pound 2014-11-12
Fedora FEDORA-2014-13764 Pound 2014-11-07
openSUSE openSUSE-SU-2013:1630-1 openssl 2013-11-06
Gentoo 201309-12 apache 2013-09-23
Ubuntu USN-1898-1 openssl 2013-07-03
Fedora FEDORA-2013-4403 mingw-openssl 2013-04-03
CentOS CESA-2013:0587 openssl 2013-03-09
Oracle ELSA-2013-0587 openssl 2013-03-05
Oracle ELSA-2013-0587 openssl 2013-03-04
CentOS CESA-2013:0587 openssl 2013-03-05
Scientific Linux SL-open-20130304 openssl 2013-03-04
Red Hat RHSA-2013:0587-01 openssl 2013-03-04
Mageia MGASA-2013-0053 qt4 2013-02-16
Debian DSA-2627-1 nginx 2013-02-17
Debian DSA-2626-1 lighttpd 2013-02-17
openSUSE openSUSE-SU-2013:0157-1 libqt4 2013-01-23
openSUSE openSUSE-SU-2013:0143-1 libqt4 2013-01-23
Debian DSA-2579-1 apache2 2012-11-30
Ubuntu USN-1627-1 apache2 2012-11-08
Ubuntu USN-1628-1 qt4-x11 2012-11-08
openSUSE openSUSE-SU-2012:1420-1 libqt4 2012-10-31
Debian-LTS DLA-400-1 pound 2016-01-24

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s): firefox, thunderbird, xulrunner, seamonkey  CVE #(s): CVE-2012-4194 CVE-2012-4195 CVE-2012-4196
Created: October 29, 2012  Updated: November 8, 2012
Description: From the Red Hat advisory:

Multiple flaws were found in the location object implementation in Firefox. Malicious content could be used to perform cross-site scripting attacks, bypass the same-origin policy, or cause Firefox to execute arbitrary code. (CVE-2012-4194, CVE-2012-4195, CVE-2012-4196)

Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0353 iceape 2012-12-07
CentOS CESA-2012:1413 thunderbird 2012-10-30
Mageia MGASA-2012-0312 thunderbird 2012-10-29
Mandriva MDVSA-2012:170 firefox 2012-11-02
Slackware SSA:2012-304-01 thunderbird 2012-10-30
Oracle ELSA-2012-1413 thunderbird 2012-10-30
Mageia MGASA-2012-0311 firefox 2012-10-29
Oracle ELSA-2012-1407 firefox 2012-10-27
Fedora FEDORA-2012-16988 xulrunner 2012-11-08
SUSE SUSE-SU-2012:1426-1 Mozilla Firefox 2012-10-31
Slackware SSA:2012-304-02 seamonkey 2012-10-30
Scientific Linux SL-fire-20121030 firefox 2012-10-30
Fedora FEDORA-2012-16988 firefox 2012-11-08
Oracle ELSA-2012-1407 firefox 2012-10-26
Slackware SSA:2012-300-01 firefox 2012-10-26
Fedora FEDORA-2012-17307 thunderbird 2012-11-08
Scientific Linux SL-thun-20121030 thunderbird 2012-10-30
Fedora FEDORA-2012-17028 xulrunner 2012-10-30
Fedora FEDORA-2012-17028 firefox 2012-10-30
Red Hat RHSA-2012:1413-01 thunderbird 2012-10-29
CentOS CESA-2012:1407 firefox 2012-10-27
CentOS CESA-2012:1407 firefox 2012-10-27
Ubuntu USN-1620-2 thunderbird 2012-10-30
openSUSE openSUSE-SU-2012:1412-1 Mozilla 2012-10-30
CentOS CESA-2012:1413 thunderbird 2012-10-30
Ubuntu USN-1620-1 firefox 2012-10-26
Red Hat RHSA-2012:1407-01 firefox 2012-10-26

Comments (none posted)

optipng: use after free

Package(s): optipng  CVE #(s):
Created: October 31, 2012  Updated: October 31, 2012
Description: From the optipng changelog:

Version 0.7.3 fixed a use-after-free vulnerability in the palette reduction code. This vulnerability was accidentally introduced in version 0.7.

Version 0.7.4 fixed the previous fix, which failed to fix the option -fix. (Thanks to Gynvael Coldwind and Mateusz Jurczyk for the report.)

Alerts:
Fedora FEDORA-2012-16680 optipng 2012-10-31

Comments (none posted)

python-django: information disclosure

Package(s): python-django  CVE #(s): CVE-2012-4520
Created: October 30, 2012  Updated: March 8, 2013
Description: From the Mageia advisory:

The Host header parsing in Django 1.3 and Django 1.4 -- specifically, django.http.HttpRequest.get_host() -- was incorrectly handling username/password information in the header. Using this, an attacker can cause parts of Django -- particularly the password-reset mechanism -- to generate and display arbitrary URLs to users.

Alerts:
openSUSE openSUSE-SU-2013:1248-1 python-django 2013-07-24
openSUSE openSUSE-SU-2013:1203-1 python-django 2013-07-16
Ubuntu USN-1757-1 python-django 2013-03-07
Debian DSA-2634-1 python-django 2013-02-27
Mandriva MDVSA-2012:181 python-django 2012-12-19
Ubuntu USN-1632-2 python-django 2012-11-20
Ubuntu USN-1632-1 python-django 2012-11-15
Mageia MGASA-2012-0315 python-django 2012-10-29
Fedora FEDORA-2012-16440 Django 2012-10-31
Fedora FEDORA-2012-16417 Django 2012-10-30

Comments (none posted)

request-tracker: multiple vulnerabilities

Package(s): request-tracker3.8  CVE #(s): CVE-2012-4730 CVE-2012-4732 CVE-2012-4734 CVE-2012-4735 CVE-2012-4884
Created: October 29, 2012  Updated: November 8, 2012
Description: From the Debian advisory:

CVE-2012-4730: Authenticated users can add arbitrary headers or content to mail generated by RT.

CVE-2012-4732: A CSRF vulnerability may allow attackers to toggle ticket bookmarks.

CVE-2012-4734: If users follow a crafted URI and log in to RT, they may trigger actions which would ordinarily be blocked by the CSRF prevention logic.

CVE-2012-4735: Several different vulnerabilities in GnuPG processing allow attackers to cause RT to improperly sign outgoing email.

CVE-2012-4884: If GnuPG support is enabled, authenticated attackers can create arbitrary files as the web server user, which may enable arbitrary code execution.

Alerts:
Debian DSA-2567-1 request-tracker3.8 2012-10-26
Fedora FEDORA-2012-17174 rt3 2012-11-08
Fedora FEDORA-2012-17218 rt3 2012-11-08

Comments (none posted)

rtfm: privilege escalation

Package(s): rtfm  CVE #(s): CVE-2012-4731
Created: October 29, 2012  Updated: October 31, 2012
Description: From the Debian advisory:

It was discovered that RTFM, the FAQ manager for Request Tracker, allows authenticated users to create articles in any class.

Alerts:
Debian DSA-2568-1 rtfm 2012-10-26

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.7-rc3, released on October 28. Linus notes that it's mostly a lot of small changes in a lot of places. But he has found a new problem to be concerned about: "And talking about the shortlog: christ people, some of you need to change your names. I'm used to there being multiple 'David's and 'Peter's etc, but there are three different Linus's in just this rc. People, people, I want to feel like the unique snowflake I am, not like just another anonymous guy in a crowd."

Stable updates: 3.0.49, 3.4.16, and 3.6.4 all came out on October 28; they were followed by 3.0.50, 3.2.33, 3.4.17 and 3.6.5 on October 31. All contain another set of important fixes. Worth noting is the fact that 3.6.5 disables by default the hard and soft link security restrictions added during the 3.6 merge window in response to another reported regression.

Comments (none posted)

Quotes of the week

And the next technology journalist that asks you whether you want fonts that small, I'll just hunt down and give an atomic wedgie.
Linus Torvalds doesn't do blocking wedgies

And suddenly causing a complete cessation of vm scanning at a particular magic threshold seems rather crude, compared to some complex graduated thing which will also always do the wrong thing, only more obscurely ;)
Andrew Morton

You will get this message once a day until you've dealt with these bugs!
bugzilla@kernel.org failing to win friends and influence developers

Comments (18 posted)

Kroah-Hartman: Help wanted

Greg Kroah-Hartman is looking for somebody to help him put stable kernels together. "I'm looking for someone to help me out with the stable Linux kernel release process. Right now I'm drowning in trees and patches, and could use some one to help me sanity-check the releases I'm doing."

Comments (10 posted)

Airlie: raspberry pi drivers are NOT useful

Kernel graphics maintainer Dave Airlie is rather unimpressed with the Raspberry Pi driver release; it is not something that will ever be merged. "Why is this bad? You cannot make any improvements to their GLES implementation, you cannot add any new extensions, you can't fix any bugs, you can't do anything with it. You can't write a Mesa/Gallium driver for it. In other words you just can't."

Comments (79 posted)

Kernel development news

A potential NUMA scheduling solution

By Jonathan Corbet
October 31, 2012
Earlier this year, two different developers set out to create a solution to the problem of performance (or the lack thereof) on non-uniform memory access (NUMA) systems. The Linux kernel's scheduler will freely move processes around to maximize CPU utilization on large systems; unfortunately, on NUMA systems, that can lead to processes being separated from their memory, reducing performance considerably. Two very different solutions to the problem were posted, leaving no clear path toward a single solution that could be merged into the mainline. Now, perhaps, that single solution exists, but the way that solution came about raises some questions.

The first approach was Peter Zijlstra's sched/numa patch set. It added a "lazy migration" mechanism (implemented by Lee Schermerhorn) that uses soft page faults to move useful pages to the NUMA node where they were actually being used. On top of that, it implemented a new "home node" concept that keeps the scheduler from moving processes between NUMA nodes whenever possible; it also tries to make memory allocations happen on the allocating process's home node. Finally, there was a pair of system calls allowing a process to change its home node and to form groups of processes that should all run on the same home node.

Andrea Arcangeli's AutoNUMA patch set, instead, was more strongly focused on migrating pages to the nodes where they are actually being used. To that end, it created a tracking mechanism (again, using page faults) to figure out where page accesses were coming from; there was a new kernel thread to perform this tracking. Whenever the generated statistics revealed that too many pages were being accessed from remote nodes, the kernel would consider either relocating the processes performing those accesses or relocating the pages; either way, the goal was to get both the processes and the pages on the same node.

To say that the two developers disagreed on the right solution is to understate the case considerably. Peter claimed that AutoNUMA abused the scheduler, added too much memory overhead, and slowed scheduling decisions unacceptably. Andrea responded that sched/numa would not work well, especially for larger jobs, without manual tweaking by developers and/or system administrators. The conversation was rather less than polite at times — until it went silent altogether. Peter last responded to the AutoNUMA discussion at the end of June — this example demonstrates the level of the discussion at that time — and the last sched/numa posting happened at the end of July.

The silence ended on October 25 with Peter's posting of the numa/core patch set. The patch introduction reads:

Here's a re-post of the NUMA scheduling and migration improvement patches that we are working on. These include techniques from AutoNUMA and the sched/numa tree and form a unified basis - it has got all the bits that look good and mergeable....

These patches will continue their life in tip:numa/core and unless there are major showstoppers they are intended for the v3.8 merge window. We believe that they provide a solid basis for future work.

It is worth noting that the value of "we" is not well defined anywhere in the patch set.

Numa/core brings in much of the sched/numa patch set, including the lazy migration scheme, the memory policy changes, and the home node concept. The core scheduler change tries to keep processes on their home node by adding resistance to moving a process away from that node, and by trying to push misplaced processes back to the home node during load balancing. There is also a feature to wake sleeping processes on the home node regardless of where they were running before, but it is disabled because "we found this to be far too aggressive." Missing from this patch set is the proposed numa_tbind() and numa_mbind() system calls; it's not clear whether those are meant to be added later.

The patch set also includes some ideas from AutoNUMA. The page structure gains a new last_nid field to record the ID of the NUMA node last observed to access the page. That new field will cause struct page to grow on 32-bit systems, which is never a popular thing to do. It is expected, though, that most systems where better NUMA scheduling really matters will be 64-bit.

Scanning of memory is still done: pages are marked as being absent so that usage patterns can be observed from the resulting soft faults. But the kernel thread to perform this scanning no longer exists; it is, instead, done by each process in its own context. The number of pages scanned is proportional to each process's run time, so little effort is put into the scanning of pages belonging to processes that rarely run. Scanning does not start until a given process has accumulated at least one second of run time. It makes sense that there is little value in optimizing the NUMA placement of short-lived processes; in this case, that intuition was confirmed with an improvement in the all-important kernel-compilation benchmark. Most of the memory overhead added by the original AutoNUMA patches has been removed.

Thus far, there has been little in the way of reviews of this large patch set, and no benchmark results posted. Things will have to pick up on that front if a patch set of this size is going to be ready by the time the 3.8 merge window opens. The numa/core patches may improve NUMA scheduling, and they may be the right basis to move forward with, but the development community as a whole does not know that yet.

There is one other thing that jumps out at an attentive observer. These patches credit Andrea's work with a set of Suggested-by and Based-on-idea-by tags, but none of them are signed off by Andrea. It would appear that, while some of his ideas have found their way into this patch set, his code has not. But, despite the fact that he did not write this code, Andrea has been conspicuously absent from the review discussion.

In the absence of any further information, it is hard not to conclude that Andrea has removed himself from this particular project. Certainly Red Hat cannot be faulted if it is unable to feel entirely comfortable when some of its highest-profile engineers are fighting among themselves in a public forum. So it is not hard to imagine that the developers involved were given clear instructions to resolve the situation. If that were the case, we would have a solution that was arrived at as much by Red Hat management as by the wider development community.

Such speculation (and it certainly is no more than that), of course, says nothing about the quality of the current patch set. That will be judged by the development community, presumably between now and when the 3.8 merge window opens. Assuming the patches pass this review, we should have an improved NUMA scheduler and an end to an ongoing dispute. As the number of NUMA (and NUMA-like) systems grows, that can only be a good thing.

Comments (9 posted)

Relocating RCU callbacks

By Jonathan Corbet
October 31, 2012
The read-copy-update (RCU) subsystem is one of the kernel's key scalability mechanisms; it is usually invoked in situations where normal locking is far too slow. RCU is known to be complex code, to the point that lesser kernel developers will happily proclaim that they do not understand it. That should not be taken to mean that RCU cannot be made faster or more complex, though. Paul McKenney's "callback-free CPUs" patch set is a case in point.

Much RCU processing has traditionally been done in software interrupt (softirq) context, meaning that the actual processing is done at seemingly random times during the execution of whatever process happens to have the CPU at the time. Softirqs thus have the potential to add arbitrary delays to the execution of any process, regardless of that process's priority. It is not surprising that the realtime developers have been working on the softirq problem; non-realtime developers, too, have been known to grumble about softirq overhead. Depending on the load on the system, RCU processing can be a significant part of the overall softirq workload. So improvements in RCU processing can help eliminate unwanted latencies and jitter even if software interrupt handling as a whole remains unchanged.

Paul recently described some work in that direction on this page; as of the 3.6 kernel, much of the RCU grace period handling has been moved to kernel threads. RCU works by replacing a data structure with a modified version, retaining the old copy but hiding it from view so that no new references to it will be created. The RCU rules guarantee that any data structure made inaccessible in this way before a "grace period" passes will have no outstanding references after that period; the determination of grace periods is thus a crucial step in the cleanup and deletion of those old data structures. It turns out that identifying grace periods in a scalable and efficient manner is not a trivial task; see, for example, this article for details.

Moving grace period handling to kernel threads takes a certain amount of RCU overhead out of the softirq path, reducing jitter and allowing that handling to be assigned priorities like any other process. But, even with grace period processing out of the way, RCU still has a fair amount of work to do in softirq context. Near the top of the list is the calling of RCU callbacks — the functions that actually perform cleanup work after a grace period passes. With some workloads, the number of callbacks can get quite large. Users concerned about jitter have expressed a desire to move as much kernel processing out of the way as possible; RCU callback processing represents a significant chunk of that work.

That is the motivation for Paul's callback-free CPUs patch set. The idea is simple enough: rather than invoke RCU callbacks in softirq context, the kernel can just shunt that work off to yet another kernel thread. The implementation, of course, is just a bit more involved than that.

The patch set adds a new rcu_nocbs= boot-time parameter allowing the system administrator to specify a set of CPUs to run in the "no callbacks" mode. It is not possible to do so with every CPU in the system; at least one processor must remain in the traditional mode or grace period processing will not function properly. In practical terms, that means that CPU0 cannot be run in the no-callbacks mode and any attempt to hot-remove the last traditional-RCU CPU will fail.
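As a purely illustrative example (the CPU list, kernel image, and root device below are hypothetical), a system whose latency-sensitive work runs on CPUs 1-7 might be booted with a command line along these lines, leaving CPU0 to handle callbacks in the traditional way:

    # In the bootloader configuration:
    linux /boot/vmlinuz root=/dev/sda1 ro rcu_nocbs=1-7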

When a CPU (call it CPUN) runs without RCU callbacks, there will be a separate rcuoN process charged with callback handling. When that process wakes up, it will grab the list of outstanding callbacks for its assigned CPU, using some tricky atomic-exchange techniques to avoid the need for explicit locking. The thread will wait for the grace period to expire, then run through the callbacks; after that the cycle begins anew. Normally the process wakes up when callbacks are added to an empty list, but a separate boot parameter instructs the threads to poll occasionally for new work instead. Polling has its costs, especially on systems where energy efficiency and letting CPUs sleep are priorities, but it can improve RCU's CPU efficiency, helping throughput.

Users who are so sensitive to jitter that they want to reconfigure RCU callback processing may not be satisfied just by having that processing move to a thread that competes with their workload. The good news for those users is that, once callback processing lives in its own thread, it can be assigned a priority that fits with the overall goals of the system. Perhaps even better, the callback thread does not have to run on the CPU whose callbacks it is handling; by playing with CPU affinities, administrators can move that work to other CPUs, freeing the no-callback CPUs to focus more exclusively on the user's workload.
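A sketch of what that tuning might look like with stock utilities follows; the thread-name pattern, priorities, and CPU numbers are illustrative and may not match the final patches exactly:

    # Locate the callback-handling threads (one per no-callbacks CPU):
    pgrep -l rcuo

    # Pin thread <pid> to a housekeeping CPU and give it a modest realtime priority:
    taskset -cp 0 <pid>
    chrt -f -p 1 <pid>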

No-callback CPUs are thus part of the larger effort toward fully-dedicated CPUs that run nothing but the user's processes. The idea is that, on such a CPU, the workload would be fully in charge and need never worry that the kernel would get in the way when there is time-sensitive work to be done. Solving that problem in a robust and maintainable manner is a rather larger problem; it requires the NoHZ mechanism and more. It has been recognized for some time that this problem will need to be solved in smaller pieces; the no-callback CPUs patch is one of those pieces.

This patch set is in its second iteration; comments this time around have been scarce. Barring surprises, this feature seems likely to be pushed into the 3.8 kernel. Most users will not care, but, for those who obsess about latency and jitter, it should be a welcome addition.

Comments (none posted)

Thoughts on the ext4 panic

By Jonathan Corbet
October 29, 2012
In just a few days, a linux-kernel mailing list report of ext4 filesystem corruption turned into a widely-distributed news story; the quality of ext4 and its maintenance, it seemed, was in doubt. Once the dust settled, the situation turned out to be rather less grave than some had thought; the bug in question only threatened a very small group of ext4 users using non-default mount options. As this is being written, a fix is in testing and should be making its way toward the mainline and stable kernels shortly. The bug was obscure, but there is value in looking at how it came about and the ripples it caused.

The timeline

On October 23, user "Nix" was trying to help track down an NFS lock manager crash when he ran into a little problem: the crash kept corrupting his filesystem, making the debugging task rather more difficult than it would otherwise have been. He reported the problem to the linux-kernel mailing list; he also posted a warning for other LWN readers. The ext4 developers moved quickly to find the problem, coming up with a hypothesis within a few hours of the initial report. Unfortunately, the hypothesis turned out to be wrong.

Before that became clear, though, a number of news outlets had posted articles on the problem. LWN was not the first to do so ("first" is not at the top of our list of priorities), but, late on the 24th, we, too, posted an item about the issue. It quickly became clear, though, that the original hypothesis did not hold water, and that further investigation was in order. That investigation, as it turns out, took a few days to play out.

Eric Sandeen eventually tracked the problem down to this commit which found its way into the mainline during the 3.4 merge window. That change was meant to be a cleanup, gathering the inode allocation logic into a single function and removing some duplicated code. The unintended result was to cause the inode bitmap to be modified outside of a transaction, introducing unchecksummed data into the journal. If the system crashed during that time, the next mount would encounter checksum errors and refuse to play back the journal; the filesystem was then seen as being corrupt.

The interesting thing is that, on most systems, this problem will never come about because, on those systems, the journal checksums do not actually exist. Journal checksumming is an optional feature, not enabled by default, and, evidently, not widely used. Nix had turned on the feature somewhat inadvertently; most other users do not turn it on at all, even if they are aware it exists. Anybody who has journal checksums turned off will not be affected by this bug, so very few ext4 users needed to be concerned about potential data corruption.
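For readers who want to see whether they are in that small group: journal checksumming is requested with mount options. A quick sketch, where the device and mount point are placeholders:

    # Enable journal checksumming explicitly (it is not on by default):
    mount -o journal_checksum /dev/sdb1 /mnt

    # journal_async_commit also turns checksumming on as a side effect:
    mount -o journal_async_commit /dev/sdb1 /mnt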

As an interesting aside, checksums on the journal are a somewhat problematic feature; as seen in this discussion from 2008, it is not at all clear what the best response should be when journal checksums fail to match. The journal checksum may not be information that the system can reasonably act upon; indeed, as in this case, it may create problems of its own.

Eric's patch appears to fix the problem; corrupted journals that were easily observed before its application do not happen afterward. There will naturally be a period of review and testing before this change is merged into the mainline — nobody wants to create a new problem through undue haste — but kernel releases with a version of the fix (it has already been revised once) should be available to users in short order. But most users will not really care, since they were not affected by the problem in the first place. They may care more about the plans to improve the filesystem test suites so that regressions of this nature can be more easily caught in the future.

Analysis

In retrospect, the media coverage of this bug was clearly out of proportion to that bug's impact. One might attribute that to a desire for sensational stories to drive traffic, and that may well be part of what was going on. But there are a couple of other factors that are worth keeping in mind before jumping to that judgment:

  • Many media outlets employ editors and writers who, almost beyond belief, are not trained in kernel programming. That makes it very hard for them to understand what is really going on behind a linux-kernel discussion even if they read that discussion rather than basing a story on a single message received in a tip. They will see a subject like "Apparent serious progressive ext4 data corruption," along with messages from prominent developers seemingly confirming the problem, and that is what they have to go with. It is hard to blame them for seeing a major story in this thread.

  • Even those who understand linux-kernel discussions (LWN, in its arrogance, places itself in this category) can be faced with an urgent choice. If there were a data corruption bug in recent kernels, then we would be beyond remiss to fail to warn our readers, many of whom run the kernels in question. There comes a point where, in the absence of better information, there is no alternative to putting something out there.

The ext4 developers certainly cannot be faulted for the way this story went. They did what conscientious developers do: they dropped everything to focus on what appeared to be a serious regression affecting their users. They might have avoided some of the splash by taking the discussion private and not saying anything until they were certain of having found the real problem, but that is not the way our community works. It is hard to imagine that pushing development discussions out of the public view is going to make things better in the long run.

Thus, one might conclude that we are simply going to see an occasional episode like this, where a bug report takes on a life of its own and is widely distributed before its impact is truly understood. Early reports of software problems, arguably, should be treated like early software: potentially interesting, but likely to be in need of serious review and debugging. That's simply the world we live in.

A more serious concern may apply to the addition of features to the ext4 filesystem. Ext4 is viewed as the stable, production filesystem in the Linux kernel, the one we're supposed to use while waiting for Btrfs to mature. One might well question the addition of new features to this filesystem, especially features that prove to be rarely used or that don't necessarily play well with existing features. And, sure enough, Linux filesystem developers have raised just this kind of worry in the past. In the end, though, the evolution of ext4 is subject to the same forces as the rest of the kernel; it will go in the directions that its developers drive it. There is interest in enhancing ext4, so new features will find their way in.

Before getting too worried about this prospect, though, it is worth thinking about the history of ext4. This filesystem is heavily used with all kinds of workloads; any problems lurking within will certainly emerge to bite somebody. But problems that have affected real users have been exceedingly rare and, even in this case, the number of affected users appears to be countable without running out of fingers. Ext4, in other words, has a long and impressive record of stability, and its developers are determined to keep it that way; this bug can be viewed as the sort of exception that proves the rule. One should never underestimate the value of good backups, but, with ext4, the chances of having to actually use those backups remain quite small.

Comments (81 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Fedora and LVM

By Jonathan Corbet
October 31, 2012
Those following the progress of the Fedora 18 development cycle cannot have failed to notice that the rework of Anaconda, the distribution's installer, is not going as smoothly as one might have liked. Complaints are common, and there is a real risk that installer problems will end up being what users remember about this release. Given that, it may seem surprising that the Fedora developers intend to change one of the fundamental decisions made by the developers of the new installer.

The logical volume manager (LVM) sits above the block layer, providing abstract storage devices that can be resized, encrypted, and more. In the absence of explicit instructions to the contrary, Anaconda has installed systems using LVM for many years. LVM adds some flexibility to an installed system and supports a number of official Fedora features, but it has the potential to confuse users who are not prepared for the addition of a layer of indirection over their disk partitions. It also irritates users who know they don't need LVM and would rather not see another layer of software in the block I/O path. Grumbling about the use of LVM in Fedora is not uncommon.
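For readers who have never looked under that layer of indirection, a minimal LVM stack might be built with commands like the following; the device names and sizes are, of course, illustrative:

    pvcreate /dev/sda2                      # turn a partition into an LVM physical volume
    vgcreate vg_fedora /dev/sda2            # group physical volumes into a volume group
    lvcreate -n lv_root -L 20G vg_fedora    # carve a logical volume out of the group
    mkfs.ext4 /dev/vg_fedora/lv_root        # then put a filesystem on the logical volume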

The new installer changes the default; unless the user asks for LVM explicitly, current F18 testing releases will install directly onto disk partitions and leave LVM out of the picture. How that change came to be is not entirely clear; it does not seem that there was much, if any, discussion in the Fedora development community first. That did finally begin, though, on October 25, when Adam Williamson filed a Fedora bug asking that the change be reverted so that Fedora would, once again, install and use LVM by default.

That discussion got off to a bit of a rough start; arguably, Adam's phrasing did not help matters:

It's kind of hard to really swing the 'LVM annoys people' argument too. Well, it _does_, but not for very good reason. That argument boils down to 'catering to idiots': the people who say they're annoyed by LVM as default are people who know raw partitioning, don't understand LVM, and are resisting change.

In the end, though, the real arguments in favor of changing Anaconda to restore LVM-by-default came to the fore; there are several of them. The first of these is that a number of advertised Fedora features depend on LVM; a user who does without LVM will end up without the ability to use System Storage Manager, resize filesystems, migrate filesystems across disks, and more. Thus, Ric Wheeler said, turning LVM off by default constitutes a regression that needs to be reverted.
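Filesystem resizing is the classic example: with LVM underneath, growing a volume and the filesystem on it is a quick operation (the volume names and sizes here are again illustrative), while without LVM the user is left repartitioning the disk:

    lvextend -L +10G /dev/vg_fedora/lv_home     # grow the logical volume
    resize2fs /dev/vg_fedora/lv_home            # grow the ext4 filesystem to fill it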

There is also the little problem that Fedora's documentation is written with the assumption that LVM is in use. Turning LVM off obsoletes that documentation without fixing it. Quality documentation is hard enough to come by as it is; causing what documentation exists to become inaccurate without (as LVM proponents see it) a proper justification just makes things worse to no good end.

Also relevant is that the current plan is for Fedora to switch to Btrfs during the Fedora 19 development cycle. Given that, making a fundamental change to the Fedora storage stack now makes little sense to many developers. It will just add churn to a system that is going away anyway, leaving one Fedora release with a storage setup that differs from all the others. That has the potential to confuse users and increase the amount of work the Fedora storage developers have to put into supporting the F18 release. Even if the Btrfs transition is delayed to F20 (an outrageously shocking and unpredictable course of events, but one might as well ponder even highly unlikely scenarios), a case could be made that it might be better not to perturb the existing stack unnecessarily in the meantime.

Finally, it has been pointed out that the change to Anaconda that would return it to the pre-F18 default is quite small; it is really just changing the default value of a checkbox on an installer screen. All of the code for installing with LVM — code that has been used for many releases — is still there and working as well as ever. So the change should be safe and should not be cause for yet another slip in the Fedora 18 schedule.

Arguments for leaving the default as it is (and, thus, continuing to install without LVM) usually start with the fact that it is quite late in the F18 development cycle; unnecessary changes — even small and seemingly safe changes — should be avoided if possible. That is doubly true for the installer, which has had trouble stabilizing as it is. Rather than revisit established decisions, it is said, it would be better to focus on fixing the known problems and getting a solid release out the door.

Beyond that, some developers question the value of LVM. Fedora developer "drago01" asked:

Resizing partitions isn't that common and not the primary use of LVM (you can do it without it and most users won't). It is still pretty much useless (as in the extra features won't be used) for the average desktop / laptop installs. For most users all it does is slowing down the boot process (we should stop adding crap to the default boot process because someone might need it on some obscure case).

LVM has been fingered in the past for slowing down the boot process; indeed, it has been called out as one of the chief offenders. Discussion in the bug report suggests that LVM's dependency on udev-settle, which is the real cause of boot-time delays, has been significantly reduced, to the point that many or most installations no longer need it. But, if boot time is a prime concern, the addition of another service to set up in the boot path is unlikely to help the situation.
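Readers curious about whether udev-settle (or anything else) is dragging out their own boot can get a rough picture from systemd's own tooling; for example:

    systemd-analyze                 # overall time spent in kernel, initrd, and userspace
    systemd-analyze blame | head    # the slowest units; udev-settle waits show up here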

Finally, opponents argue that LVM is confusing to relatively unsophisticated users who will end up being unable to manage their systems properly. It is an added level of abstraction that makes things harder without bringing any significant new value. It would be better, they argue, to behave like many other distributions and just install directly onto disk partitions by default.

The Fedora Engineering Steering Committee (FESCO) took up this issue and brought it to a vote on October 30. Full consensus was not to be found there either, but, in the end, FESCO voted in favor of the change back to the pre-F18 default. So, unless something gets derailed somewhere, Fedora 18 will, like its predecessors, install and use LVM by default.

Comments (155 posted)

Brief items

Distribution quotes of the week

In the Linux space what the games people seem to be doing now is realising that they should just replace "Linux" with "Ubuntu" and they'll cover all the general end user bases, and the techie oriented distros will work out themselves how to to make it work 8)
-- Alan Cox

If you don't like the rules feel free to whine, beg, and plead to QA, the council, $DIETY, or your mother, but follow the rules until they're changed. There is always room for mistakes, but big projects don't work when everybody just does whatever they feel like doing.
-- Rich Freeman

Should we just skip F18? (like seriously).
-- Dave Airlie

Comments (none posted)

Yocto 1.3 "danny" Released

Version 1.3 of the Yocto embedded distribution builder is out. It features a new terminal-based interface, a lot of usability improvements, a number of upgraded components, and over 500 bug fixes.

Full Story (comments: 1)

Linaro ARMv8 Downloads Now Available

Linaro has announced the release of ARMv8 images. "The ARMv8 architecture offers 64-bit computing for ARM SoCs. ARM and Linaro have been hard at work to enable opensource software for the new AArch64 execution state and for the new A64 instruction set and Linaro is making early ARMv8 images available to interested developers. While hardware isn’t available for purchase, ARM offers a free of charge ARMv8 virtual platform called “Foundation model” which allows booting Linaro’s GNU/Linux images." (Thanks to Riku Voipio)

Comments (1 posted)

Distribution News

Debian GNU/Linux

DebConf13 sponsors wanted

DebConf13 is scheduled to take place in Switzerland next year. Sponsors are needed to make it happen. "After DebConf12's great success we are working to secure the next DebConf in Switzerland. We are now contacting sponsors from all over the world and we would like to ask all Debian contributors to inform us about any useful connections they know or have, which could result in sponsorship for DebConf13."

Full Story (comments: none)

Red Hat Enterprise Linux

Red Hat Enterprise Linux Extended Update Support 6.0 1-Month EOL Notice

Red Hat has announced a 1 month notice for Red Hat Enterprise Linux Extended Update Support Add-On (EUS) 6.0. "In accordance with the Red Hat Enterprise Linux Errata Support Policy, the Extended Update Support for Red Hat Enterprise Linux 6.0 will end on 30th November, 2012."

Full Story (comments: none)

Ubuntu family

Ubuntu 11.04 (Natty Narwhal) end-of-life

Ubuntu 11.04 (Natty Narwhal) reached the end of support on October 28. "The supported upgrade path from Ubuntu 11.04 is via Ubuntu 11.10 (Oneiric Ocelot)."

Full Story (comments: none)

New Distributions

sposkpat

sposkpat (Single Purpose Operating System: KPatience) is a new distribution based on Debian 6.0 and KDE's kpat. It runs in RAM as a full-screen solitaire card game with no distractions, not even a clock. The website notes: "Playing solitaire games helps you to greatly improve your attention span and enhance the ability to concentrate."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Garrett: A detailed technical description of Shim

Matthew Garrett describes Shim, the first stage bootloader used to support Secure Boot. "handle_image() is the real meat of Shim. First it has to examine the header data in read_header(), copying the relevant bits into a context structure that will be used later. Some basic sanity checks on the binary are also performed here. If we're running in secure mode (ie, Secure Boot is enabled and we haven't been toggled into insecure mode) we then need to verify that the binary matches the signature and hasn't been blacklisted."

Comments (none posted)

Ubuntu: No more alphas, just one beta (The H)

The H reports on a change in the Ubuntu 13.04 release process. There will not be any 13.04 alpha releases. "With there only being one beta and one final release of an Ubuntu version, the archive of code will now only be frozen late in the development cycle. This change could also allow for the introduction of Mark Shuttleworth's "Ta-da" features quite late in the development cycle, though currently it is unclear how they will be integrated into the tree; there could be a parallel testing effort for a version with those features included or if the features could be added earlier in the cycle to allow their testing to begin sooner."

Comments (14 posted)

Page editor: Rebecca Sobol

Development

A brief sketch of Processing

By Michael Kerrisk
October 31, 2012

Processing, a somewhat confusingly named language, has been around since 2001. As the language heads toward a 2.0 release, it seems a good time to take a look at the language, who uses it, and what they use it for.

History and goals

The Processing project was started by Casey Reas and Benjamin Fry. Both developers were involved in the development of the Design By Numbers programming language, a 1990s experiment at the MIT Media Lab whose goal was to make programming easier for non-programmers. Like Design By Numbers, Processing was initially developed as a tool to teach programming in the visual arts. Consequently, the language and its development environment are heavily oriented toward visual output, animation, and graphical interaction. Those attributes provide an "instant gratification" aspect that makes the language engaging for just about anyone to learn, perhaps even more so for people with little previous programming experience. Processing supports both 2D and 3D graphics programming.

The project provides both a language (translator plus libraries) and a development environment (the Processing Development Environment, or PDE). Both of these are written in Java, with the consequence that Processing runs on a range of operating systems including Linux, Windows, and Mac OS X. There's also support for creating Android applications that, the developers warn, should be considered to be Beta quality. The PDE is licensed under the GNU GPLv2+ and the libraries are licensed under the GNU LGPLv2+. (The Processing web site provides instructions for downloading the source code from the Subversion repository.)

Syntactically, Processing is very close to Java, and in fact the Processing translator simply translates a Processing program—called a "sketch"—into Java before running it. However, by contrast with Java, Processing doesn't require the programmer to understand object-oriented programming concepts such as classes and objects (although most advanced Java features are available should the programmer want to make use of them). In addition, the basics of the Processing graphics library can be quickly grasped, but the library is sufficiently rich that Processing programs can produce sophisticated graphical output. The upshot is that programs that produce useful graphical output are typically much shorter and more easily written than their Java equivalents.

Installation and set-up

[The Processing Development Environment]

Installation of Processing is simple: download the tarball, unpack it, and then execute the processing shell script. Running the shell script fires up the PDE, whose operation is fairly obvious: a menu bar, a toolbar (for common operations such as opening, running, or stopping a sketch, and exporting a sketch to create a standalone program), a display window for the source code, a message area (for error messages and so on), and a console for displaying output produced when the sketch makes calls to the built-in println() function. Clicking the "run" button opens a separate window that displays the graphical output of the sketch.

A sample program

A simple sketch illustrates some of the basics of Processing. We'll break the sketch into pieces for the purposes of explaining it.

As noted before, Processing follows Java in its syntax. The first line is a comment, and the next lines declare a constant and a (global) variable.

    // dots_v1.pde -- draw a multicolored mouse trail
 
    final int bgcolor = 50;              // Color for canvas background
    int strokeweight = 20;               // Weight of drawing stroke

Within a sketch, the programmer can define certain functions that are automatically invoked in response to various events. For example, the setup() function is invoked once at the start of execution of a sketch. Among the typical tasks performed here are initializations of graphics features. At a minimum, there would normally be a call to size() to set the size of the drawing canvas. In addition, this setup() function displays some simple help text explaining how to operate this sketch's graphical interface.

    void setup() {         // Executed once
      size(400, 400);                    // Canvas size
      smooth();                          // Enable antialiasing                  
      background(bgcolor);               // Set background colour for canvas
                                         // (RGB levels all the same, thus grey)
      strokeWeight(strokeweight);        // Set thickness of drawing stroke
 
      textSize(20);
      textAlign(CENTER);
      text("Click the mouse here to start", 200, 150);
      text("Space bar: clear canvas", 200, 200);
      text("Left/right arrows: change stroke size", 200, 250);
    }

The draw() function is another standard function that, if defined by the programmer, is called continuously during the execution of the sketch. (Execution of the draw() function can be disabled and enabled using the built-in noLoop() and loop() functions, and the frequency with which it is executed can be controlled using the frameRate() function.) The draw() function below tests whether the window has the focus, and if so, draws a point at the current mouse location (mouseX and mouseY are built-in variables that report the current mouse location) using the current drawing stroke color and weight. Depending on the stroke weight, the "points" will be circles of varying size.

    void draw() {          // Executed continuously
      if (focused)                       // If we have mouse focus
        point(mouseX, mouseY);
    }

The next three functions are automatically invoked by the Processing framework in response to events. The first of these functions, invoked when any character is typed, clears the canvas if the space character is typed. The second function randomly changes the stroke color as the mouse moves. The third function clears the help message displayed by the setup() function, by redrawing the canvas background when the user first clicks the mouse in the display window.

    void keyTyped() {      // Executed when a key is typed
      if (key == ' ')                    // Space bar clears the canvas
        background(bgcolor);
    }
 
    void mouseMoved() {    // Executed on mouse moves
      stroke(random(255), random(255), random(255));
    }
 
    boolean firstClick = true;
 
    void mouseClicked() {  // Executed for click of any mouse button
      if (firstClick) {
        firstClick = false;
        background(bgcolor);             // Clear initial help message
      }
    }

The final function, keyPressed(), is also invoked automatically by Processing in response to key presses. This function, rather than keyTyped(), must be used to catch composed key sequences such as arrow keys. Processing generates three types of event for the keyboard: key pressed, key typed, and key released. All three of these are generated for keys that generate single characters, but only the first and the last are generated for composed key sequences and presses of modifier keys such as the Shift key. In this sketch, the purpose of the keyPressed() function is to decrease and increase the stroke size when the left and right arrow keys are pressed.

    void keyPressed() {    // Executed on any keyboard key press
      if (key == CODED) {           // Is it a key sequence?
        if (keyCode == LEFT && strokeweight > 1) {           // Left arrow
          strokeweight--;
          strokeWeight(strokeweight);
        } else if (keyCode == RIGHT && strokeweight < 50) {  // Right arrow
          strokeweight++;
          strokeWeight(strokeweight);
        }
      }
    }

(Complete source code of the example sketch can be found here.)

By default, the PDE will save sketches in subdirectories under $HOME/sketchbook, as files with the extension .pde.

Processing.js

Processing.js is a separate project that provides a port of Processing to JavaScript: the Processing.js parser converts Processing code to JavaScript. What this means is that a Processing program can be run inside a web browser, with its output rendered to an HTML <canvas> element.

To use Processing.js, one first downloads the Processing.js JavaScript file. Then, it's simply a matter of creating a web page that embeds the script and includes a <canvas> element that specifies the Processing sketch to run. For example, the canvas shown below is rendering the output of the sketch shown above. To do this, the page embeds the following HTML:

    <script src="https://static.lwn.net/images/2012/processing/processing-1.4.1.min.js"></script>
    <canvas data-processing-sources="/images/2012/processing/dots_v1.pde"></canvas>

The sketch can be run by following the instructions shown in the canvas.

If your browser supports WebGL, then you should be able to run the example on this page within Processing.js to see a demonstration of some basic 3D animation in Processing.

Processing today and tomorrow

Although the Processing project started in 2001, the 1.0 release of the language wasn't until November 2008. That was followed by a 1.1 release in March 2010, 1.2 in July 2010, and the current stable 1.5 release in April 2011. Although originally planned for the second half of 2011, Processing 2.0 still has not yet been released, and there appears to be no firm release date. Nevertheless, there has been a steady stream of Alpha and Beta releases, with the most recent Beta 5 being released on October 22.

Processing 2.0 contains a number of (backward-incompatible) API changes, and replaces older 2D and 3D renderers with variants of the OpenGL renderer. In addition, the OpenGL library has been rewritten and made part of the core application (rather than being a separate library). The 2.0 release also includes better support for "modes"—Processing-speak for multiple language and platform support. The PDE supports three standard modes: "Java", for running applications in the "native" Java environment, "Android", for running applications in an Android emulator, and "JavaScript", which allows a sketch to be run inside a browser using Processing.js. The Processing wiki provides a summary of the significant changes in Processing 2.0.

Since its inception, Processing has become steadily more popular. No doubt, this springs in part from the relative simplicity of the language, coupled with the "immediate gratification" aspect provided by its graphical interface. However, what is also striking is just how much good documentation there is for the language: a language reference, a wiki, and tutorials and example programs (with many more examples in the downloaded Processing environment). An impressive number of books attests to the language's popularity. Doubtless, the extensive documentation has contributed to the growing usage of the language. Further promotion for the language exists in the form of an extensive gallery of quite diverse (mainly visual and artistic) projects that employ Processing (for example, this video by one of the founders of the Processing project provides a good showcase of what can be done with the language). As a consequence, although Processing was originally designed as a teaching language for non-programmers in the visual arts, it has developed into a capable and widely used production tool for animation, data analysis and visualization, and art.

Comments (2 posted)

Brief items

Quotes of the week

Failure is not an option, but let's call it our product pivot strategy.
-- Mike Linksvayer

Tell the user he is stupid instead of having him debug the source code to find out.
-- Hans Baier (Hat tip to Jörn Nettingsmeier)

Comments (none posted)

Motif relicensed

The venerable Motif graphical toolkit has been relicensed to LGPLv2.1; the code is now hosted on SourceForge. Most of the world is unlikely to care much, but, as they say, better 20 years too late than never.

Comments (33 posted)

Tryton 2.6 available

Version 2.6 of the Tryton application framework is available. There is one major API change, the introduction of "Active Records," a pattern designed to simplify the codebase and unify multiple methods of accessing the value of a record. Other bug fixes and GUI improvements came along for the ride, though.

Full Story (comments: none)

Rakudo Star 2012.10 released

Version 2012.10 of the Rakudo Perl distribution is available. Rakudo includes a Perl 6 compiler, plus the Parrot virtual machine and additional modules from the Perl community. Most of the changes in this release are due to changes in the Perl 6 specification itself, although bugfixes and other enhancements are included as well.

Full Story (comments: none)

CTDB 2.0 released

Version 2.0 of CTDB, the cluster-oriented version of the TDB database used by Samba (among other projects), is available. New is support for read-only records, policy routing, new test infrastructure, and a locking API designed to prevent deadlocks between CTDB and Samba.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Knight-Mozilla's Opened Captions

At his blog, Dan Schultz writes about Opened Captions, his project that taps into closed-caption text from TV broadcasts and converts it to a usable data feed. "The Internet is filled with real-time updates triggered by online activity, but it still feels like magic when we see automatic updates driven by the real world. Opened Captions makes it easy for programmers to use live TV transcripts as an input." Schultz is a 2012 Knight-Mozilla Fellow; the program pairs developers with journalists. Opened Captions initially supports just the US government-access channel C-SPAN, but is extensible.

Comments (none posted)

Haley: We're doing an ARM64 OpenJDK port!

We're a bit late in noticing, but Andrew Haley has announced plans to develop a free Java for 64-bit ARM systems. "There are two versions of HotSpot, the VM that OpenJDK uses, for the current (32-bit) ARM. The one written and owned by Oracle is proprietary, and the other (written for ARM) is free. The proprietary VM performs better than the free one. The latter uses a lightweight small-footprint just-in-time compiler that can't compete with the might of Oracle's JIT. We really don't want this situation for A64, so we're writing a port that will be entirely free software. We'll submit this as an OpenJDK project, and we hope that others will join us in the work."

Comments (134 posted)

Page editor: Nathan Willis

Announcements

Articles of interest

Jailbreaking now legal under DMCA for smartphones, but not tablets (ars technica)

Ars technica summarizes the latest round of the three-year DMCA exemption process. "The new batch of exemptions illustrate the fundamentally arbitrary nature of the DMCA's exemption process. For the next three years, you'll be allowed to jailbreak smartphones but not tablet computers. You'll be able to unlock phones purchased before January 2013 but not phones purchased after that. It will be legal to rip DVDs to use an excerpt in a documentary, but not to play it on your iPad."

Comments (26 posted)

EFF: Privacy in Ubuntu 12.10: Amazon Ads and Data Leaks

The Electronic Frontier Foundation expresses privacy concerns with the new internet search added to Ubuntu 12.10. "It's a major privacy problem if you can't find things on your own computer without broadcasting what you're looking for to the world. You could be searching for the latest version of your résumé at work because you're considering leaving your job; you could be searching for a domestic abuse hotline PDF you downloaded, or legal documents about filing for divorce; maybe you're looking for documents with file names that will give away trade secrets or activism plans; or you could be searching for a file in your own local porn collection. There are many reasons why you wouldn't want any of these search queries to leave your computer." The article also includes instructions to opt-out.

Comments (8 posted)

Rare photos: gnu crashes Windows 8 launch

FSF volunteer Tristan Chambers took pictures of a gnu at the Windows 8 launch in New York. "Reporters and security guards at the event weren't sure how to react when they were greeted by a real, live gnu. The gnu -- which, on closer inspection, was an activist in a gnu suit -- had come for some early trick-or-treating. But instead of candy, she had free software for the eager journalists. The gnu and the FSF campaigns team handed out dozens of copies of Trisquel, a fully free GNU/Linux distribution, along with press releases and stickers. Once they got over their confusion, the reporters were happy to see us and hear our message -- that Windows 8 is a downgrade, not an upgrade, because it steals users' freedom, security and privacy."

Full Story (comments: 2)

New Books

The ThoughtWorks Anthology, Volume 2 -- Pragmatic Bookshelf

Pragmatic Bookshelf has released "The ThoughtWorks Anthology, Volume 2", essays on Software Technology and Innovation by various authors.

Full Story (comments: none)

Calls for Presentations

Linux Plumbers Conference Call for Organisers

The Technical Advisory Board of the Linux Foundation has issued a call for anyone interested in running Linux Plumbers Conference (LPC) 2013. "To make a successful application, you need at least a Chair (person in overall charge) a Treasurer and an Events co-ordinator." LPC 2013 is scheduled for September 18-20 in New Orleans, LA.

Full Story (comments: none)

Upcoming Events

Events: November 1, 2012 to December 31, 2012

The following event listing is taken from the LWN.net Calendar.

October 29 - November 3: PyCon DE 2012 (Leipzig, Germany)
October 29 - November 1: Ubuntu Developer Summit - R (Copenhagen, Denmark)
October 29 - November 2: Linaro Connect (Copenhagen, Denmark)
November 3 - November 4: OpenFest 2012 (Sofia, Bulgaria)
November 3 - November 4: MeetBSD California 2012 (Sunnyvale, California, USA)
November 5 - November 9: Apache OpenOffice Conference-Within-a-Conference (Sinsheim, Germany)
November 5 - November 7: Embedded Linux Conference Europe (Barcelona, Spain)
November 5 - November 7: LinuxCon Europe (Barcelona, Spain)
November 5 - November 8: ApacheCon Europe 2012 (Sinsheim, Germany)
November 7 - November 9: KVM Forum and oVirt Workshop Europe 2012 (Barcelona, Spain)
November 7 - November 8: LLVM Developers' Meeting (San Jose, CA, USA)
November 8: NLUUG Fall Conference 2012 (ReeHorst in Ede, Netherlands)
November 9 - November 11: Free Society Conference and Nordic Summit (Göteborg, Sweden)
November 9 - November 11: Mozilla Festival (London, England)
November 9 - November 11: Python Conference - Canada (Toronto, ON, Canada)
November 10 - November 16: SC12 (Salt Lake City, UT, USA)
November 12 - November 17: PyCon Argentina 2012 (Buenos Aires, Argentina)
November 12 - November 14: Qt Developers Days (Berlin, Germany)
November 12 - November 16: 19th Annual Tcl/Tk Conference (Chicago, IL, USA)
November 16: PyHPC 2012 (Salt Lake City, UT, USA)
November 16 - November 19: Linux Color Management Hackfest 2012 (Brno, Czech Republic)
November 20 - November 24: 8th Brazilian Python Conference (Rio de Janeiro, Brazil)
November 24: London Perl Workshop 2012 (London, UK)
November 24 - November 25: Mini Debian Conference in Paris (Paris, France)
November 26 - November 28: Computer Art Congress 3 (Paris, France)
November 29 - November 30: Lua Workshop 2012 (Reston, VA, USA)
November 29 - December 1: FOSS.IN/2012 (Bangalore, India)
November 30 - December 2: Open Hard- and Software Workshop 2012 (Garching bei München, Germany)
November 30 - December 2: CloudStack Collaboration Conference (Las Vegas, NV, USA)
December 1 - December 2: Konferensi BlankOn #4 (Bogor, Indonesia)
December 2: Foswiki Association General Assembly (online and Dublin, Ireland)
December 5: 4th UK Manycore Computing Conference (Bristol, UK)
December 5 - December 7: Open Source Developers Conference Sydney 2012 (Sydney, Australia)
December 5 - December 7: Qt Developers Days 2012 North America (Santa Clara, CA, USA)
December 7 - December 9: CISSE 12 (Everywhere, Internet)
December 9 - December 14: 26th Large Installation System Administration Conference (San Diego, CA, USA)
December 27 - December 29: SciPy India 2012 (IIT Bombay, India)
December 27 - December 30: 29th Chaos Communication Congress (Hamburg, Germany)
December 28 - December 30: Exceptionally Hard & Soft Meeting 2012 (Berlin, Germany)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds