
LWN.net Weekly Edition for May 10, 2012

Who owns your data?

By Jake Edge
May 9, 2012

The Economist is concerned that our "digital heritage" may be lost because the formats (or media) may be unreadable in, say, 20 years' time. The problem is complicated by digital rights management (DRM), of course, and the magazine is spot on with its suggestion that circumventing those restrictions is needed to protect that heritage. But in calling for more regulation (not a usual Economist stance), the magazine misses one of the most important ways that digital formats can be future-proofed: free and open data standards.

DRM is certainly a problem, but a bigger problem may well be the formats in which much of our digital data is stored. The vast majority of that data is not stored in DRM-encumbered formats; it is, instead, stored in "secret" data formats. Proprietary software vendors are rather fond of creating their own formats, updating them with some frequency, and allowing older versions to (surprise!) become unsupported. If users of those formats are not paying attention, documents and other data from just a few years ago can sometimes become unreadable.

There are few advantages to users from closed formats, but there are several for the vendors involved, of course. Lock-in and the income stream from what become "forced" upgrades are two of the biggest reasons that vendors continue with their "secret sauce" formats. But it is rather surprising that users (businesses and governments in particular) haven't rebelled. How did we get to a point where we will pay for the "privilege" of having a vendor take our data and lock it up such that we have to pay them, again and again, to access it?

There is a cost associated with documenting a data format, so the proprietary vendors would undoubtedly cite that cost as leading to higher purchase prices. But that's largely disingenuous. In many cases, there are existing formats (e.g. ODF, PNG, SVG, HTML, EPUB, ...) that could be used, or new ones that could be developed. The easiest way to "document" a format is to release code—not binaries—that can read it, but that defeats much of the purpose of using proprietary formats in the first place, so it's not something that most vendors are willing to do.

Obviously, free software fits the bill nicely here. Not only is code available to read the format, but the code that writes the format is there as well. While documentation that specifies all of the different values, flags, corner cases, and so on would be welcome, being able to look at the code that actually does the work will ensure that data saved in that format can be read for years (centuries?) to come. As long as the bits that make up the data can be retrieved from the storage medium, and quantum computers running Ubuntu 37.04 ("Magnificent Mastodon") can still be programmed, the data will still be accessible. There may even be a few C/C++ programmers still around who can be lured out of retirement to help—if they aren't all busy solving the 2038 problem, anyway.

More seriously, though, maintaining access to digital data will require some attention. Storage device technology continues to evolve, and there are limits on the lifetime of the media itself. CDs, DVDs, hard drives, tapes, flash, and so on will all need refreshing from time to time. Moving archives from one medium to another is costly enough; why add potentially lossy format conversions and the cost of upgrading software to read the data—if said software is even still available?

Proprietary vendors come and go; their formats right along with them. Trying to read a Microsoft Word document from 20 years ago is likely to be an exercise in frustration, but trying to read a Windows 3.0 WordStar document will be far worse. There are ways to do so, of course, but they are painful—if one can even track down a 3.5" floppy drive (not to mention 5.25"). If the original software is still available somewhere (e.g. eBay, backup floppies, ...) then it may be possible to use emulators to run the original program, but that still may not help with getting the data into a supported format.

Amusingly, free software often supports older formats far longer than the vendors do. While the results are often imperfect, reverse engineering proprietary data formats is a time-honored tradition in our communities. Once that's been done, there's little reason not to keep supporting the old format. That's not to say that older formats don't fall off the list at times, but the code is still out there for those who need it.

As internet services come and go, there will also be issues with preserving data from those sources. Much of it is stored in free software databases, though that may make little difference if there is no access to the raw data. In addition, the database schema and how it relates articles, comments, status updates, wall postings, and so on, is probably not available either. If some day Facebook, Google+, Twitter, Picasa, or any of the other proprietary services goes away—perhaps with little or no warning—that data may well be lost to the ages too. Some might argue that the majority of it should be lost, but some of it certainly qualifies as part of our digital heritage.

Beyond the social networks and their ilk, there are a huge number of news and information sites with relevant data locked away on their servers. Data from things like the New York Times (or Wall Street Journal), Boing Boing and other blogs, the article from The Economist linked above, the articles and comments here at LWN, and thousands (perhaps millions) more, are all things that one might like to preserve. The Internet Archive can only do so much.

Solutions for data from internet sites are tricky, since the data is closely held by the services and there are serious privacy considerations for some of it. But some way to archive at least part of that data is needed. By the time the service or site itself is on the ropes, it may well be too late.

Users should think long and hard before they lock up their long-term data in closed formats. While yesterday's email may not be all that important (maybe), that unfinished novel, last will and testament, or financial records from the '80s may well be. Beyond that, shareholders and taxpayers should be pressuring businesses and governments to store their documents in open formats. In the best-case scenario, it will just cost more money to deal with old, closed-format data; in the worst case, after enough time passes, there may be no economically plausible way to retrieve it. That is something worth avoiding.

Comments (46 posted)

TizenConf: Pitching HTML5 as a development framework

By Nathan Willis
May 9, 2012

The Tizen Project has considerable technical history on its side, as it is the successor to the well-known Moblin, MeeGo, and LiMo projects. Yet in a way that pedigree also works against it, as the project makes its pitch to third-party application developers who have seen the aforementioned predecessors come and go — sometimes first-hand. At the first Tizen Developer Conference in San Francisco, the project worked hard to establish its "developer story" — in particular highlighting the broader support from industry players and the stability of HTML5 and related open web specifications as a development platform.

The industry

In Tuesday's keynote sessions, Intel's Imad Sousou and Samsung's J.D. Choi took a quick tour through the platform as exposed to application developers (a detailed examination was reserved for the break-out sessions); the project defines a Web API that uses the World Wide Web Consortium (W3C)'s packaging and configuration format, and "custom" APIs for accessing contact data, NFC, Bluetooth, and other subsystems. They then went deeper into three specific areas of the stack: security, animation, and connection management.

[Imad Sousou]

The security framework is based on Smack, which Sousou described as being preferable to other Linux alternatives that required "setting up 8,000 policy files". The platform also provides integrity protection by checking application signatures at install time, and isolates each application in its own process (although he did not go into specifics, Sousou described the setup as less complicated than the "draconian" measures taken by other platforms).

The animation framework is based on OpenGL ES and the Emotion scene graph library provided by the Enlightenment Foundation Libraries (EFL), LiMo's underlying application framework. Connection management is handled by ConnMan, which Sousou announced had finally been declared 1.0. The project has worked on reducing ConnMan's overhead in the past three years, specifically for mobile devices, where the typical 2-3 second DHCP configuration time is a deal-breaker for users. The enhanced ConnMan now performs DHCP setup in milliseconds.

Several points in Sousou and Choi's talk about the architecture drew contrasts with other mobile platforms — primarily Android and the latest Blackberry offering. The point they made was that Tizen is open to input on the design from anyone willing to join the project and contribute — which is hardly the case, they suggested, for Android.

They also used their time to discuss the distinction between the Tizen Project and the Tizen Association. The project is the actual open source software project, which is led by a technical steering group (headed by Sousou and Choi), and at this stage largely developed by full-time employees from the two companies, plus smaller partners. In contrast, the Tizen Association is the marketing group that works to sell Tizen as a solution to OEM device makers, carriers, third-party application vendors, and any other industry customers. In addition to marketing the project to industry players, though, the Association also attempts to gather their requirements for an OS platform.

The next keynote was presented by Kiyohito Nagata, chairman of the Tizen Association. Nagata is also senior vice-president of NTT Docomo, Japan's largest wireless carrier. He talked about Docomo's research into users' demands for smartphone devices, making the case that Tizen offers carriers the flexibility to implement their own application stores and custom services—across a range of devices. Again, this aspect of Tizen was placed in contrast to the competition.

Nagata ended his talk by discussing the board membership of the Tizen Association, which includes other large mobile phone carriers—notably Orange, Telefónica, SK Telecom, and Sprint. Tizen is marketing itself as a cross-device platform, serving in-vehicle systems (IVI), set-top boxes, tablets, and smartphones. That list is identical to MeeGo's target platforms, of course, but, as with MeeGo, the vast majority of the talks (including the keynotes) and the current work of the Tizen Association centered on handsets.

The web

Buy-in from mobile carriers is a plus, but carriers are ultimately interested in attracting third-party applications in order to make their plans appealing. Tizen's case as a development platform comes down to its HTML5-based API, which was the subject of numerous breakout sessions at the conference: from the overall API to specific components (e.g., graphics, I/O, NFC, and Bluetooth).

Intel's Sakari Poussa and Samsung's Taehee Lee led a breakout session that covered the overall Web API suite. As we covered when we looked at the SDK in January, a significant chunk of the Web API is drawn from existing work spearheaded by the W3C. But there are other APIs, some exploring ways to expose mobile device functionality to web applications (for example, the ability to lock the screen rotation into landscape mode, which is reportedly of interest to game developers), others defining new general-purpose functionality like mapping-and-routing. The Tizen APIs also cover system-maintenance tasks, such as application installation, update and removal, and creating and managing user accounts for online services.

The bigger news, however, was Sousou's announcement that the Tizen project is working with the W3C to develop these "missing piece" APIs into general standards. The project wants them to be standard APIs, not "Tizen APIs," he said. In particular, Tizen is part of the W3C's new Core Mobile Web Platform Group, and Tizen is committed to adhering to the standard, whatever decisions the working group makes.

Of course, standards are just words, and many developers have heard the "write once, run anywhere" song multiple times. The "Advanced HTML5 Features" session dealt with that question specifically, arguing that the web has always been a fragmented platform, but that web development has evolved to cope with varying implementation details on desktop browsers, and has done so better than most other development platforms.

If that seems like a mild assurance, Facebook's head of mobile developer relations James Pearce was on hand to offer a more concrete testing tool, the company's new compliance tester RingMark. RingMark defines three levels (or, to be more precise, "rings") of compatibility: Ring 0 covers the status quo of existing W3C device APIs; Ring 1 covers "aspirational" extensions to Ring 0, including audio/video and other high-performance tasks that are currently the domain of native APIs on most platforms; and Ring 2 covers the still-in-development suite of web APIs for the future, such as WebGL.

Attendees in several of the sessions I sat in on expressed interest in Tizen's compliance program. Although Tizen so far has no formal compliance plan, it was made clear that compliance will be assessed based on a product's adherence to the API. That makes for a stark contrast against MeeGo, which demanded specific versions of specific libraries and Linux system components — a requirements set that ultimately proved too arduous for even MeeGo co-founder Nokia to pass with its N9 phone.

The future

The project, then, is making its case as an HTML5-based development platform; the next question is how it will be received by the developer community. One independent developer I talked to (who requested anonymity) expressed his doubts that HTML5 scales up to industrial devices and serious applications; he cited medical tablets among other possible upscale device classes. Most of the speakers addressed JavaScript performance and latency as points needing work in HTML5 applications, although as you might expect, most also said they were pleased with Tizen's performance.

There were a handful of companies present who are already developing applications on Tizen. Cell phone carrier Orange was among them, and presented a session on its experiences. The team from Orange has deployed HTML5 applications for news, movie ticket offers, and streaming TV, and has built enhanced user-information tools, integrating items like data and SMS counters into the phone UI.

Tizen's community manager Dawn Foster dealt with the outreach question in her state-of-the-community talk on Tuesday. In brief, the Tizen community at the moment is small, considerably smaller than the MeeGo community was, with fewer volunteer contributors joining the paid developers from Intel and Samsung. But that is to be expected, she said, primarily because it is hard to build excitement about a platform before consumer devices are available. On that front, she added, Tizen is trying to take a different approach, by underplaying the hype of the platform and "letting the code lead". Likewise, while MeeGo established a complicated working group structure at the outset, well before any code was delivered, Tizen's project structure is intentionally loose at this stage.

Perhaps that "release-first" strategy will also help deal with the other hurdle facing Tizen, developer burnout among veterans of the earlier projects in Tizen's lineage. Fundamentally, burnout with platform-switching may be one of the reasons Tizen is pressing so hard on the HTML5 front at the moment. Whatever else developers may think of HTML5, it is at least a platform-neutral approach to application development. The keynotes talked of more options still-to-come in the Tizen 2.0 release currently scheduled for the end of 2012 — for example, the Emotion animation framework mentioned by Choi. But at least for now, HTML5 and the web APIs remain the sole story for application developers.

Intel and Samsung are both ramping up their outreach to those developers. Intel is running an application developer contest, while Samsung distributed mobile developer devices to registered attendees. Foster also highlighted two tools to develop HTML5 applications that are designed to be lighter-weight than the full Tizen SDK: the Rapid Interface Builder (RIB) and Web Simulator. The contest runs until August — which is plenty of time for developers to explore the code base. As of May 9, however, there had still not been any consumer device announcements.

It is understandable that independent developers might be wary of Tizen given how recently they were being told about MeeGo. Ultimately no trick can undo that wariness; the only remedy will be to see the project grow in its own right and earn its own place. There are some key differences already—fairly or not, MeeGo was always perceived largely as a Nokia-only party without much connection to the all-important phone carrier industry, while Tizen has a longer list of mobile partners on board. MeeGo also presented potential contributors with a top-heavy compliance process and byzantine project structure, all well before there was any code to examine. With Tizen, however a developer feels about the commercial parties behind the scenes, there is code to see and an API that exists outside the project itself, both of which are in the "plus" column.

[ The author would like to thank the Tizen project and the Linux Foundation for support to attend the conference. ]

Comments (15 posted)

Accounting systems: a rant and a quest

By Jonathan Corbet
May 8, 2012
Attentive long-time readers of LWN may remember that this business is based entirely on free software with one distressing exception: our business accounting is still done using the proprietary "QuickBooks Pro" package. QuickBooks does not lack for aggravations, but the task of replacing it has never quite attained a high enough priority for something to actually happen. Good replacements in the free software community are hard to come by, accounting is boring, our accountant deals easily (and cheaply) with QuickBooks files, and the existing solution, for the most part, simply works. Or, at least, it used to simply work.

The monthly accounting ritual involves importing a lot of data from the web site into the accounting application; in particular, subscription sales need to be properly fed in so that we can minimize our taxes on the income in the proper American tradition. This process normally works just fine, but, recently, it failed, saying: "Cannot import, not enough disk space or too many records exist." Naturally, in QuickBooks style, it failed partway through the import process, leaving a corrupted accounting file behind. But QuickBooks users usually learn to make backups frequently and can take such things in stride.

The inability to feed data into the system is a little harder to take in stride, though, especially once some investigation proved that disk space is not in short supply and the failure is elsewhere. It didn't take much time searching to turn up an interesting, unadvertised QuickBooks antifeature: there is a software-imposed limit of 14,500 "list items," which include products offered by the company, vendors, customers, and more. Once that limit is hit, QuickBooks will not allow any more items to be entered; the only supported way out is to upgrade to the "enterprise" version, which can currently be done for a special offer price of only $2400.

In other words: Intuit sells a program that is intended to become an integral part of a business's core processes, perhaps even functioning as a point-of-sale system. This program will, without warning, simply cease to function once the business accumulates an arbitrary number of entries. The only way for that business to get a working accounting system back is to "upgrade" to a new version that costs ten times as much. One can only conclude that this proprietary software package has not been written with its users' needs as the top priority. Instead, it contains a hidden trap to force them into more expensive offerings at a time when they may have little alternative. Who would have ever thought proprietary programs could be that way?

Here at LWN, we had no particularly urgent need to get things working again; other businesses may well not have the luxury of enough time to find an acceptable way out of this situation. It is, thus, unsurprising that there are entire businesses being built around this little surprise from Intuit. Needless to say, there is little enthusiasm in the LWN head office for the purchase of an expensive and proprietary "enterprise" accounting system. In the short term, a workaround has been found: sacrifice most of our accounting history to bring the record count to a level where the program will consent to function as advertised. That has other interesting side effects, like mysteriously changing the balances of reconciled accounts from previous years, but it does take the immediate pressure off. For now, we can continue to do our books.

But a clear message has been delivered here: it is about time that we at LWN read some pages from our own publication and realize that a dependence on proprietary software poses a real risk to our business. A company that is willing to put one such hostile surprise into an important application will put in others and, without the source, there is no way anybody can look for them or remove them if they are found. QuickBooks is too risky to continue to use.

It is, in other words, time to make the move to a free accounting program.

When we have looked at the available tools in the past, the results have always been a little disappointing. There is no shortage of software that can maintain a chart of accounts and a set of double-entry books. But there has been, in the past, a relative scarcity of useful accounting tools for small businesses. Instead, what's out there is:

  • Various personal finance utilities, including GnuCash, KMyMoney, and others. For basic accounting they work well, but they fall short of a business's needs.

  • Massive enterprise-oriented toolkits that can be used to build systems implementing accounting, inventory-tracking, point-of-sale, customer relationship management, supply-chain management, human resources, and invoicing, with add-on modules for bill collection, weather prediction, automated trading, and bread baking. These systems have names like ADempiere, Compiere, OpenERP, LedgerSMB, and Apache OFBiz. The target users for these projects appear to be consultants and businesses with full-time people dedicated to keeping the system running. To a business like LWN, they tend to look like a box with hundreds of nearly identical parts and a little note saying "some assembly required."

What is missing in the middle is a package for a business with no special accounting needs, but which needs to be able to automate data entry, generate tax forms at the end of the year, and interface with an accountant so it can get its taxes done. Given how incredibly exciting small-business accounting is, it's surprising that so few developers have felt a burning need to scratch that particular itch. There is no accounting for taste, it seems.

That said, it has been a few years since we last made a serious effort to learn about free software accounting alternatives; clearly the time has come for another pass. So we'll be doing it, with an eye toward, hopefully, making the transition at the end of the calendar year. That gives us several months to forget about the problem while still allowing a few months of panic at the end, so the schedule should be plausible.

Stay tuned for updates; it should be an interesting ride. But we are pretty well determined not to find out what other surprises our proprietary accounting system may have in store for us. In 2012, it should be possible to run a small, simple business on free software and never have to wonder when the accounting system will stop functioning and demand more money. We intend to prove it.

Comments (84 posted)

Page editor: Jonathan Corbet

Security

Internet censorship and OONI

By Jake Edge
May 9, 2012

Internet "censorship" is often associated with repressive governments filtering the traffic of their citizens, but it goes well beyond that. Internet service providers sometimes filter—or alter—the traffic that they carry, companies restrict employees based on keywords and URLs, courts naïvely order certain URLs to be blocked, and so on. But it is difficult for any particular internet user to know just what it is they can't get at. That problem is what the Tor Open Observatory of Network Interference (OONI) project is hoping to help solve.

The overall goal for the OONI project is "to collect data which shows an accurate representation of network interference on the Filternet we call the internet", according to the web site. One obvious, though time consuming, way to do that is to gather information from multiple different "locations" on the internet, and that is what OONI has set out to do. Of course, the OONI project itself can only reach out so far, so the intent is to enlist other participants—essentially "crowdsourcing" the data collection.

There are other internet censorship tracking projects—Google's Transparency Report and Herdict for example—but the OONI project's README notes that other efforts either use a closed methodology or closed software. As befits a Tor project, though, OONI is fully open source. No top-level LICENSE file for OONI is present at the moment, but one would guess it will be similar to Tor's permissive license.

The core piece (ooni-probe) is written as a framework in Python, with an eye toward contributions of additional tests (called "plugoos") and reports. "Tests" are meant to detect censorship events by comparing the results obtained locally with some kind of experimental control. That control could be obtained via the Tor network, for example, or via some other means. The tests can use various kinds of "assets", which might include lists of URLs, IP addresses and ports, or keywords, as their input. Current tests include checking that Tor bridges are functioning, determining whether HTTP "Host" field filtering is occurring, checking for DNS tampering, doing address and port scans, detecting Squid proxies, and so on.
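
As a rough illustration of that test pattern, a DNS-tampering plugoo boils down to resolving each entry in an asset locally and comparing the answers against control data obtained out-of-band (over Tor, say). The following is a hedged sketch, not actual ooni-probe code; the asset contents, the control addresses, and the report format are all invented for the example:

    import socket

    def resolve_locally(hostname):
        """Return the set of IPv4 addresses the local resolver hands back."""
        try:
            infos = socket.getaddrinfo(hostname, 80, socket.AF_INET)
            return {info[4][0] for info in infos}
        except socket.gaierror:
            return set()    # NXDOMAIN, blocked resolver, and so on

    def dns_tampering_test(asset, control_answers):
        """Flag hosts whose local answers differ from the control data."""
        report = []
        for hostname in asset:
            local = resolve_locally(hostname)
            control = control_answers.get(hostname, set())
            # CDNs can legitimately return different addresses, so a real
            # test needs to be smarter than simple set inequality.
            if local != control:
                report.append({"host": hostname,
                               "local": sorted(local),
                               "control": sorted(control)})
        return report

    asset = ["example.com", "blocked.example.org"]
    control = {"example.com": {"192.0.43.10"},
               "blocked.example.org": {"203.0.113.5"}}
    for entry in dns_tampering_test(asset, control):
        print("possible DNS tampering:", entry)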

While there are plenty of tests that could be added, seemingly the area needing the most attention right now is the "reports". Currently, test failures are essentially just written to an unstructured text log file, which can be stored locally or uploaded to a server. Tools to interpret the data and to provide higher-level visualizations of the types and locations of internet censorship are planned.

While the OONI code is under heavy development, the project can already claim some successes. ooni-probe was used to detect blocked web sites for internet users in Bethlehem, West Bank: a scan of more than one million sites found that users are blocked from eight news sites "whose reporting is critical of [Palestinian Authority] President Mahmoud Abbas".

In addition, ooni-probe found that T-Mobile USA's Web Guard "feature" blocks access to much more than the advertised categories. In particular, sites for Tor, the Internet Archive WaybackMachine, Chinese sports news, French economics and financial news, a Japanese URL shortener, and many others, were blocked though they didn't fall into any of the listed categories: "Alcohol, Mature Content, Violence, Drugs, Pornography, Weapons, Gambling, Suicide, Guns, Hate, Tobacco, Ammunition".

OONI is just getting started, but it is clearly a welcome addition to the internet landscape. In order for John Gilmore's famous quote ("The Net interprets censorship as damage and routes around it"—which seems to be an informal slogan for OONI) to be true, the internet, or really its users and operators, must be aware of where that censorship is occurring and how it is being applied. With tools like OONI (and the others, though it's unclear why they aren't more transparent), routing around that censorship will be easier. The free flow of information on the internet depends on being able to do so.

Comments (none posted)

Brief items

Security quotes of the week

> Is chkrootkit confused?

Yes and no. It correctly detects that your /sbin/init is something hideous and nasty, but fails to realise that it's something hideous and nasty that Fedora ships 8)

-- Alan Cox

If the Order stands, Twitter will be put in the untenable position of either providing user communications and account information in response to all subpoenas or attempting to vindicate its users’ rights by moving to quash these subpoenas itself—even though Twitter will often know little or nothing about the underlying facts necessary to support their users’ argument that the subpoenas may be improper.
-- Twitter stands up for its users

As long as the Air Force pinky-swears it didn’t mean to, its drone fleet can keep tabs on the movements of Americans, far from the battlefields of Afghanistan, Pakistan or Yemen. And it can hold data on them for 90 days — studying it to see if the people it accidentally spied upon are actually legitimate targets of domestic surveillance.
-- Spencer Ackerman

An Apple programmer, apparently by accident, left a debug flag in the most recent version of the Mac OS X operating system. In specific configurations, applying OS X Lion update 10.7.3 turns on a system-wide debug log file that contains the login passwords of every user who has logged in since the update was applied. The passwords are stored in clear text.
-- Emil Protalinski

Comments (2 posted)

An important PHP security update

PHP 5.3.12 and 5.4.2 have been released to fix a nasty security hole that was disclosed somewhat sooner than planned. Essentially, it allows any remote attacker to pass command-line arguments to the PHP interpreter behind a web page—but only in the (hopefully rare) setups where PHP is invoked via the CGI mechanism. "If you are using Apache mod_cgi to run PHP you may be vulnerable. To see if you are just add ?-s to the end of any of your URLs. If you see your source code, you are vulnerable. If your site renders normally, you are not."
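
For the curious, the quoted check is easy to automate. Here is a minimal sketch; the URL and the source-detection heuristic are illustrative assumptions, not part of the official advisory:

    import urllib.request

    def looks_vulnerable(url):
        """Return True if url?-s appears to dump highlighted PHP source."""
        # A vulnerable php-cgi setup treats "-s" as the interpreter's
        # show-source switch; the highlighted output HTML-encodes the
        # opening tag, so "&lt;?php" in the body is a strong hint.
        with urllib.request.urlopen(url + "?-s") as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
        return "&lt;?php" in body

    print(looks_vulnerable("http://www.example.com/index.php"))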

Comments (12 posted)

Linux Format censored over 'Learn to Hack' feature (bit-tech)

Bit-tech reports that Barnes & Noble pulled the latest issue of Linux Format magazine because of an article featuring hacking techniques. "Issue 154 of Linux Format magazine had as its cover feature a piece entitled 'Learn to Hack,' walking readers through the use of the Metasploit Framework exploitation toolkit to gain access to computer systems running a variety of operating systems. The article also covered password cracking, network sniffing, and man-in-the-middle attacks over encrypted protocols. More importantly, the guide also covered how best to protect your systems from the self-same attacks, providing readers with information that the publication hoped would help keep them safe from the ne'er-do-wells inhabiting the seedier sides of the net." Future, Linux Format's parent company, has made the article available online.

Comments (28 posted)

New vulnerabilities

argyllcms: code execution

Package(s): argyllcms        CVE #(s): CVE-2012-1616
Created: May 7, 2012        Updated: June 19, 2012
Description: From the Red Hat bugzilla:

A Use-after-free vulnerability was found in the way icclib, a library used for reading and writing of color profile files that conform to the International Color Consortium (ICC) Profile Format Specification, processed certain crafted ICC profile files. The ICC Profile Format is a cross-platform device profile format that can be used to translate color data created on one device into another device's native color space.

A remote attacker could provide a specially crafted file and trick a local user into opening it, which could lead to arbitrary code execution with the privileges of the user running an application linked against icclib.

Alerts:
Fedora FEDORA-2012-6529 argyllcms 2012-05-04
Gentoo 201206-04 argyllcms 2012-06-18

Comments (3 posted)

asterisk: denial of service

Package(s): asterisk        CVE #(s): CVE-2012-2416
Created: May 4, 2012        Updated: May 9, 2012
Description: From the CVE entry:

chan_sip.c in the SIP channel driver in Asterisk Open Source 1.8.x before 1.8.11.1 and 10.x before 10.3.1 and Asterisk Business Edition C.3.x before C.3.7.4, when the trustrpid option is enabled, allows remote authenticated users to cause a denial of service (daemon crash) by sending a SIP UPDATE message that triggers a connected-line update attempt without an associated channel.

Alerts:
Gentoo 201206-05 asterisk 2012-06-20
Fedora FEDORA-2012-6724 asterisk 2012-05-04
Fedora FEDORA-2012-6612 asterisk 2012-05-03

Comments (none posted)

flash-player: code execution

Package(s): flash-player        CVE #(s): CVE-2012-0779
Created: May 7, 2012        Updated: May 23, 2012
Description: From the SUSE advisory:

Adobe Flash Player before 10.3.183.19 and 11.x before 11.2.202.235 on Windows, Mac OS X, and Linux; before 11.1.111.9 on Android 2.x and 3.x; and before 11.1.115.8 on Android 4.x allows remote attackers to execute arbitrary code via a crafted file, related to an "object confusion vulnerability," as exploited in the wild in May 2012.

Alerts:
Gentoo 201206-21 adobe-flash 2012-06-23
SUSE SUSE-SU-2012:0592-2 flash-player 2012-05-08
openSUSE openSUSE-SU-2012:0594-1 flash-player 2012-05-08
SUSE SUSE-SU-2012:0592-1 flash-player 2012-05-20
Red Hat RHSA-2012:0688-01 flash-plugin 2012-05-23

Comments (none posted)

horizon: multiple vulnerabilities

Package(s): horizon        CVE #(s): CVE-2012-2094 CVE-2012-2144
Created: May 7, 2012        Updated: May 9, 2012
Description: From the Ubuntu advisory:

Matthias Weckbecker discovered a cross-site scripting (XSS) vulnerability in Horizon via the log viewer refresh mechanism. If a user were tricked into viewing a specially crafted log message, a remote attacker could exploit this to modify the contents or steal confidential data within the same domain. (CVE-2012-2094)

Thomas Biege discovered a session fixation vulnerability in Horizon. An attacker could exploit this to potentially allow access to unauthorized information and capabilities. (CVE-2012-2144)

Alerts:
Ubuntu USN-1439-1 horizon 2012-05-07

Comments (none posted)

kernel: denial of service

Package(s): linux        CVE #(s): CVE-2012-2100
Created: May 8, 2012        Updated: December 19, 2012
Description: From the Ubuntu advisory:

A flaw was found in the Linux kernel's ext4 file system when mounting a corrupt filesystem. A user-assisted remote attacker could exploit this flaw to cause a denial of service.

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Oracle ELSA-2012-2048 linux 2012-12-20
Oracle ELSA-2012-2048 linux 2012-12-20
Oracle ELSA-2012-1580 kernel 2012-12-19
Scientific Linux SL-kern-20121219 kernel 2012-12-19
CentOS CESA-2012:1580 kernel 2012-12-19
Red Hat RHSA-2012:1580-01 kernel 2012-12-18
Scientific Linux SL-kern-20121114 kernel 2012-11-14
Red Hat RHSA-2012:1445-01 kernel 2012-11-13
Oracle ELSA-2012-1445 kernel 2012-11-14
Oracle ELSA-2012-1445 kernel 2012-11-14
CentOS CESA-2012:1445 kernel 2012-11-13
Ubuntu USN-1458-1 linux-ti-omap4 2012-05-31
Ubuntu USN-1440-1 linux-lts-backport-natty 2012-05-08
Ubuntu USN-1432-1 linux 2012-05-07

Comments (none posted)

mahara: insecure default/privilege escalation

Package(s): mahara        CVE #(s):
Created: May 9, 2012        Updated: May 9, 2012
Description:

From the Debian advisory:

It was discovered that Mahara, the portfolio, weblog, and resume builder, had an insecure default with regards to SAML-based authentication used with more than one SAML identity provider. Someone with control over one IdP could impersonate users from other IdP's.

Alerts:
Debian DSA-2467-1 mahara 2012-05-09

Comments (none posted)

mozilla-https-everywhere: no SSL switch for some URLs

Package(s): mozilla-https-everywhere        CVE #(s):
Created: May 3, 2012        Updated: May 9, 2012
Description:

From the Tor bug entry:

If you go to a URL such as http://www.google.com./ HTTPS-Everywhere will *not* switch to HTTPS. This is a legal DNS value, technically but not practically distinct from http://www.google.com/ and as such, it should be handled similarly.

[...] (it would allow an active attacker to perform Firesheep-style cookie stealing accounts against sites that HTTPS Everywhere protects with domain-wide redirects, if the ruleset does not also have a <securecookie> directive)
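
The underlying problem is simply that the extension's ruleset matching did not treat the fully-qualified (trailing-dot) form of a hostname as equivalent to the usual form. The fix amounts to normalizing the hostname before matching; a tiny illustration of the idea, in Python rather than the extension's actual JavaScript:

    def normalize_host(host):
        # "www.google.com." and "www.google.com" name the same host in DNS,
        # so strip the root-label dot (and fold case) before matching the
        # hostname against ruleset targets.
        return host.rstrip(".").lower()

    assert normalize_host("www.google.com.") == "www.google.com"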

Alerts:
Fedora FEDORA-2012-7136 mozilla-https-everywhere 2012-05-03
Fedora FEDORA-2012-7175 mozilla-https-everywhere 2012-05-03

Comments (none posted)

openconnect: denial of service

Package(s): openconnect        CVE #(s):
Created: May 7, 2012        Updated: May 9, 2012
Description: Version 3.18 of openconnect, a client for Cisco's "AnyConnect" VPN, fixes a potential buffer overrun when handling the greeting banner from the server. This update also fixes a potential crash when processing libproxy results.
Alerts:
Fedora FEDORA-2012-6730 openconnect 2012-05-04
Fedora FEDORA-2012-6758 openconnect 2012-05-04

Comments (none posted)

php: code execution

Package(s): php5        CVE #(s): CVE-2012-2311 CVE-2012-1823
Created: May 7, 2012        Updated: July 2, 2012
Description: From the Ubuntu advisory:

It was discovered that PHP, when used as a stand alone CGI processor for the Apache Web Server, did not properly parse and filter query strings. This could allow a remote attacker to execute arbitrary code running with the privilege of the web server. Configurations using mod_php5 and FastCGI were not vulnerable.

Alerts:
SUSE SUSE-SU-2013:1351-1 PHP5 2013-08-16
Gentoo 201209-03 php 2012-09-23
CentOS CESA-2012:1046 php 2012-07-10
Scientific Linux SL-php-20120709 php 2012-07-09
Scientific Linux SL-php5-20120705 php53 2012-07-05
Scientific Linux SL-php-20120705 php 2012-07-05
Oracle ELSA-2012-1046 php 2012-06-30
Oracle ELSA-2012-1047 php53 2012-06-28
Oracle ELSA-2012-1045 php 2012-06-28
CentOS CESA-2012:1047 php53 2012-06-27
CentOS CESA-2012:1045 php 2012-06-27
Red Hat RHSA-2012:1047-01 php53 2012-06-27
Red Hat RHSA-2012:1046-01 php 2012-06-27
Red Hat RHSA-2012:1045-01 php 2012-06-27
Fedora FEDORA-2012-7567 php-eaccelerator 2012-05-27
Red Hat RHSA-2012:0570-01 php 2012-05-11
SUSE SUSE-SU-2012:0604-1 PHP5 2012-05-09
Red Hat RHSA-2012:0569-01 php53 2012-05-10
Red Hat RHSA-2012:0568-01 php 2012-05-10
Fedora FEDORA-2012-7586 php-eaccelerator 2012-05-27
Fedora FEDORA-2012-7567 php 2012-05-27
Fedora FEDORA-2012-7586 php 2012-05-27
Mandriva MDVSA-2012:071 php 2012-05-10
Mandriva MDVSA-2012:068-1 php 2012-05-10
SUSE SUSE-SU-2012:0598-2 PHP5 2012-05-09
SUSE SUSE-SU-2012:0598-1 PHP5 2012-05-09
Oracle ELSA-2012-0547 php53 2012-05-08
Debian DSA-2465-1 php5 2012-05-09
Oracle ELSA-2012-0546 php 2012-05-08
Oracle ELSA-2012-0546 php 2012-05-08
Scientific Linux SL-php5-20120508 php53 2012-05-08
Scientific Linux SL-php-20120508 php 2012-05-08
CentOS CESA-2012:0547 php53 2012-05-07
CentOS CESA-2012:0546 php 2012-05-07
CentOS CESA-2012:0546 php 2012-05-07
Red Hat RHSA-2012:0547-01 php53 2012-05-07
Red Hat RHSA-2012:0546-01 php 2012-05-07
openSUSE openSUSE-SU-2012:0590-1 php5 2012-05-07
Ubuntu USN-1437-1 php5 2012-05-04
Fedora FEDORA-2012-7567 maniadrive 2012-05-27
Fedora FEDORA-2012-7586 maniadrive 2012-05-27

Comments (none posted)

python3: multiple vulnerabilities

Package(s): python3        CVE #(s): CVE-2012-1150 CVE-2012-0845 CVE-2011-3389
Created: May 3, 2012        Updated: November 12, 2014
Description:

From the Fedora advisory:

Bug #750555 - CVE-2012-1150 python: hash table collisions CPU usage DoS (oCERT-2011-003) https://bugzilla.redhat.com/show_bug.cgi?id=750555

Bug #789790 - CVE-2012-0845 python: SimpleXMLRPCServer CPU usage DoS via malformed XML-RPC request https://bugzilla.redhat.com/show_bug.cgi?id=789790

Bug #812068 - python: SSL CBC IV vulnerability (CVE-2011-3389, BEAST) https://bugzilla.redhat.com/show_bug.cgi?id=812068

Alerts:
Fedora FEDORA-2014-13777 Pound 2014-11-12
Gentoo 201401-04 python 2014-01-07
Mandriva MDVSA-2013:037 fetchmail 2013-04-05
Gentoo 201301-01 firefox 2013-01-07
Ubuntu USN-1615-1 python3.2 2012-10-23
Ubuntu USN-1613-1 python2.5 2012-10-17
Ubuntu USN-1613-2 python2.4 2012-10-17
Ubuntu USN-1616-1 python3.1 2012-10-24
Ubuntu USN-1596-1 python2.6 2012-10-04
Ubuntu USN-1592-1 python2.7 2012-10-02
Mandriva MDVSA-2012:149 fetchmail 2012-09-01
Mageia MGASA-2012-0169 python 2012-07-19
Mandriva MDVSA-2012:096-1 python 2012-07-02
Mandriva MDVSA-2012:096 python 2012-06-20
Mandriva MDVSA-2012:097 python 2012-06-20
CentOS CESA-2012:0744 python 2012-06-18
Scientific Linux SL-pyth-20120618 python 2012-06-18
CentOS CESA-2012:0745 python 2012-06-18
Red Hat RHSA-2012:0745-01 python 2012-06-18
openSUSE openSUSE-SU-2012:0667-1 python 2012-05-30
Fedora FEDORA-2012-5924 python-docs 2012-05-06
Fedora FEDORA-2012-5924 python 2012-05-06
Fedora FEDORA-2012-5916 python3 2012-05-03
Fedora FEDORA-2012-9135 python3 2012-06-19
Red Hat RHSA-2012:0744-01 python 2012-06-18
Oracle ELSA-2012-0745 python 2012-06-19
Oracle ELSA-2012-0744 python 2012-06-19
Scientific Linux SL-pyth-20120618 python 2012-06-18

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.4-rc6, released on May 6. "Another week, another -rc - and I think we're getting close to final 3.4. So please do test."

Stable updates: the 3.0.31 and 3.3.5 updates were released on May 7 with the usual pile of important fixes.

The 3.2.17 update, with 167 fixes, is in the review process as of this writing; it can be expected on or after May 11.

Comments (2 posted)

Quotes of the week

So [KERN_CONT] is like a defibrillator: it is good to *have* one, but it's really bad to have to *use* one.
-- Linus Torvalds

I really love fairy tales, just not in the context of kernel code.
-- Thomas Gleixner

Quick! Everyone say something extreme for this week's LWN Quote of the Week!
-- Jon Masters (warning: disappointing results)

Comments (none posted)

Nichols, Jacobson: Controlling Queue Delay

Kathleen Nichols and Van Jacobson have published a paper describing a new network queue management algorithm that, it is hoped, will play a significant role in the solution to the bufferbloat problem. "CoDel (Controlled Delay Management) has three major innovations that distinguish it from prior AQMs. First, CoDel’s algorithm is not based on queue size, queue-size averages, queue-size thresholds, rate measurements, link utilization, drop rate or queue occupancy time. Starting from Van Jacobson’s 2006 insight, we used the local minimum queue as a more accurate and robust measure of standing queue. Then we observed that it is sufficient to keep a single state variable of how long the minimum has been above or below the target value for standing queue delay rather than keeping a window of values to compute the minimum. Finally, rather than measuring queue size in bytes or packets, we used the packet-sojourn time through the queue. Use of the actual delay experienced by each packet is independent of link rate, gives superior performance to use of buffer size, and is directly related to the user-visible performance."

For more information, see this blog post from Jim Gettys. "A preliminary Linux implementation of CoDel written by Eric Dumazet and Dave Täht is now being tested on Ethernet over a wide range of speeds up to 10gigE, and is showing very promising results similar to the simulation results in Kathie and Van’s article. CoDel has been run on a CeroWrt home router as well, showing its performance."

Comments (13 posted)

Kernel development news

The CoDel queue management algorithm

By Jonathan Corbet
May 9, 2012
"Bufferbloat" can be thought of as the buffering of too many packets in flight between two network end points, resulting in excessive delays and confusion of TCP's flow control algorithms. It may seem like a simple problem, but the simple solution—make buffers smaller—turns out not to work. A true solution to bufferbloat requires a deeper understanding of what is going on, combined with improved software across the net. A new paper from Kathleen Nichols and Van Jacobson provides some of that understanding and an algorithm for making things better—an algorithm that has been implemented first in Linux.

Your editor had a classic bufferbloat experience at a conference hotel last year. An attempt to copy a photograph to the LWN server (using scp) would consistently fail with a "response timeout" error. There was so much buffering in the path that scp was able to "send" the entire image before any of it had been received at the other end. The scp utility would then wait for a response from the remote end; that response would never come in time because most of the image had not, contrary to what scp thought, actually been transmitted. The solution was to use the -l option to slow down transmission to a rate closer to what the link could actually manage. With scp transmitting slower, it was able to come up with a more reasonable idea for when the data should be received by the remote end.

And that, of course, is the key to avoiding bufferbloat issues in general. A system transmitting packets onto the net should not be sending them more quickly than the slowest link on the path to the destination can handle them. TCP implementations are actually designed to figure out what the transmission rate should be and stick to it, but massive buffering defeats the algorithms used to determine that rate. One way around this problem is to force users to come up with a suitable rate manually, but that is not the sort of network experience most users want to have. It would be far better to find a solution that Just Works.

Part of that solution, according to Nichols and Jacobson, is a new algorithm called CoDel (for "controlled delay"). Before describing that algorithm, though, they make it clear that just making buffers smaller is not a real solution to the problem. Network buffers serve an important function: they absorb traffic spikes and equalize packet rates into and out of a system. A long packet queue is not necessarily a problem, especially during the startup phase of a network connection, but long queues as a steady state just add delays without improving throughput at all. The point of CoDel is to allow queues to grow when needed, but to try to keep the steady state at a reasonable level.

Various automated queue management algorithms have been tried over the years; they have tended to suffer from complexity and a need for manual configuration. Having to tweak parameters by hand was never a great solution even in ideal situations, but it fails completely in situations where the network load or link delay time can vary widely over time. Such situations are the norm on the contemporary Internet; as a result, there has been little use of automated queue management even in the face of obvious problems.

One of the key insights in the design of CoDel is that there is only one parameter that really matters: how long it takes a packet to make its way through the queue and be sent on toward its destination. And, in particular, CoDel is interested in the minimum delay time over a time interval of interest. If that minimum is too high, it indicates a standing backlog of packets in the queue that is never being cleared, and that, in turn, indicates that too much buffering is going on. So CoDel works by adding a timestamp to each packet as it is received and queued. When the packet reaches the head of the queue, the time spent in the queue is calculated; it is a simple calculation of a single value, with no locking required, so it will be fast.

Less time spent in queues is always better, but that time cannot always be zero. Built into CoDel is a maximum acceptable queue time, called target; if a packet's time in the queue exceeds this value, then the queue is deemed to be too long. But an overly-long queue is not, in itself, a problem, as long as the queue empties out again. CoDel defines a period (called interval) during which the time spent by packets in the queue should fall below target at least once; if that does not happen, CoDel will start dropping packets. Dropped packets are, of course, a signal to the sender that it needs to slow down, so, by dropping them, CoDel should cause a reduction in the rate of incoming packets, allowing the queue to drain. If the queue time remains above target, CoDel will drop progressively more packets. And that should be all it takes to keep queue lengths at reasonable values on a CoDel-managed node.

The target and interval parameters may seem out of place in an algorithm that is advertised as having no knobs in need of tweaking. What the authors have found, though, is that a target of 5ms and an interval of 100ms work well in just about any setting. The use of time values (rather than packet or byte counts) makes the algorithm function independently of the speed of the links it is managing, so there is no real need to adjust them. Of course, as they note, these are early results based mostly on simulations; what is needed now is experience using a functioning implementation on the real Internet.
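
To make the mechanism concrete, here is a heavily simplified sketch of the dequeue-side logic described above. It is written in Python for readability and is not taken from the Linux patch or the published pseudocode; in particular, the real control law's handling of entering and leaving the dropping state is more careful than this, so treat it as an illustration only:

    import collections, math, time

    TARGET = 0.005      # 5ms: acceptable standing queue delay
    INTERVAL = 0.100    # 100ms: window in which delay must dip below TARGET

    class CoDelQueue:
        def __init__(self):
            self.q = collections.deque()
            self.first_above = None   # deadline for the delay to recover
            self.drop_count = 0

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))  # timestamp on arrival

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()
                sojourn = time.monotonic() - enq_time
                if sojourn < TARGET:
                    self.first_above = None   # queue drained below target
                    self.drop_count = 0
                    return packet
                if self.first_above is None:
                    # Above target: give the queue one INTERVAL to recover.
                    self.first_above = time.monotonic() + INTERVAL
                    return packet
                if time.monotonic() < self.first_above:
                    return packet             # still within the grace period
                # Standing queue: drop this packet, signaling the sender to
                # slow down, and space further drops progressively closer
                # together (INTERVAL / sqrt(count), as in the paper).
                self.drop_count += 1
                self.first_above = (time.monotonic() +
                                    INTERVAL / math.sqrt(self.drop_count))
            return None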

That experience may not be long in coming, at least for some kinds of links; there is now a CoDel patch for Linux available thanks to Dave Täht and Eric Dumazet. This code is likely to find its way into the mainline fairly quickly; it will also be available in the CeroWrt router distribution. As the early CoDel implementation starts to see some real use, some shortcomings will doubtless be encountered and it may well lose some of its current simplicity. But it has every appearance of being an important component in the solution to the bufferbloat problem.

Of course, it's not the only component; the problem is more complex than that. There is still a need to look at buffer sizes throughout the stack; in many places, there is simply too much buffering in places where it can do no good. Wireless networking adds some interesting challenges of its own, with its quickly varying link speeds and complexities added by packet aggregation. There is also the little problem of getting updated software distributed across the net. So a full solution is still somewhat distant, but the understanding of the problem is clearly growing and some interesting approaches are beginning to appear.

Comments (43 posted)

Statistics from the 3.4 development cycle

By Jonathan Corbet
May 8, 2012
With the release of the 3.4-rc6 prepatch, Linus let it be known that he thought the final 3.4 release was probably not too far away. That can only mean one thing: it's time to look at the statistics for this development cycle. 3.4 was an active cycle, with an interesting surprise or two.

As of this writing, Linus has merged just over 10,700 changes for 3.4; those changes were contributed by 1,259 developers. The total growth of the kernel source this time around is 215,000 lines. The developers most active in this cycle were:

Most active 3.4 developers

By changesets:

    Mark Brown              284    2.7%
    Russell King            211    2.0%
    Johannes Berg           147    1.4%
    Al Viro                 136    1.3%
    Axel Lin                133    1.2%
    Johan Hedberg           122    1.1%
    Guenter Roeck           121    1.1%
    Masanari Iida           109    1.0%
    Stanislav Kinsbursky     97    0.9%
    Trond Myklebust          85    0.8%
    Jiri Slaby               82    0.8%
    Ben Hutchings            82    0.8%
    Greg Kroah-Hartman       78    0.7%
    Takashi Iwai             78    0.7%
    Dan Carpenter            78    0.7%
    Stephen Warren           76    0.7%
    Stanislaw Gruszka        76    0.7%
    Alex Deucher             73    0.7%

By changed lines:

    Joe Perches           56571    8.1%
    Dan Magenheimer       24077    3.4%
    Stephen Rothwell      17354    2.5%
    Greg Kroah-Hartman    15015    2.1%
    Mark Brown            12266    1.8%
    Jiri Olsa             11842    1.7%
    Mark A. Allyn         10976    1.6%
    Stephen Warren        10386    1.5%
    Arun Murthy            9347    1.3%
    Ingo Molnar            8779    1.3%
    Alex Deucher           8770    1.3%
    David Howells          8034    1.2%
    Guenter Roeck          7634    1.1%
    Chris Kelly            7023    1.0%
    Johannes Berg          6657    1.0%
    Ben Hutchings          6650    1.0%
    Al Viro                6628    0.9%
    Russell King           6610    0.9%

Mark Brown finds himself at the top of the list of changeset contributors for the second cycle in a row; as usual, he has done a great deal of work with sound drivers and related subsystems. Russell King is the chief ARM maintainer; he has also taken an active role in the refactoring and cleanup of the ARM architecture code. Johannes Berg continues to do a lot of work with the mac80211 layer and the iwlwifi driver, Al Viro has been improving the VFS API and fixing issues throughout the kernel, and Axel Lin has done a lot of cleanup work in the ALSA and regulator subsystems and beyond.

Joe Perches leads the "lines changed" column with coding-style fixes, pr_*() conversions, and related work. Dan Magenheimer added the "ramster" memory sharing mechanism to the staging tree. Linux-next maintainer Stephen Rothwell made it into the "lines changed" column with the removal of a lot of old PowerPC code. Greg Kroah-Hartman works all over the tree, but the bulk of his changed lines were to be found in the staging tree.
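
Statistics like these are generated by walking the kernel's git history between release tags (LWN uses the gitdm tool for the employer mapping and the tables shown here). Stripped of those details, the basic changeset tally amounts to something like the following sketch; the repository path and the tag range are placeholders:

    import collections, subprocess

    def changesets_by_author(repo, rev_range="v3.3..v3.4-rc6"):
        """Count non-merge commits per author over rev_range."""
        out = subprocess.run(
            ["git", "-C", repo, "log", "--no-merges", "--format=%aN", rev_range],
            capture_output=True, text=True, check=True).stdout
        return collections.Counter(out.splitlines())

    counts = changesets_by_author("/path/to/linux")
    total = sum(counts.values())
    for author, n in counts.most_common(18):
        print(f"{author:30} {n:5} {100.0 * n / total:4.1f}%")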

Some 195 companies contributed changes during the 3.4 development cycle. The top contributors this time around were:

Most active 3.4 employers

By changesets:

    (None)                      1156   10.8%
    Intel                       1138   10.6%
    Red Hat                      960    9.0%
    (Unknown)                    688    6.4%
    Texas Instruments            428    4.0%
    IBM                          381    3.6%
    Novell                       372    3.5%
    (Consultant)                 298    2.8%
    Wolfson Microelectronics     286    2.7%
    Samsung                      234    2.2%
    Google                       222    2.1%
    Oracle                       188    1.8%
    Freescale                    175    1.6%
    Qualcomm                     161    1.5%
    Linaro                       143    1.3%
    Broadcom                     140    1.3%
    NetApp                       133    1.2%
    MiTAC                        133    1.2%
    AMD                          132    1.2%

By lines changed:

    (None)                    108509   15.5%
    Intel                      67464    9.7%
    Red Hat                    65966    9.4%
    (Unknown)                  50900    7.3%
    IBM                        36800    5.3%
    Oracle                     26617    3.8%
    Texas Instruments          25687    3.7%
    Samsung                    24966    3.6%
    NVidia                     20604    2.9%
    Linux Foundation           16917    2.4%
    ST Ericsson                15792    2.3%
    Novell                     15185    2.2%
    Wolfson Microelectronics   14039    2.0%
    (Consultant)               13495    1.9%
    AMD                        10151    1.5%
    Freescale                  10102    1.4%
    Linaro                      9360    1.3%
    Google                      9070    1.3%
    Qualcomm                    8972    1.3%

A longstanding invariant in the above table has been Red Hat as the top corporate contributor; in 3.4, however, Red Hat has been pushed down one position by Intel. Red Hat's contributions are down somewhat; 960 changesets in 3.4 compared to 1,290 in 3.3. But the more significant change is the burst of activity from Intel. This work is mostly centered around support for Intel's own hardware, as one would expect, but also extends to things like support for the x32 ABI. Meanwhile, Texas Instruments continues the growth in participation seen over the last few years, as do a number of other mobile and embedded companies. Once upon a time, it was said that Linux development was dominated by "big iron" enterprise-oriented companies; those companies have not gone away, but they are clearly not the only driving force behind Linux kernel development at this point. On the other hand, the participation by volunteers is at the lowest level seen in many cycles, continuing a longstanding trend.

A brief focus on ARM

Recent development cycles have seen a lot of work in the ARM subtree, and 3.4 is no exception; 1,100 changesets touched code in arch/arm this time around. Those changes were contributed by 178 developers representing 51 companies. Among those companies, the most active were:

Most active 3.4 employers (ARM subtree)

By changesets:

    (Consultant)          149   13.5%
    Texas Instruments     121   11.0%
    (None)                103    9.4%
    Samsung                91    8.3%
    Linaro                 80    7.3%
    NVidia                 54    4.9%
    ARM                    52    4.7%
    (Unknown)              48    4.4%
    Calxeda                46    4.2%
    Freescale              40    3.6%
    Atmel                  37    3.4%
    Atomide                30    2.7%
    OpenSource AB          24    2.2%
    Google                 23    2.1%
    ST Ericsson            23    2.1%

By lines changed:

    Samsung              8162   16.8%
    (None)               5967   12.3%
    NVidia               4929   10.2%
    (Consultant)         4755    9.8%
    Linaro               3550    7.3%
    Texas Instruments    3118    6.4%
    ARM                  2659    5.5%
    Calxeda              2408    5.0%
    Atmel                2080    4.3%
    (Unknown)            1862    3.8%
    Vista-Silicon S.L.   1121    2.3%
    Freescale            1117    2.3%
    Atomide              1005    2.1%
    Google                737    1.5%
    PHILOSYS Software     659    1.4%

ARM is clearly an active area for consultants, who contributed over 13% of the changes this time around. Otherwise, there are few surprises to be seen in this area; the companies working in the mobile area are the biggest contributors to the ARM tree, while those focused on other types of systems have little presence here.

There is one other way to look at ARM development. Much of the work on ARM is done through the Linaro consortium. Many developers contributing code from a linaro.com address are "on loan" from other companies; the above table, to the extent possible, credits those changes to the "real" employer that paid for the work. If, instead, all changes from a Linaro address are credited to Linaro, the results change: Linaro, with 11.9% of all the changes in arch/arm, becomes the top employer, though it still accounts for fewer changes than independent consultants do. Linaro clearly has become an important part of the ARM development community.

In summary, it has been another busy and productive development cycle in the kernel community. Despite the usual hiccups, things are stabilizing and chances are good that 3.4-rc7 will be the last prepatch, meaning that this cycle will be a relatively short one. There is little rest for kernel developers, though; the 3.5 cycle with its frantic merge window will start shortly thereafter. Stay tuned to LWN, as always, for ongoing coverage of development in this large and energetic community.

Comments (1 posted)

Supporting multi-platform ARM kernels

By Jonathan Corbet
May 9, 2012
The diversity of the ARM architecture is one of its great strengths: manufacturers have been able to create a wide range of interesting system-on-chip devices around the common ARM processor core. But this diversity, combined with a general lack of hardware discoverability, makes ARM systems hard to support in the kernel. As things stand now, a special kernel must be built for any specific ARM system. With most other architectures, it is possible to support most or all systems with a single binary kernel (or maybe two for 32-bit and 64-bit configurations). In the ARM realm, there is no single binary kernel that can run everywhere. Work is being done to improve that situation, but some interesting decisions will have to be made on the way.

On an x86 system, the kernel is, for the most part, able to boot and ask the hardware to describe itself; kernels can thus configure themselves for the specific system on which they are run. In the ARM world, the hardware usually has no such capability, so the kernel must be told which devices are present and where they can be found. Traditionally, this configuration has been done in "board files," which have a number of tasks:

  • Define any system-specific functions and setup code.

  • Create a description of the available peripherals, usually through the definition of a number of platform devices.

  • Create a special machine description structure that includes a magic number defined for that particular system. That number must be passed to the kernel by the bootloader; the kernel uses it to find the machine description for the specific system being booted.

There are currently hundreds of board files in the ARM architecture subtree, and an unknown number more exist only in shipped devices, never having been contributed upstream. Within a given platform type (a specific system-on-chip line from a vendor), it is often possible to build multiple board files into a single kernel, with the actual machine type being specified at boot time. But combining board files across platform types is not generally possible.
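
What such a board file contains may be easiest to see in a sketch. The following hypothetical, minimal example follows the classic style; the machine name, device, and addresses are invented for illustration and correspond to no real hardware:

    /*
     * Hypothetical board file for an imaginary "Example Board".
     */
    #include <linux/init.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>
    #include <asm/mach/arch.h>

    /* Where the board's one peripheral lives in the address space */
    static struct resource widget_resources[] = {
    	{
    		.start = 0x10004000,
    		.end   = 0x10004fff,
    		.flags = IORESOURCE_MEM,
    	},
    };

    static struct platform_device widget_device = {
    	.name          = "example-widget",
    	.id            = -1,
    	.resource      = widget_resources,
    	.num_resources = ARRAY_SIZE(widget_resources),
    };

    /* Called at boot to register the board's peripherals */
    static void __init example_init_machine(void)
    {
    	platform_device_register(&widget_device);
    }

    /*
     * MACHINE_START generates the machine description; the associated
     * magic number (MACH_TYPE_EXAMPLE) is what the bootloader must
     * pass to the kernel to select this board.
     */
    MACHINE_START(EXAMPLE, "Example Board")
    	.init_machine = example_init_machine,
    MACHINE_END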

One of the main goals of the current flurry of work in the ARM subtree is to make multi-platform kernels possible. An important step in that direction is the elimination of board files as much as possible; they are being replaced with device trees. In the end, a board file is largely a static data structure describing the topology of the system; that data structure can just as easily be put into a text file passed into the kernel by the boot loader. By moving the hardware configuration information out of the kernel itself, the ARM developers make the kernel more easily applicable to a wider variety of hardware. There are a lot of other things to be done before we have true multi-platform support—work toward properly abstracting interrupts and clocks continues, for example—but device tree support is an important piece of the puzzle.
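
To make the device tree idea concrete, here is a hypothetical fragment describing the same widget device as the board-file sketch above; the compatible string and addresses are, again, invented. The bootloader passes a compiled form of this description to the kernel:

    widget@10004000 {
    	compatible = "example,widget";
    	reg = <0x10004000 0x1000>;
    };

A driver can then bind to the node by matching the "example,widget" string; no board-specific kernel code is needed to register the device by hand.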

Arnd Bergmann recently put a question to the kernel development community: does it make sense to support legacy board files in multi-platform kernels? Or would it be better to limit support to systems that use device trees for hardware enumeration? Arnd was pretty clear about his own position:

My feeling is that we should just mandate DT booting for multiplatform kernels, because it significantly reduces the combinatorial space at compile time, avoids a lot of legacy board files that we cannot test anyway, reduces the total kernel size and gives an incentive for people to move forward to DT with their existing boards.

There was a surprising amount of opposition to this idea. Some developers seemed to interpret Arnd's message as a call to drop support for systems that lack device tree support, but that is not the point at all. Current single-platform builds will continue to work as they always have; nobody is trying to take that away. The point, instead, is to make life easier for developers trying to make multi-platform builds work; multi-platform ARM kernels have never worked in the past, so excluding some systems will not deprive their users of anything they already had.

Some others saw it as an arbitrary restriction without any real technical basis. There is nothing standing in the way of including non-device-tree systems in a multi-platform kernel except the extra complexity and bloat that they bring. But complexity and bloat are technical problems, especially when the problem being solved is difficult enough as it is. It was also pointed out that there are some older platforms that have not seen any real maintenance in recent times, but which are still useful for users.

In the end, it will come down to what the users of multi-platform ARM kernels want. It was not immediately clear to everybody that there are users for such kernels: ARM kernels are usually targeted to specific devices, so adding support for other systems gives no benefit at all. Thus, embedded systems manufacturers are likely to be uninterested in multi-platform support. Distributors are another story, though; they would like to support a wide range of systems without having to build large numbers of kernels. As Debian developer Wookey put it:

We are keen on multiplatform kernels because building a great pile of different ones is a massive pain (and not just for arm because it holds up security updates), and if we could still cover all that lot with one kernel, or indeed any number less than 7 that would be great.

In response, Arnd amended his proposal to allow board files for subarchitectures that don't look likely to support device trees anytime soon. At that point, the discussion wound down without any sort of formal conclusion. The topic will likely be discussed at the upcoming Linaro Connect event and, probably, afterward as well. There are a number of other issues to be dealt with before multi-platform ARM kernels are a reality; that gives some time for this particular decision to be considered with all the relevant needs in mind.

Comments (6 posted)

Patches and updates

Kernel trees

Linus Torvalds: Linux 3.4-rc6
Greg KH: Linux 3.3.5
Greg KH: Linux 3.0.31

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Michael Kerrisk (man-pages): man-pages-3.40 is released

Filesystems and block I/O

Networking

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Who should maintain Python for Debian?

By Jake Edge
May 9, 2012

A two-year-old Debian technical committee "bug" highlights some interesting aspects of Debian governance. The problem comes down to technical and personal disagreements about Python maintenance for the distribution. The bug has remained open since March 2010, though it may soon be resolved; based on the history, even saying that much may be premature. That history raises a larger question, however: how should a project handle a situation where developers and maintainers of cooperating packages can't seem to get along, or even communicate?

March 2010

More than two years ago, Sandro Tosi noted some problems in the maintenance of the Python interpreter package. In that report, he pointed out that Python 2.6 for Debian was delayed until 14 months after it was available upstream, even though the maintainer (Matthias Klose) had released two Python 2.6 packages for Ubuntu in the interim. In addition, once a Debian version was uploaded to unstable, it contained changes to the location of installed modules that broke various packaging tools and packages (mostly Python modules). That transition came with no warning, Tosi said, which he saw as symptomatic of another problem: Klose was not communicating with the rest of the Debian Python community.

Because of those problems, Tosi asked the committee to make a decision about who should maintain the interpreter packages going forward. Tosi suggested that a new maintenance team be appointed for the Python interpreter and python-defaults packages. That message to the committee was signed by Tosi and three others (Luca Falavigna, Josselin Mouette, and Bernd Zeimetz) all of whom were proposed as the new maintainers. Others "willing to help, including of course the current maintainer" were also to be included.

The discussion continued for several weeks. Committee member Bdale Garbee did some investigation into the problems and concluded that a better Debian Python policy and plan was needed before any kind of decision could be made, while others discussed ways to add co-maintainers. The problems clearly go back further than the bug report, perhaps as far as a DebConf 6 Python packaging meeting that evidently went awry, and probably even further back than that.

Beyond the technical complaints, one of the major problems that is mentioned frequently by the (self) proposed new maintainers group is a lack of communication from Klose. Coordination with the module and Python application maintainers has been essentially non-existent, they said. Certainly the bug report itself is one example of that; in a long thread over two years, there is not one message from Klose. In addition, a look at the debian-python mailing list shows only a handful of messages from him in that time frame.

Klose maintains some "key packages (bash, binutils, gcc, java, python, and several others)", according to Tosi. That may leave him stretched a little thin. It may also be that he prefers other forms of communication (IRC is mentioned frequently). There are also hints in the thread that Klose may no longer be talking to those in the "new maintainer" camp due to longstanding "bad blood" stemming from both technical and personality conflicts.

Whatever the reasons, there is some kind of fragmentation going on in the Debian Python community. Part of it seems to be caused by Ubuntu-Debian conflicts, but the bulk of it stems from Klose's maintainership, which, at least in the eyes of some, is characterized by a "my way or the highway" attitude. The technical committee was fairly obviously leery of stepping into the middle of that mess and just making a decision. The committee members discussing it seem to have reached consensus that there are problems in the community, but none of the proposed solutions look like they will clearly make things better.

November 2010

The initial discussion petered out in July 2010. In November 2010, Debian Project Leader (DPL) Stefano Zacchiroli noted that he was frequently asked about the issue. Things had gotten better, he said, and discussions on transition strategies were taking place on the mailing list, which was a step in the right direction. He noted that while Klose was not always participating in those discussions, "it is also clear that he follows them and seems to agree with where they are going". But, that said, he still saw a problem:

Nevertheless, the big issue is undeniably still open: maintenance of the main Python interpreter packages is still up to a single maintainer, with no mutual trust and/or communication between him and (most of) the rest of the Debian Python community.

Additionally, as DPL, I'm worried by seeing packages as important as the Python interpreters maintained by a single person. Even if all other surrounding issues were not there, that would be a bus-factor problem worth fixing by itself. (I concede there are other similar situations in the archive, but this is no excuse; they are just other problems to be solved.)

He concluded by saying that he didn't envy the committee for the decision it had to make, but he was clearly encouraging a resolution to the problem. After nearly two months without a response, another ping from Zacchiroli in December was also mostly met with silence.

March 2011

That led Zacchiroli to make another proposal in March 2011. While he made it clear that he was not trying to step on the committee's toes, he proposed that the committee defer the decision to him. The proposal looks like something of a last-gasp attempt to help the committee make a decision of some kind.

That elicited some response, though no one really felt that it was right to delegate the decision to the DPL. Ian Jackson expressed disappointment in the lack of a decision and suggested that the packages in question be orphaned, while requesting that interested teams apply to become the maintainers. Steve Langasek was opposed to that, and suggested that the committee re-affirm Klose as maintainer with encouragement to take on co-maintainers.

On the other hand, Russ Allbery thought that finding a team to maintain the interpreter packages, one that included Klose, would be the ideal solution. But, like the others, he was not really in favor of delegating to the DPL. And that's pretty much where this iteration of the conversation dropped.

March 2012

Tosi pinged the bug again in November, then in March 2012 ("2-years-old ping"). The latter is what prompted the most recent re-kindling of the discussion. The participants in this round seem resigned to taking a vote, with some discussion on what the options should be. Zacchiroli volunteered to try to firm up the possible alternative teams for Python maintenance and, to that end, posted a message to debian-python asking for interested parties to speak up.

Several people spoke up to volunteer, along with some who were opposed to replacing Klose. That led to a message from Zacchiroli summarizing the discussion and outlining the teams that were available to be placed on the tech committee's ballot. He followed that up with a bit of a poke on April 27: "I hope this could help and that the tech-ctte have now all the input needed to quickly come to a conclusion on this issue, one way or another." A bit of dialogue on the makeup of the three possible "teams" ensued, but the discussion pretty much ended there. In his DPL report, Zacchiroli mentioned his recent involvement and concluded: "I hope the tech-ctte now have all the information needed to come to a decision".

May 2012 (and beyond?)

It is a rather strange situation overall. It seems clear that the committee is not completely comfortable affirming Klose as the sole maintainer, and he has not commented as to whether he would be willing to co-maintain the interpreter packages with others. But an "overthrow" of Klose is not very palatable either. By waiting, presumably hoping that things would correct themselves on their own, the committee has put itself into an awkward position.

Had it re-affirmed Klose two years ago (or one year ago, or ...), the problem might in fact have solved itself. Perhaps the unhappy petitioners would have "taken their marbles and gone home", but, by now, one would guess that any package maintainership holes would have been filled. If the committee gives Klose a vote of confidence now, after a two-year consideration phase, there are likely to be questions about why the issue was left to linger so long. Meanwhile, deposing Klose now will raise more or less the same questions. As is typical in a Debian ballot, however, all of the proposals so far also include the "further discussion" option, so the committee could conceivably kick the can further down the road.

It's clear that Zacchiroli and others would rather not see that. The powers of the DPL are famously limited by the Debian Constitution, but Zacchiroli has done everything in his power to try to get some kind of closure on the issue. It is up to the technical committee to pull together a final ballot and put it to a vote; it seems likely that almost any decision (other than "further discussion" perhaps) would be better than none at this point. Or maybe the conversation will just die until the "three-year ping" comes along.

Comments (5 posted)

Brief items

Distribution quotes of the week

You'll say: but the code is free! Yes, it is. Which is about as valuable as... Let's see - how many distribution-specific package managers have been ported to others?
-- Jos Poortvliet

My pet hypothetical solution of the day is that mailing lists might raise the quality of the debates by limiting the number of messages written by each person per day in each thread. This might, I think, induce people to write with more thought and put more effort into making each message count.
-- Lars Wirzenius

btw, is the concept of numbers smaller than zero but not negative known/used anywhere outside of debian/dpkg?
-- Holger Levsen

Tizen has drawn a lot of crap from their complete silence and secret cathedral building behaviour up to 1.0 release. But I can say that if I was in their shoes, having to launch a handset device .. and handset stack. I would probably end up doing the same things as they had. In a world where you will see your semi-ugly alpha release screenshots laughed at in news articles about your 1.0 launch, when you have a perfectly working and very shiny final release that nobody seems to bother to even check out, it's hard to argue for transparent development from day zero.
-- Carsten Munk

Comments (none posted)

Distribution News

Debian GNU/Linux

(overlapping) bits from the DPL: April 2012

Debian Project Leader Stefano Zacchiroli has a few bits from April that span the end of his previous term and the beginning of his current term of office. The bits start off with a call for DPL helpers. Other topics include Debian's proposed diversity statement, revenue sharing with DuckDuckGo, the conflict over Python maintenance, multimedia packaging, hardware replacement, and more.

Full Story (comments: none)

Fedora

Appointment to the Fedora Board, and Elections Reminder

Garrett Holmstrom has been appointed to the Fedora Board. "In this election cycle, three Board seats are open for election, and two Board appointee seats are open; the first appointee, Garrett, is being appointed prior to nominations opening, and the second will be appointed after elections are completed." Nominations for open seats on the advisory board, FESCo (Fedora Engineering and Steering Committee), and FAmSCo (Fedora Ambassadors Steering Committee) close on May 15.

Full Story (comments: none)

The Future of Fedora release names

Fedora members were recently polled on whether Fedora releases should have code names. The consensus is that many people like having release names, but the method for choosing the names should be improved. The board is seeking volunteers to help come up with a new method.

Full Story (comments: none)

Mandriva Linux

"Dear Community – II" from Mandriva

The Mandriva blog has another letter to the community with very little in the way of actual useful information. "The Mandriva Linux project has the right to be given a space in which it may expand and the contributors and afficionados a place where they can express their talents. We are precisely working on this right now and during the next two weeks. We will announce the direction we intend to give to the project during the third week of May. It makes no doubt that it’ll be difficult to satisfy each an every expectation and wish, as they’re many of them and some are not compatible with the other, but we’ll try to achieve what can be useful and most promising for the community and, with it, the Mandriva Linux project."

Comments (10 posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Dell announces prototype development laptop with Ubuntu (The H)

Dell has announced Project Sputnik, which is aimed at producing a laptop for developing mobile and cloud applications. The H takes a look. "The laptop is pre-installed with an image based on Ubuntu 12.04 LTS that has been optimised for Dell's XPS13 Ultrabook. The project has already solved problems with the brightness control and the WiFi hotkey and is now working on issues with the touchpad which currently only works as a single touch pointing device with no scrolling. The install also includes several development packages such as version control systems and automatic deployment tools. Plans for the future include the automatic fetching of setup profiles for other software packages from GitHub."

Comments (3 posted)

Page editor: Rebecca Sobol

Development

LGM: Inkscape quietly evolves into a development platform

By Nathan Willis
May 9, 2012

There was no new release of Inkscape at Libre Graphics Meeting 2012 in Vienna, but the SVG vector editor still made a significant impact. One session showcased a new extension that enables drawing with variable-width strokes, while several others showed off independent applications that leverage Inkscape's extensibility, including an interactive mock-up creator for UI design and a font editor.

Libre Graphics Meeting is the annual workshop and conference of the open source graphics community; there are presentations from developers and artists alike, as well as workshops and team meetings. 2012 was the event's seventh year. LGM mainstays include GIMP, Blender, Inkscape, Krita, and Scribus, among other projects, but the exact makeup varies each year due to the moving venue and the irregular release schedules kept by the various teams. Inkscape is nearing its next major release; even without a new version to unveil, though, its presence was felt in several sessions this year.

Variable-width stroke

[Powerstroke]

The variable-width stroke feature is named Powerstroke, and was authored by presenter Johan Engelen. The implementation is based on Inkscape's Live Path Effects (LPE) mechanism, which allows the user to attach a non-destructive effect to any path object. These effects are functions that manipulate the path data itself (its points and control points), not "stylistic" features like color or opacity. LPEs can deform paths, map text or images along a path, and perform many other tricks. An LPE produces a valid SVG path as output, so the result is preserved in other SVG viewers, but the original path data is also saved in an Inkscape-specific attribute, which makes the effect reversible.

Without Powerstroke, each path has a fixed stroke width along its entire length (which is the default in almost every vector editor). The new feature adds "width points" along the path; each records a location along the path and the stroke width at that point, and the effect interpolates smoothly between neighboring points. In the user interface, each width point is shown as a line perpendicular to the curve of the path, with "handles" that allow you to adjust the width directly on-canvas.

For the stroke width itself, there is very little else to it. The interpolation between widths is performed by the lib2geom library, and width control points have special purple handles to distinguish them from regular points. There is an "advanced" option that allows you to rearrange the order of the stroke-width points, creating some zany effects, and an auxiliary LPE called "Clone original path" was created to enable filling Powerstroked shapes.
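
The core idea is simple enough to sketch in a few lines of C. This toy version, with invented names, interpolates linearly between (position, width) pairs; the real code, built on lib2geom, produces the smoother transitions described above:

    #include <stddef.h>

    struct width_point {
    	double t;      /* position along the path, 0.0 .. 1.0 */
    	double width;  /* stroke width at that position */
    };

    /*
     * Return the stroke width at position t, interpolating between
     * the surrounding width points (sorted by strictly increasing t).
     */
    static double width_at(const struct width_point *pts, size_t n, double t)
    {
    	size_t i;

    	if (n == 0)
    		return 0.0;
    	if (t <= pts[0].t)
    		return pts[0].width;
    	for (i = 1; i < n; i++) {
    		if (t <= pts[i].t) {
    			double f = (t - pts[i-1].t) / (pts[i].t - pts[i-1].t);
    			return pts[i-1].width + f * (pts[i].width - pts[i-1].width);
    		}
    	}
    	return pts[n-1].width;	/* past the last width point */
    }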

What is more complicated is how to handle sharp corners. The SVG specification defines three possibilities: rounded, mitered (pointed), or beveled (flattened). Powerstroke adds two of its own: extrapolated and "spiro." The extrapolated corner is a variant on the miter, but it is designed to more smoothly follow the shape that a pen might take on paper. The spiro corner is more rounded, based on the Spiro curve type created by Raph Levien.

Engelen hinted at several improvements for Powerstroke in future releases. He would like to make Powerstroke output an option for Inkscape's calligraphy tool rather than a stand-alone LPE, as well as tackle asymmetric stroke widths. Calligraphy tool support might make Powerstroke usable with pressure-sensitive pen tablets, which artists would like. There are also pathological cases where the math currently breaks down, such as coupling extremely sharp corners with extremely large widths; fixing those is something mathematicians would like.

Mocking the user interface

Red Hat's Máirín Duffy and Emily Dirsh presented a session entitled "An awesome FLOSS design collaboration workflow," covering a range of projects developed to support the Fedora Design Team. Duffy explained that working as a user experience (UX) designer, she found the existing collaboration tools frustrating when compared to Git and other tools made for developers. Designers need to collaborate with each other and with developers, she said, but often had little choice beyond shared-folder synchronization and email attachments. The first product of her campaign to create better design tools was SparkleShare, a Git-backed storage service that functions like Dropbox, but with the full power of commit, forking, and revision history.

[Máirín Duffy and Emily Dirsh]

SparkleShare helps developers share and iterate designs via flat files, but it does not help when creating interactive UX mock-ups. For that, Duffy said, most designers are stuck with unfavorable options like proprietary tools, Adobe Flash, and web services that may or may not be around in years to come. Her solution to this dilemma is Magic Mockup, a utility for creating clickable, interactive mock-ups with Inkscape. The idea grew out of Sozi, which makes animated presentations using SVG. Just as Sozi uses SVG's ability to embed JavaScript to transition between slide frames, Magic Mockup lets designers draw interactive buttons, dialogs, and other widgets that respond to mouse events. Clicks trigger a simple "change frames" action, which lets designers mimic application state-changes, user input, or animations. Duffy wrote the original implementation (in JavaScript), which Garrett LeSage then rewrote in CoffeeScript.

Still in development is a way for designers to share their Magic Mockup work with the public. Dirsh demonstrated her project, Glitter Gallery, which is built primarily for sharing and commenting on Magic Mockup SVG files, but supports other file types, too. Glitter Gallery is a Ruby application and is designed to run on Red Hat's OpenShift platform.

Typography

I presented another Inkscape-built utility in my talk about new open font development tools. The Inkscape typography extensions are a collection of related extensions that let font designers use Inkscape as the glyph-drawing canvas. The workflow allows the designer to draw each glyph on a separate layer, keeping the entire font in a single file (both because SVG does not have the concept of "pages," and to make comparing glyphs simpler). The first extension sets up a blank glyph-creation document, with guides set for baseline, x-height, cap-height, ascenders, and descenders. The second is a "new layer" function, which creates a new layer named for whichever letter of the alphabet the user specifies. The third extension cycles through the layers and builds an SVG font file, mapping each layer to the appropriate encoding slot. The extensions can also open and edit existing SVG fonts.

SVG fonts are not nearly as prevalent as TrueType or PostScript fonts, but the extensions make for a good start. FontForge is the application of choice for open source font crafting, but it does not offer a particularly pleasant editing experience. Inkscape has better and more flexible tools, plus a canvas that is easier to work with (for example, FontForge's glyph editor does not support on-canvas transformations). Inkscape is also less crash-prone than FontForge, and has better essential application features (such as a fully-functional Undo/Redo).

More with SVG

In addition to the Inkscape-specific talks, there were several sessions about SVG itself. Jeroen Dijkmeijer presented his iScriptDesign project, a web-based application that lets you construct CAD-like blueprints suitable for laser cutting or CNC milling. Dijkmeijer uses iScriptDesign to create and build wooden furniture, but it is suitable for any project made of flat parts that can be cut with a 2-D tool.

What makes iScriptDesign an improvement over bare SVG drawings is that it supports dynamic, adjustable measurements (for example, defining object X as half the width of object Y). Dijkmeijer has added support for JavaScript pre-processing directives, calling the result "JSVG". The directives include named-variable substitution, mathematical expressions that are evaluated when the image is rendered, and user input methods like text-entry boxes and adjustment sliders. He demonstrated a JSVG plan for a sofa table that incorporated adjustable measurements for height, width, and depth. On-page sliders allow the user to scale various dimensions of the design, and the application rearranges the resulting pieces to fit them onto the smallest total area, to minimize production cost.

Dijkmeijer explained that the application also took steps to transform complex paths in the image so that they were optimized for a laser-cutter's computer-controlled motion. For example, a shape might include reflected segments, but it needs to have all of its paths oriented in the same direction so that the cutting head can trace it with one continuous pass.

Chris Lilley from the World Wide Web Consortium (W3C)'s SVG working group was on hand at LGM as well, and provided feedback to several of the SVG-oriented talks. He also presented an update on the ongoing development of the SVG 2.0 specification, which will sport several enhancements of interest to artists. First, it will allow images to specify colors in more precise terms than the generic 8-bit RGB triples common in HTML. The initial plan was to use the Lab color space and specify a white point, but thanks to a Q&A exchange on that subject with Kai-Uwe Behrmann of Oyranos and Richard Hughes of colord, the standard may soon use the more abstract (but simpler) XYZ color space instead. It will also support attaching ICC color profiles to documents, and will use them for embedded raster images, both ensuring better color matching.

I spoke to Lilley later about Dijkmeijer's JSVG effort, and he confirmed that the SVG Working Group is interested in eventually adding mathematical expressions, dynamic variables, and other such constructs to the specification, although they will probably not make 2.0. The Q&A exchange with the color management developers was not the only point in the week where the SVG specification took hints from the artists and developers at the event; Lilley asked questions of many of the speakers, and called their feedback to the process valuable. One example is Magic Mockup's interest in having layers become part of the core format. Likewise, he was able to point some of the projects to helpful-if-not-well-known options that could simplify development.

The last few Inkscape releases have added more and more via the application's extensions mechanism, and they are increasingly specialized. For example, although she did not present an update on it this year, Susan Spencer's Sew Brilliant uses Inkscape extensions to assist textile-makers, implementing dynamic pattern-changing options akin to what iScriptDesign does with furniture designs. There may not be many projects that combine font development, UI mock-ups, and textile making, so it is impressive to see that Inkscape has evolved — under the radar — into a tool that so many people are using in such diverse tasks. Likewise, although at times standards bodies seem like remote and unapproachable entities, it is interesting to see a specification like SVG evolve in real-time as developers and artists give their feedback. That sort of frank back-and-forth between developers and end users is also one of the facets of LGM that makes it worth attending, whether your favorite application has a new release to unveil or not.

[Thanks to the Libre Graphics Meeting for assistance with travel to Vienna.]

Comments (2 posted)

Brief items

Quotes of the week

I'm sure things like trackstick emulation mode will be thouroughly missed, but I've heard Linux is all about choice and I choose not to maintain this any longer.
-- Peter Hutterer

  1. Assume you have 2 pastures, with a gate between them, and a flock of sheep. Your flock is in one pasture, and you want to get them to the other pasture through the gate.

  2. Sheep are, to use the terms we have been using, 'unbreakable', and 'atomic'. (If you slice them into pieces to try to get them to fit through the gate, you will end up with non-functioning sheep on the other side). :-)

  3. If your gate is narrow, you will have to serialise your sheep, and have them pass through one at a time.

  4. If it is wider then parts of different sheep will pass through the gate intermingled from the perspective of a camera mounted on the gate. "Nose of sheep 1, nose of sheep 2, head of sheep 1, nose of sheep 3, head of sheep 3, body of sheep 3, tail of sheep 3, body of sheep 2, body of sheep 1, tail of sheep 2, tail of sheep 1" is what the camera might report as 'having gone passed', and we might conclude that sheep 3 is a small lamb, and that sheep 1 is its mother who slowed down going through the gate so that the lamb could keep up with her -- but all of this doesn't matter because, as long as you do not try to break them, the flock will function perfectly on the other side of the gate without any attention being paid to them by you.
-- Laura Creighton on the PyPy transaction model

Comments (3 posted)

Apache OpenOffice 3.4 released

The first release of Apache OpenOffice, 3.4, has been announced. The version numbering picks up from the last OpenOffice.org major release, which was 3.3. New features include improved ODF support, better pivot table support in Calc, native support for SVG, enhanced graphics, and more. "'With the donation of OpenOffice.org to the ASF, the Foundation, and especially the podling project, was given a daunting task: re-energize a community and transform OpenOffice from a codebase of unknown Intellectual Property heritage, to a vetted and Apache Licensed software suite,' said Jim Jagielski, ASF President and an Apache OpenOffice project mentor. 'The release of Apache OpenOffice 3.4 shows just how successful the project has been: pulling in developers from over 21 corporate affiliations, while avoiding undue influence which is the death-knell of true open source communities; building a solid and stable codebase, with significant improvement and enhancements over other variants; and, of course, creating a healthy, vibrant and diverse user and developer community.'"

Comments (23 posted)

GIMP 2.8 released

The long-awaited release of version 2.8 of the GIMP image editor is out. There are lots of new features, many of which were previewed in this article last November. See the release notes for lots of details.

Comments (23 posted)

Git hints: ORIG_HEAD and merging

For those looking for some advanced git tricks: this Google+ conversation has a lot to offer, especially with regard to difficult merges. Much of it comes from Linus himself: "You didn't know about ORIG_HEAD? That's literally a 'Day One' feature of git, exactly because it's so incredibly useful (especially to a maintainer). We had ORIG_HEAD back when you had to script your stuff manually and bang two rocks together to make git do anything at all."
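
For readers who have not run into it: operations like git merge and git reset save the previous position of HEAD in ORIG_HEAD, so a regretted merge can be undone without digging through the reflog. A quick illustration (the branch name is made up):

    $ git merge experimental        # merge turns out to be a mistake
    $ git reset --hard ORIG_HEAD    # return to the pre-merge state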

Comments (none posted)

nPth - The new GNU portable threads library

The "GNU nPth" library project, under development as part of GnuPG, has decloaked and made its first release available. "nPth is a non-preemptive threads implementation using an API very similar to the one known from GNU Pth. It has been designed as a replacement of GNU Pth for non-ancient operating systems. In contrast to GNU Pth is is based on the system's standard threads implementation. Thus nPth allows the use of libraries which are not compatible to GNU Pth." It is dual-licensed under LGPLv3 and GPLv2.

Full Story (comments: 24)

Open Build Service version 2.3 released

Version 2.3 of the Open Build Service is out. New features include a number of improvements around release maintenance, an improved web interface, better cross-build support, and issue tracking support.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Control Centre: The systemd Linux init system (The H)

The H has a four page article by Lennart Poettering, Kay Sievers and Thorsten Leemhuis on systemd. From the third page: "The unit files that are associated with systemd and the services are located in the /lib/systemd/system/ directory; if an identically named file exists in /etc/systemd/system/, systemd will ignore the one in the lib directory. This allows administrators to copy and customise a systemd unit file without running the risk that it could be overwritten during the next update – this can happen in SysVinit distributions if one of the init scripts stored in /etc/rc.d/init.d/ has been modified."
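
To make the override mechanism concrete (the service name below is made up for illustration): copying a unit file into /etc/systemd/system/ shadows the packaged copy, so local changes survive package updates:

    # Shadow the packaged unit with a local, editable copy:
    cp /lib/systemd/system/example.service /etc/systemd/system/
    vi /etc/systemd/system/example.service
    systemctl daemon-reload    # have systemd re-read its unit files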

Comments (22 posted)

Hands-on: testing the GIMP 2.8 and its new single-window interface (ars technica)

Over at ars technica Ryan Paul takes the GNU Image Manipulation Program (GIMP) 2.8 release for a spin and looks at the future plans for the project, including a full transition to the Generic Graphics Library (GEGL) in the 2.10 release. "After the 2.10 release arrives, the next major version will be 3.0. According to the roadmap, the goal for 3.0 will be delivering support for Gtk+ 3, a major new version of the underlying widget toolkit that is used to build the GIMP’s interface. The other major feature item included in the roadmap for version 3.0 is pervasive high bit-depth support, a major feature that will be made possible by the GEGL transition."

Comments (17 posted)

Hermann: sigrok - cross-platform, open-source logic analyzer software with protocol decoder support

On his blog, Uwe Hermann writes about the free logic analyzer software that he and Bert Vermeulen have been working on. "I originally started working on an open-source logic analyzer software named "flosslogic" in 2010, because I grew tired of almost all devices having a proprietary and Windows-only software, often with limited features, limited input/output file formats, limited usability, limited protocol decoder support, and so on. Thus, the goal was to write a portable, GPL'd, software that can talk to many different logic analyzers via modules/plugins, supports many input/output formats, and many different protocol decoders. [...] The advantage being, that every time we add a new driver for another logic analyzer it automatically supports all the input/output formats we already have, you can use all the protocol decoders we already wrote, etc. It also works the other way around: If someone writes a new protocol decoder or file format driver, it can automatically be used with any of the supported logic analyzers out of the box." (Thanks to Paul Wise.)

Comments (10 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

A new round in GNOME's outreach program for women

The GNOME project has announced a new round in its outreach program for women, with ten applicants accepted to work with the project. "Over three quarters of the women involved in the program have stayed connected to the GNOME community. Better still, Outreach Program for Women participants have a strong tradition of becoming mentors in GNOME."

Comments (none posted)

The Document Foundation announces a Certification Program

The Document Foundation has announced a Certification Program, "to foster the provision of professional services around LibreOffice and help the growth of the ecosystem of the world's best free office suite."

Full Story (comments: none)

Articles of interest

FSFE Newsletter - May 2012

The May edition of the Free Software Foundation Europe newsletter covers Document Freedom Day and the Day against DRM, Free Software and the French Presidential elections, vendor lock-in in Helsinki, the UK Open Standard consultation, and several other topics.

Full Story (comments: none)

Google guilty of infringement in Oracle trial; future legal headaches loom (ars technica)

Ars technica reports on the confused verdict in the first phase of Oracle v. Google, where Google won most of the arguments but, maybe, was found to have infringed copyright via its use of the Java APIs. "But the jury couldn't reach agreement on a second issue—whether Google had a valid 'fair use' defense when it used the APIs. Google has asked for a mistrial based on the incomplete verdict, and that issue will be briefed later this week."

Comments (44 posted)

Fragmentation on the Linux Desktop (Is it Normal?) (Datamation)

In this two page article on Datamation, Bruce Byfield looks at the history and the current state of the Linux desktop. From the second page: "In studying this transformation of the Linux desktop, you can easily see possible turning points. What would have happened if the KDE 4.0 release had been delayed until it had more features? If Ubuntu had been more patient about its changes getting into GNOME? If GNOME 3 had been less radical, or user complaints addressed? If some or all of these events had occurred, then maybe GNOME and KDE would have remained as dominant as ever. However, I doubt it. More likely, other incidents would have caused a similar fragmentation sooner or later, no matter how anyone acted."

Comments (76 posted)

SAS v. WPL decision addresses boundaries of copyrights on software (opensource.com)

Over at opensource.com, Richard Fontana explains the recent European Court of Justice (ECJ, Europe's equivalent to the US Supreme Court) ruling on the copyrightability of software. It's not at all hard to see parallels in that ruling and the current copyright questions in the Oracle v. Google case (in fact the judge in that case has asked the parties to answer questions about the ruling). "With respect to manuals concerning programming or scripting languages, the court said that 'the keywords, syntax, commands and combinations of commands, options, defaults and iterations consist of words, figures or mathematical concepts' which are not copyrightable expression in themselves, even where they are contained in a larger work that is copyrightable. Copyrightable expression can arise only from 'the choice, sequence and combination of those words, figures or mathematical concepts'."

Comments (12 posted)

Hands On with Boot2Gecko (Wired)

Wired plays with a Boot2Gecko phone. "At this point, B2G’s user interface consists of a few home screens’ worth of apps, each of which can be launched by tapping a rectangular icon. The apps may be web-based, but launched blazingly fast because most were cached onto the phones. Thanks to the caching scheme, B2G phones will still work when a network signal is out of reach."

Comments (none posted)

New Books

Programming in Go - Addison-Wesley Professional

Addison-Wesley Professional has released "Programming in Go" by Mark Summerfield.

Full Story (comments: none)

Programming Clojure, 2nd Edition--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Programming Clojure, 2nd Edition" by Stuart Halloway and Aaron Bedra.

Full Story (comments: none)

Calls for Presentations

PyCon Ireland 2012

Python Ireland will take place October 13-14, 2012 in Dublin, Ireland. Early bird registration and the call for papers are open.

Full Story (comments: none)

14th Real Time Linux Workshop - Call for Papers

The 14th Real Time Linux Workshop will take place in Chapel Hill, North Carolina, October 18-20, 2012. The call for papers is open until July 23. "Authors from regulatory bodies, academics, industry as well as the user-community are invited to submit original work dealing with general topics related to Open Source and Free Software based real-time systems research, experiments and case studies, as well as issues of integration of open-source real-time and embedded OS. A special focus will be on industrial case studies and safety related systems."

Full Story (comments: none)

Upcoming Events

Formally announcing FUDCon: Paris and FUDCon: Lawrence.

Two upcoming FUDCons (Fedora Users and Developers Conference) have been announced. There will be a FUDCon in Paris, France October 13-15, 2012 and a FUDCon in Lawrence, Kansas January 18-20, 2013.

Full Story (comments: none)

Events: May 10, 2012 to July 9, 2012

The following event listing is taken from the LWN.net Calendar.

  May 7-11: Ubuntu Developer Summit - Q (Oakland, CA, USA)
  May 8-11: samba eXPerience 2012 (Göttingen, Germany)
  May 11-13: Debian BSP in York (York, UK)
  May 11-12: Professional IT Community Conference 2012 (New Brunswick, NJ, USA)
  May 13-18: C++ Now! (Aspen, CO, USA)
  May 17-18: PostgreSQL Conference for Users and Developers (Ottawa, Canada)
  May 22-24: Military Open Source Software - Atlantic Coast (Charleston, SC, USA)
  May 23-26: LinuxTag (Berlin, Germany)
  May 23-25: Croatian Linux Users' Convention (Zagreb, Croatia)
  May 25-26: Flossie 2012 (London, UK)
  May 28-June 1: Linaro Connect Q2.12 (Gold Coast, Hong Kong)
  May 29-30: International conference NoSQL matters 2012 (Cologne, Germany)
  June 1-3: Wikipedia & MediaWiki hackathon & workshops (Berlin, Germany)
  June 6-8: LinuxCon Japan (Yokohama, Japan)
  June 6-10: Taiwan Mini DebConf 2012 (Hualien, Taiwan)
  June 7-10: Linux Vacation / Eastern Europe 2012 (Grodno, Belarus)
  June 8-10: SouthEast LinuxFest (Charlotte, NC, USA)
  June 9-10: GNOME.Asia (Hong Kong, China)
  June 11-16: Programming Language Design and Implementation (Beijing, China)
  June 11-15: YAPC North America (Madison, Wisconsin, USA)
  June 12: UCMS '12: 2012 USENIX Configuration Management Workshop: Virtualization, the Cloud, and Scale (Boston, USA)
  June 12: WiAC '12: 2012 USENIX Women in Advanced Computing Summit (Boston, USA)
  June 12: USENIX Cyberlaw '12: 2012 USENIX Workshop on Hot Topics in Cyberlaw (Boston, USA)
  June 12-13: HotCloud '12: 4th USENIX Workshop on Hot Topics in Cloud Computing (Boston, USA)
  June 13-15: 2012 USENIX Annual Technical Conference (Boston, MA, USA)
  June 13: WebApps '12: 3rd USENIX Conference on Web Application Development (Boston, USA)
  June 13-14: HotStorage '12: 4th USENIX Workshop on Hot Topics in Storage and File Systems (Boston, MA, USA)
  June 14-17: FUDCon LATAM 2012 Margarita (Margarita, Venezuela)
  June 14-15: TaPP '12: 4th USENIX Workshop on the Theory and Practice of Provenance (Boston, MA, USA)
  June 15: NSDR '12: 6th USENIX/ACM Workshop on Networked Systems for Developing Regions (Boston, MA, USA)
  June 15-16: Nordic Ruby (Stockholm, Sweden)
  June 15-16: Devaamo summit (Tampere, Finland)
  June 16: Debrpm Linux Packaging Workshop in the Netherlands (The Hague, Netherlands)
  June 19-21: Solutions Linux Open Source (Paris, France)
  June 20-21: Open Source Summit (NASA, State Dept, VA) (College Park, MD, USA)
  June 26-29: Open Source Bridge: The conference for open source citizens (Portland, Oregon, USA)
  June 26-July 2: GNOME & Mono Festival of Love 2012 (Boston, MA, USA)
  June 30-July 6: Akademy (KDE conference) 2012 (Tallinn, Estonia)
  June 30-July 1: Quack And Hack 2012 (Paoli, PA, USA)
  July 1-7: DebConf 2012 (Managua, Nicaragua)
  July 2-8: EuroPython 2012 (Florence, Italy)
  July 5: London Lua user group (London, UK)
  July 6-8: 3. Braunschweiger Atari & Amiga Meeting (Braunschweig, Germany)
  July 7-12: Libre Software Meeting / Rencontres Mondiales du Logiciel Libre (Geneva, Switzerland)
  July 7-8: 10th European Tcl/Tk User Meeting (Munich, Germany)
  July 8-14: DebConf12 (Managua, Nicaragua)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds