LWN.net Weekly Edition for May 10, 2012
Who owns your data?
The Economist is concerned that our "digital heritage" may be lost because the formats (or media) may be unreadable in, say, 20 years' time. The problem is complicated by digital rights management (DRM), of course, and the magazine is spot on with its suggestion that circumventing those restrictions is needed to protect that heritage. But in calling for more regulation (not a usual Economist stance), the magazine misses one of the most important ways that digital formats can be future-proofed: free and open data standards.
DRM is certainly a problem, but a bigger problem may well be the formats in which much of our digital data is stored. The vast majority of that data is not stored in DRM-encumbered formats; it is, instead, stored in "secret" data formats. Proprietary software vendors are rather fond of creating their own formats, updating them with some frequency, and allowing older versions to (surprise!) become unsupported. If users of those formats are not paying attention, documents and other data from just a few years ago can sometimes become unreadable.
There are few advantages to users from closed formats, but there are several for the vendors involved, of course. Lock-in and the income stream from what become "forced" upgrades are two of the biggest reasons that vendors continue with their "secret sauce" formats. But it is rather surprising that users, particularly businesses and governments, haven't rebelled. How did we get to a point where we will pay for the "privilege" of having a vendor take our data and lock it up such that we have to pay them, again and again, to access it?
There is a cost associated with documenting a data format, so the proprietary vendors would undoubtedly cite that as leading to higher purchase prices. But that's largely disingenuous. In many cases, there are existing formats (e.g. ODF, PNG, SVG, HTML, EPUB, ...) that could be used, or new ones that could be developed. The easiest way to "document" a format is to release code—not binaries—that can read it, but that defeats much of the purpose of using proprietary formats in the first place, so it's not something that most vendors are willing to do.
Obviously, free software fits the bill nicely here. Not only is code available to read the format, but the code that writes the format is there as well. While documentation that specifies all of the different values, flags, corner cases, and so on, would be welcome, being able to look at the code that actually does the work will ensure that data saved in that format can be read for years (centuries?) to come. As long as the bits that make up the data can be retrieved from the storage medium, and quantum computers running Ubuntu 37.04 ("Magnificent Mastodon") can still be programmed, the data will still be accessible. There may even be a few C/C++ programmers still around who can be lured out of retirement to help—if they aren't all busy solving the 2038 problem, anyway.
More seriously, though, maintaining access to digital data will require some attention. Storage device technology continues to evolve, and there are limits on the lifetime of the media itself. CDs, DVDs, hard drives, tapes, flash, and so on will all need refreshing from time to time. Moving archives from one medium to another is costly enough; why add potentially lossy format conversions and the cost of upgrading software to read the data—if said software is even still available?
Proprietary vendors come and go; their formats go right along with them. Trying to read a Microsoft Word document from 20 years ago is likely to be an exercise in frustration, but trying to read a Windows 3.0 WordStar document will be far worse. There are ways to do so, of course, but they are painful—if one can even track down a 3.5" floppy drive (not to mention 5.25"). If the original software is still available somewhere (e.g. eBay, backup floppies, ...) then it may be possible to use emulators to run the original program, but that still may not help with getting the data into a supported format.
Amusingly, free software often supports older formats far longer than the vendors do. While the results are often imperfect, reverse engineering proprietary data formats is a time-honored tradition in our communities. Once that's been done, there's little reason not to keep supporting the old format. That's not to say that older formats don't fall off the list at times, but the code is still out there for those who need it.
As internet services come and go, there will also be issues with preserving data from those sources. Much of it is stored in free software databases, though that may make little difference if there is no access to the raw data. In addition, the database schema and how it relates articles, comments, status updates, wall postings, and so on, is probably not available either. If some day Facebook, Google+, Twitter, Picasa, or any of the other proprietary services goes away—perhaps with little or no warning—that data may well be lost to the ages too. Some might argue that the majority of it should be lost, but some of it certainly qualifies as part of our digital heritage.
Beyond the social networks and their ilk, there are a huge number of news and information sites with relevant data locked away on their servers. Data from things like the New York Times (or Wall Street Journal), Boing Boing and other blogs, the article from The Economist linked above, the articles and comments here at LWN, and thousands (perhaps millions) more, are all things that one might like to preserve. The Internet Archive can only do so much.
Solutions for data from internet sites are tricky, since the data is closely held by the services and there are serious privacy considerations for some of it. But some way to archive some of that data is needed. By the time the service or site itself is on the ropes, it may well be too late.
Users should think long and hard before they lock up their long-term data in closed formats. While yesterday's email may not be all that important (maybe), that unfinished novel, last will and testament, or financial records from the 80s may well be. Beyond that, shareholders and taxpayers should be pressuring businesses and governments to store their documents in open formats. In the best case scenario, it will just cost more money to deal with old, closed-format data; in the worst case, after enough time passes, there may be no economically plausible way to retrieve it. That is something worth avoiding.
TizenConf: Pitching HTML5 as a development framework
The Tizen Project has considerable technical history on its side, as it is the successor to the well-known Moblin, MeeGo, and LiMo projects. Yet in a way that pedigree also works against it, as the project makes its pitch to third-party application developers who have seen the aforementioned predecessors come and go — sometimes first-hand. At the first Tizen Developer Conference in San Francisco, the project worked hard to establish its "developer story" — in particular highlighting the broader support from industry players and the stability of HTML5 and related open web specifications as a development platform.
The industry
In Tuesday's keynote sessions, Intel's Imad Sousou and Samsung's J.D. Choi took a quick tour through the platform as exposed to application developers (a detailed examination was reserved for the break-out sessions); the project defines a Web API that uses the World Wide Web Consortium (W3C)'s packaging and configuration format, and "custom" APIs for accessing contact data, NFC, Bluetooth, and other subsystems. They then went deeper into three specific areas of the stack: security, animation, and connection management.
![Imad Sousou](https://static.lwn.net/images/2012/tizen-sousou-sm.jpg)
The security framework is based on Smack, which Sousou described as being preferable to other Linux alternatives that required "setting up 8,000 policy files". The platform also provides integrity protection by checking application signatures at install time, and isolates each application in its own process (although he did not go into specifics, Sousou described the setup as less complicated than the "draconian" measures taken by other platforms).
The animation framework is based on OpenGL ES and the Emotion scene graph library provided by the Enlightenment Foundation Libraries (EFL), LiMo's underlying application framework. Connection management is handled by ConnMan, which Sousou announced had finally been declared 1.0. The project has worked on reducing ConnMan's overhead in the past three years, specifically for mobile devices, where the typical 2-3 second DHCP configuration time is a deal-breaker for users. The enhanced ConnMan now performs DHCP setup in milliseconds.
Several points in Sousou and Choi's talk about the architecture drew contrasts with other mobile platforms — primarily Android and the latest Blackberry offering. The point they made was that Tizen is open to input on the design from anyone willing to join the project and contribute — which is hardly the case, they suggested, for Android.
They also used their time to discuss the distinction between the Tizen Project and the Tizen Association. The project is the actual open source software project, which is led by a technical steering group (headed by Sousou and Choi), and at this stage largely developed by full-time employees from the two companies, plus smaller partners. In contrast, the Tizen Association is the marketing group that works to sell Tizen as a solution to OEM device makers, carriers, third-party application vendors, and any other industry customers. In addition to marketing the project to industry players, though, the Association also attempts to gather their requirements for an OS platform.
The next keynote was presented by Kiyohito Nagata, chairman of the Tizen Association. Nagata is also senior vice-president of NTT Docomo, Japan's largest wireless carrier. He talked about Docomo's research into user demands for smartphone devices, making the case that Tizen offers carriers the flexibility to implement their own application stores and custom services — across a range of devices. Again, this aspect of Tizen was placed in contrast to the competition.
Nagata ended his talk by discussing the board membership of the Tizen Association, which includes other large mobile phone carriers — notably Orange, Telefónica, SK Telecom, and Sprint. Tizen is marketing itself as a cross-device platform, serving in-vehicle infotainment (IVI) systems, set-top boxes, tablets, and smartphones. That list is identical to MeeGo's target platforms, of course, but, as with MeeGo, the vast majority of the talk centered on handsets — including the keynotes and the current work of the Tizen Association.
The web
Buy-in from mobile carriers is a plus, but third-party applications are what those carriers are interested in attracting in order to make their plans appealing. Tizen's case as a development platform comes down to its HTML5-based API, which was the subject of numerous breakout sessions at the conference: from the overall API to specific components (e.g., graphics, I/O, NFC, and Bluetooth).
Intel's Sakari Poussa and Samsung's Taehee Lee led a breakout session that covered the overall Web API suite. As we covered when we looked at the SDK in January, a significant chunk of the Web API is drawn from existing work spearheaded by the W3C. But there are other APIs, some exploring ways to expose mobile device functionality to web applications (for example, the ability to lock the screen rotation into landscape mode, which is reportedly of interest to game developers), others defining new general-purpose functionality like mapping-and-routing. The Tizen APIs also cover system-maintenance tasks, such as application installation, update and removal, and creating and managing user accounts for online services.
The bigger news, however, was Sousou's announcement that the Tizen project is working with the W3C to develop these "missing piece" APIs into general standards. The project wants them to be standard APIs, not "Tizen APIs," he said. In particular, Tizen is part of the W3C's new Core Mobile Web Platform Group, and Tizen is committed to adhering to the standard, whatever decisions the working group makes.
Of course, standards are just words, and many developers have heard the "write once, run anywhere" song multiple times. The "Advanced HTML5 Features" session dealt with that question specifically, arguing that the web has always been a fragmented platform, but that web development has evolved to cope with varying implementation details on desktop browsers, and has done so better than most other development platforms.
If that seems like a mild assurance, Facebook's head of mobile developer relations, James Pearce, was on hand to offer a more concrete testing tool: the company's new compliance tester RingMark. RingMark defines three levels (or, to be more precise, "rings") of compatibility: Ring 0 covers the status quo of existing W3C device APIs; Ring 1 covers "aspirational" extensions to Ring 0, including audio/video and other high-performance tasks that are currently the domain of native APIs on most platforms; and Ring 2 covers the still-in-development suite of web APIs for the future, such as WebGL.
Attendees in several of the sessions I sat in on expressed interest in Tizen's compliance program. Although Tizen so far has no formal compliance plan, it was made clear that compliance will be assessed based on a product's adherence to the API. That makes for a stark contrast against MeeGo, which demanded specific versions of specific libraries and Linux system components — a requirements set that ultimately proved too arduous for even MeeGo co-founder Nokia to pass with its N9 phone.
The future
The project, then, is making its case as an HTML5-based development platform; the next question is how it will be received by the developer community. One independent developer I talked to (who requested anonymity) expressed his doubts that HTML5 scales up to industrial devices and serious applications; he cited medical tablets among other possible upscale device classes. Most of the speakers addressed JavaScript performance and latency as points needing work in HTML5 applications, although as you might expect, most also said they were pleased with Tizen's performance.
There were a handful of companies present who are already developing applications on Tizen. Cell phone carrier Orange was among them, and presented a session on its experiences. The team from Orange has deployed HTML5 applications for news, movie ticket offers, and streaming TV, and has built enhanced user-information tools, integrating items like data and SMS counters into the phone UI.
Tizen's community manager Dawn Foster dealt with the outreach question in her state-of-the-community talk on Tuesday. In brief, the Tizen community at the moment is small; considerably smaller than the MeeGo community was, with fewer volunteer contributors joining the paid developers from Intel and Samsung. But that is to be expected, she said, primarily because it is hard to build excitement about a platform before consumer devices are available. On that front, she added, Tizen is trying to take a different approach, by underplaying the hype of the platform and "letting the code lead". Likewise, while MeeGo established a complicated working group structure at the outset, well before any code was delivered, Tizen's project structure is intentionally loose at this stage.
Perhaps that "release-first" strategy will also help deal with the other hurdle facing Tizen: developer burnout among veterans of the earlier projects in Tizen's lineage. Fundamentally, burnout with platform-switching may be one of the reasons Tizen is pressing so hard on the HTML5 front at the moment. Whatever else developers may think of HTML5, it is at least a platform-neutral approach to application development. The keynotes talked of more options still to come in the Tizen 2.0 release currently scheduled for the end of 2012 — for example, the Emotion animation framework mentioned by Choi. But at least for now, HTML5 and the web APIs remain the sole story for application developers.
Intel and Samsung are both ramping up their outreach to those developers. Intel is running an application developer contest, while Samsung distributed mobile developer devices to registered attendees. Foster also highlighted two tools to develop HTML5 applications that are designed to be lighter-weight than the full Tizen SDK: the Rapid Interface Builder (RIB) and Web Simulator. The contest runs until August — which is plenty of time for developers to explore the code base. As of May 9, however, there had still not been any consumer device announcements.
It is understandable that independent developers might be wary of Tizen given how recently they were being told about MeeGo. Ultimately no trick can undo that wariness; the only remedy will be to see the project grow in its own right and earn its own place. There are some key differences already — fairly or not, MeeGo was always perceived largely as a Nokia-only party without much connection to the all-important phone carrier industry, while Tizen has a longer list of mobile partners on board. MeeGo also presented potential contributors with a top-heavy compliance process and byzantine project structure, all well before there was any code to examine. With Tizen, however a developer feels about the commercial parties behind the scenes, there is code to see and an API that exists outside the project itself, both of which are in the "plus" column.
[ The author would like to thank the Tizen project and the Linux Foundation for support to attend the conference. ]
Accounting systems: a rant and a quest
Attentive long-time readers of LWN may remember that this business is based entirely on free software with one distressing exception: our business accounting is still done using the proprietary "QuickBooks Pro" package. QuickBooks does not lack for aggravations, but the task of replacing it has never quite attained a high enough priority for something to actually happen. Good replacements in the free software community are hard to come by, accounting is boring, our accountant deals easily (and cheaply) with QuickBooks files, and the existing solution, for the most part, simply works. Or, at least, it used to simply work.

The monthly accounting ritual involves importing a lot of data from the web site into the accounting application; in particular, subscription sales need to be properly fed in so that we can minimize our taxes on the income in the proper American tradition. This process normally works just fine, but, recently, it failed, saying: "Cannot import, not enough disk space or too many records exist." Naturally, in QuickBooks style, it failed partway through the import process, leaving a corrupted accounting file behind. But QuickBooks users usually learn to make backups frequently and can take such things in stride.
The inability to feed data into the system is a little harder to take in stride, though, especially once some investigation proved that disk space is not in short supply and the failure is elsewhere. It didn't take much time searching to turn up an interesting, unadvertised QuickBooks antifeature: there is a software-imposed limit of 14,500 "list items," which include products offered by the company, vendors, customers, and more. Once that limit is hit, QuickBooks will not allow any more items to be entered; the only supported way out is to upgrade to the "enterprise" version, which can currently be done for a special offer price of only $2400.
In other words: Intuit sells a program that is intended to become an integral part of a business's core processes, perhaps even functioning as a point-of-sale system. This program will, without warning, simply cease to function once the business accumulates an arbitrary number of entries. The only way for that business to get a working accounting system back is to "upgrade" to a new version that costs ten times as much. One can only conclude that this proprietary software package has not been written with its users' needs as the top priority. Instead, it contains a hidden trap to force them into more expensive offerings at a time when they may have little alternative. Who would have ever thought proprietary programs could be that way?
Here at LWN, we had no particularly urgent need to get things working again; other businesses may well not have the luxury of enough time to find an acceptable way out of this situation. It is, thus, unsurprising that there are entire businesses being built around this little surprise from Intuit. Needless to say, there is little enthusiasm in the LWN head office for the purchase of an expensive and proprietary "enterprise" accounting system. In the short term, a workaround has been found: sacrifice most of our accounting history to bring the record count to a level where the program will consent to function as advertised. That has other interesting side effects, like mysteriously changing the balances of reconciled accounts from previous years, but it does take the immediate pressure off. For now, we can continue to do our books.
But a clear message has been delivered here: it is about time that we at LWN read some pages from our own publication and realize that a dependence on proprietary software poses a real risk to our business. A company that is willing to put one such hostile surprise into an important application will put in others and, without the source, there is no way anybody can look for them or remove them if they are found. QuickBooks is too risky to continue to use.
It is, in other words, time to make the move to a free accounting program.
When we have looked at the available tools in the past, the results have always been a little disappointing. There is no shortage of software that can maintain a chart of accounts and a set of double-entry books. But there has been, in the past, a relative scarcity of useful accounting tools for small businesses. Instead, what's out there is:
- Various personal finance utilities, including GnuCash, KMyMoney, and others. For basic accounting they work well, but they fall short of a business's needs.
- Massive enterprise-oriented toolkits that can be used to build systems implementing accounting, inventory-tracking, point-of-sale, customer relationship management, supply-chain management, human resources, and invoicing, with add-on modules for bill collection, weather prediction, automated trading, and bread baking. These systems have names like ADempiere, Compiere, OpenERP, LedgerSMB, and Apache OFBiz. The target users for these projects appear to be consultants and businesses with full-time people dedicated to keeping the system running. To a business like LWN, they tend to look like a box with hundreds of nearly identical parts and a little note saying "some assembly required."
What is missing in the middle is a package for a business with no special accounting needs, but which needs to be able to automate data entry, generate tax forms at the end of the year, and interface with an accountant so it can get its taxes done. Given how incredibly exciting small-business accounting is, it's surprising that so few developers have felt a burning need to scratch that particular itch. There is no accounting for taste, it seems.
That said, it has been a few years since we last made a serious effort to learn about free software accounting alternatives; clearly the time has come for another pass. So we'll be doing it, with an eye toward, hopefully, making the transition at the end of the calendar year. That gives us several months to forget about the problem while still allowing a few months of panic at the end, so the schedule should be plausible.
Stay tuned for updates; it should be an interesting ride. But we are pretty well determined not to find out what other surprises our proprietary accounting system may have in store for us. In 2012, it should be possible to run a small, simple business on free software and never have to wonder when the accounting system will stop functioning and demand more money. We intend to prove it.
Security
Internet censorship and OONI
Internet "censorship" is often associated with repressive governments filtering the traffic of their citizens, but it goes well beyond that. Internet service providers sometimes filter—or alter—the traffic that they carry, companies restrict employees based on keywords and URLs, courts naïvely order certain URLs to be blocked, and so on. But it is difficult for any particular internet user to know just what it is they can't get at. That problem is what the Tor Open Observatory of Network Interference (OONI) project is hoping to help solve.
The overall goal for the OONI project is "to collect data which shows an accurate representation of network interference on the Filternet we call the internet", according to the web site. One obvious, though time-consuming, way to do that is to gather information from multiple different "locations" on the internet, and that is what OONI has set out to do. Of course, the OONI project itself can only reach out so far, so the intent is to enlist other participants—essentially "crowdsourcing" the data collection.
There are other internet censorship tracking projects—Google's Transparency Report and Herdict for example—but the OONI project's README notes that other efforts either use a closed methodology or closed software. As befits a Tor project, though, OONI is fully open source. No top-level LICENSE file for OONI is present at the moment, but one would guess it will be similar to Tor's permissive license.
The core piece (ooni-probe) is written as a framework in Python, with an eye toward contributions of additional tests (called "plugoos") and reports. "Tests" are meant to detect censorship events by comparing the results obtained locally with some kind of experimental control. That control could be obtained via the Tor network, for example, or via some other means. The tests can use various kinds of "assets", which might include lists of URLs, IP addresses and ports, or keywords, as their input. Current tests include checking that Tor bridges are functioning, determining whether HTTP "Host" field filtering is occurring, checking for DNS tampering, doing address and port scans, detecting Squid proxies, and so on.
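As a rough illustration of that comparison-based approach, a DNS-tampering check might compare answers from the local resolver with results obtained over a trusted control channel. The sketch below is hypothetical: it does not use the actual ooni-probe plugoo API, and the hostname, control address, and function names are placeholders.

```python
# Hypothetical sketch of a comparison-based censorship test; this is not the
# actual ooni-probe plugoo API, and the hostname/address data are placeholders.
import socket

# In a real test the control answers would be fetched over a trusted path
# (for example, via Tor) rather than hard-coded here.
CONTROL_RESULTS = {
    "example.org": {"93.184.216.34"},   # placeholder control address
}

def local_addresses(hostname):
    """Resolve a hostname using the local (possibly tampered-with) resolver."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(hostname, 80)}
    except socket.gaierror:
        return set()

def dns_tampering_report(hostnames):
    """Flag hostnames whose local answers share nothing with the control set."""
    report = {}
    for name in hostnames:
        local = local_addresses(name)
        control = CONTROL_RESULTS.get(name, set())
        report[name] = {
            "local": sorted(local),
            "control": sorted(control),
            "suspicious": bool(control) and local.isdisjoint(control),
        }
    return report

if __name__ == "__main__":
    print(dns_tampering_report(["example.org"]))
```

A real ooni-probe test would obtain its control data over Tor or another trusted channel, take its hostname list from an "asset", and write its findings into a report rather than printing them.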
While there are plenty of tests that could be added, seemingly the area needing the most attention right now is the "reports". Currently, test failures are essentially just written to an unstructured text log file, which can be stored locally or uploaded to a server. Tools to interpret the data and to provide higher-level visualizations of the types and locations of internet censorship are planned.
While the OONI code is under heavy development, the project can already claim some successes. ooni-probe was used to detect eight blocked web sites for internet users in Bethlehem, West Bank. The probe scanned more than one million sites and found that users are blocked from eight news sites "whose reporting is critical of [Palestinian Authority] President Mahmoud Abbas".

In addition, ooni-probe found that T-Mobile USA's Web Guard "feature" blocks access to much more than the advertised categories. In particular, sites for Tor, the Internet Archive Wayback Machine, Chinese sports news, French economics and financial news, a Japanese URL shortener, and many others were blocked though they didn't fall into any of the listed categories: "Alcohol, Mature Content, Violence, Drugs, Pornography, Weapons, Gambling, Suicide, Guns, Hate, Tobacco, Ammunition".
OONI is just getting started, but it is clearly a welcome addition to the internet landscape. In order for John Gilmore's famous quote ("The Net interprets censorship as damage and routes around it"—which seems to be an informal slogan for OONI) to be true, the internet, or really its users and operators, must be aware of where that censorship is occurring and how it is being applied. With tools like OONI (and the others, though it's unclear why they aren't more transparent), routing around that censorship will be easier. The free flow of information on the internet depends on being able to do so.
Brief items
Security quotes of the week
Yes and no. It correctly detects that your /sbin/init is something hideous and nasty, but fails to realise that it's something hideous and nasty that Fedora ships 8)
An important PHP security update
PHP 5.3.12 and 5.4.2 have been released to fix a nasty security hole that was disclosed somewhat sooner than planned. Essentially, it allows any remote attacker to pass command-line arguments to the PHP interpreter behind a web page—but only in the (hopefully rare) setups where PHP is invoked via the CGI mechanism. "If you are using Apache mod_cgi to run PHP you may be vulnerable. To see if you are just add ?-s to the end of any of your URLs. If you see your source code, you are vulnerable. If your site renders normally, you are not."
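The mechanism behind that trick is the CGI specification's "indexed query" handling: roughly, per RFC 3875, a query string containing no unencoded '=' characters may be split on '+' and handed to the script as command-line arguments, and php-cgi failed to filter the result. The following is a minimal sketch of that rule for illustration only, not PHP's or Apache's actual code; the second query string is a hypothetical exploit-style example.

```python
# A rough sketch (for illustration only) of the CGI "indexed query" rule that
# php-cgi tripped over: a query string with no unencoded '=' characters may be
# split on '+' and passed to the script as command-line arguments.
import urllib.parse

def cgi_argv(query_string):
    """Return the argument list a CGI server might build from a query string."""
    if "=" in query_string:
        return []                     # an ordinary form query: no extra arguments
    return [urllib.parse.unquote(part) for part in query_string.split("+")]

print(cgi_argv("-s"))
# ['-s'] -> php-cgi interprets this as its "show source" switch
print(cgi_argv("-d+allow_url_include%3DOn"))
# ['-d', 'allow_url_include=On'] -> hypothetical exploit-style arguments
```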
Linux Format censored over 'Learn to Hack' feature (bit-tech)
Bit-tech reports that Barnes & Noble pulled the last issue of Linux Format magazine because of an article featuring hacking techniques. "Issue 154 of Linux Format magazine had as its cover feature a piece entitled 'Learn to Hack,' walking readers through the use of the Metasploit Framework exploitation toolkit to gain access to computer systems running a variety of operating systems. The article also covered password cracking, network sniffing, and man-in-the-middle attacks over encrypted protocols. More importantly, the guide also covered how best to protect your systems from the self-same attacks, providing readers with information that the publication hoped would help keep them safe from the ne'er-do-wells inhabiting the seedier sides of the net." Future, Linux Format's parent company, has made the article available online.
New vulnerabilities
argyllcms: code execution
Package(s): argyllcms
CVE #(s): CVE-2012-1616
Created: May 7, 2012    Updated: June 19, 2012
Description: From the Red Hat bugzilla:

A use-after-free vulnerability was found in the way icclib, a library used for reading and writing of color profile files that conform to the International Color Consortium (ICC) Profile Format Specification, processed certain crafted ICC profile files. The ICC Profile Format is a cross-platform device profile format that can be used to translate color data created on one device into another device's native color space. A remote attacker could provide a specially crafted file and trick a local user into opening it, which could lead to arbitrary code execution with the privileges of the user running an application linked against icclib.
asterisk: denial of service
Package(s): asterisk
CVE #(s): CVE-2012-2416
Created: May 4, 2012    Updated: May 9, 2012
Description: From the CVE entry:

chan_sip.c in the SIP channel driver in Asterisk Open Source 1.8.x before 1.8.11.1 and 10.x before 10.3.1 and Asterisk Business Edition C.3.x before C.3.7.4, when the trustrpid option is enabled, allows remote authenticated users to cause a denial of service (daemon crash) by sending a SIP UPDATE message that triggers a connected-line update attempt without an associated channel.
flash-player: code execution
Package(s): flash-player
CVE #(s): CVE-2012-0779
Created: May 7, 2012    Updated: May 23, 2012
Description: From the SUSE advisory:

Adobe Flash Player before 10.3.183.19 and 11.x before 11.2.202.235 on Windows, Mac OS X, and Linux; before 11.1.111.9 on Android 2.x and 3.x; and before 11.1.115.8 on Android 4.x allows remote attackers to execute arbitrary code via a crafted file, related to an "object confusion vulnerability," as exploited in the wild in May 2012.
horizon: multiple vulnerabilities
Package(s): horizon
CVE #(s): CVE-2012-2094 CVE-2012-2144
Created: May 7, 2012    Updated: May 9, 2012
Description: From the advisory:

Matthias Weckbecker discovered a cross-site scripting (XSS) vulnerability in Horizon via the log viewer refresh mechanism. If a user were tricked into viewing a specially crafted log message, a remote attacker could exploit this to modify the contents or steal confidential data within the same domain. (CVE-2012-2094)

Thomas Biege discovered a session fixation vulnerability in Horizon. An attacker could exploit this to potentially allow access to unauthorized information and capabilities. (CVE-2012-2144)
kernel: denial of service
Package(s): linux
CVE #(s): CVE-2012-2100
Created: May 8, 2012    Updated: December 19, 2012
Description: From the Ubuntu advisory:

A flaw was found in the Linux kernel's ext4 file system when mounting a corrupt filesystem. A user-assisted remote attacker could exploit this flaw to cause a denial of service.
mahara: insecure default/privilege escalation
Package(s): mahara
CVE #(s): (none)
Created: May 9, 2012    Updated: May 9, 2012
Description: From the Debian advisory:

It was discovered that Mahara, the portfolio, weblog, and resume builder, had an insecure default with regards to SAML-based authentication used with more than one SAML identity provider. Someone with control over one IdP could impersonate users from other IdP's.
mozilla-https-everywhere: no SSL switch for some URLs
Package(s): mozilla-https-everywhere
CVE #(s): (none)
Created: May 3, 2012    Updated: May 9, 2012
Description: From the Tor bug entry:

If you go to a URL such as http://www.google.com./ HTTPS-Everywhere will *not* switch to HTTPS. This is a legal DNS value, technically but not practically distinct from http://www.google.com/ and as such, it should be handled similarly. [...] (it would allow an active attacker to perform Firesheep-style cookie stealing accounts against sites that HTTPS Everywhere protects with domain-wide redirects, if the ruleset does not also have a <securecookie> directive)
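One plausible fix is simply to normalize away the trailing dot before matching a hostname against the rulesets. The sketch below is illustrative Python only; HTTPS Everywhere itself is a Firefox extension whose real implementation is in JavaScript, and the ruleset name here is hypothetical.

```python
# Illustrative sketch (not HTTPS Everywhere's actual code): treat a
# fully-qualified hostname with a trailing dot the same as the bare name
# before applying any HTTP-to-HTTPS rewrite rules.
def normalize_host(host):
    host = host.lower()
    return host[:-1] if host.endswith(".") else host

RULESET_HOSTS = {"www.google.com"}   # hypothetical ruleset target

def should_rewrite(host):
    return normalize_host(host) in RULESET_HOSTS

assert should_rewrite("www.google.com")
assert should_rewrite("www.google.com.")   # the case the bug report describes
```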
openconnect: denial of service
Package(s): openconnect
CVE #(s): (none)
Created: May 7, 2012    Updated: May 9, 2012
Description:

Version 3.18 of openconnect, a client for Cisco's "AnyConnect" VPN, fixes a potential buffer overrun when handling the greeting banner from the server. Also this update fixes a potential crash when processing libproxy results.
php: code execution
Package(s): php5
CVE #(s): CVE-2012-2311 CVE-2012-1823
Created: May 7, 2012    Updated: July 2, 2012
Description: From the Ubuntu advisory:

It was discovered that PHP, when used as a stand alone CGI processor for the Apache Web Server, did not properly parse and filter query strings. This could allow a remote attacker to execute arbitrary code running with the privilege of the web server. Configurations using mod_php5 and FastCGI were not vulnerable.
python3: multiple vulnerabilities
Package(s): python3
CVE #(s): CVE-2012-1150 CVE-2012-0845 CVE-2011-3389
Created: May 3, 2012    Updated: November 12, 2014
Description: From the Fedora advisory:

Bug #750555 - CVE-2012-1150 python: hash table collisions CPU usage DoS (oCERT-2011-003) https://bugzilla.redhat.com/show_bug.cgi?id=750555

Bug #789790 - CVE-2012-0845 python: SimpleXMLRPCServer CPU usage DoS via malformed XML-RPC request https://bugzilla.redhat.com/show_bug.cgi?id=789790

Bug #812068 - python: SSL CBC IV vulnerability (CVE-2011-3389, BEAST) https://bugzilla.redhat.com/show_bug.cgi?id=812068
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.4-rc6, released on May 6. "Another week, another -rc - and I think we're getting close to final 3.4. So please do test."
Stable updates: the 3.0.31 and 3.3.5 updates were released on May 7 with the usual pile of important fixes.
The 3.2.17 update, with 167 fixes, is in the review process as of this writing; it can be expected on or after May 11.
Quotes of the week
Nichols, Jacobson: Controlling Queue Delay
Kathleen Nichols and Van Jacobson have published a paper describing a new network queue management algorithm that, it is hoped, will play a significant role in the solution to the bufferbloat problem. "CoDel (Controlled Delay Management) has three major innovations that distinguish it from prior AQMs. First, CoDel’s algorithm is not based on queue size, queue-size averages, queue-size thresholds, rate measurements, link utilization, drop rate or queue occupancy time. Starting from Van Jacobson’s 2006 insight, we used the local minimum queue as a more accurate and robust measure of standing queue. Then we observed that it is sufficient to keep a single-state variable of how long the minimum has been above or below the target value for standing queue delay rather than keeping a window of values to compute the minimum. Finally, rather than measuring queue size in bytes or packets, we used the packet-sojourn time through the queue. Use of the actual delay experienced by each packet is independent of link rate, gives superior performance to use of buffer size, and is directly related to the user-visible performance."
For more information, see this blog post from Jim Gettys. "A preliminary Linux implementation of CoDel written by Eric Dumazet and Dave Täht is now being tested on Ethernet over a wide range of speeds up to 10gigE, and is showing very promising results similar to the simulation results in Kathie and Van’s article. CoDel has been run on a CeroWrt home router as well, showing its performance."
Kernel development news
The CoDel queue management algorithm
"Bufferbloat" can be thought of as the buffering of too many packets in flight between two network end points, resulting in excessive delays and confusion of TCP's flow control algorithms. It may seem like a simple problem, but the simple solution—make buffers smaller—turns out not to work. A true solution to bufferbloat requires a deeper understanding of what is going on, combined with improved software across the net. A new paper from Kathleen Nichols and Van Jacobson provides some of that understanding and an algorithm for making things better—an algorithm that has been implemented first in Linux.Your editor had a classic bufferbloat experience at a conference hotel last year. An attempt to copy a photograph to the LWN server (using scp) would consistently fail with a "response timeout" error. There was so much buffering in the path that scp was able to "send" the entire image before any of it had been received at the other end. The scp utility would then wait for a response from the remote end; that response would never come in time because most of the image had not, contrary to what scp thought, actually been transmitted. The solution was to use the -l option to slow down transmission to a rate closer to what the link could actually manage. With scp transmitting slower, it was able to come up with a more reasonable idea for when the data should be received by the remote end.
And that, of course, is the key to avoiding bufferbloat issues in general. A system transmitting packets onto the net should not be sending them more quickly than the slowest link on the path to the destination can handle them. TCP implementations are actually designed to figure out what the transmission rate should be and stick to it, but massive buffering defeats the algorithms used to determine that rate. One way around this problem is to force users to come up with a suitable rate manually, but that is not the sort of network experience most users want to have. It would be far better to find a solution that Just Works.
Part of that solution, according to Nichols and Jacobson, is a new algorithm called CoDel (for "controlled delay"). Before describing that algorithm, though, they make it clear that just making buffers smaller is not a real solution to the problem. Network buffers serve an important function: they absorb traffic spikes and equalize packet rates into and out of a system. A long packet queue is not necessarily a problem, especially during the startup phase of a network connection, but long queues as a steady state just add delays without improving throughput at all. The point of CoDel is to allow queues to grow when needed, but to try to keep the steady state at a reasonable level.
Various automated queue management algorithms have been tried over the years; they have tended to suffer from complexity and a need for manual configuration. Having to tweak parameters by hand was never a great solution even in ideal situations, but it fails completely in situations where the network load or link delay time can vary widely over time. Such situations are the norm on the contemporary Internet; as a result, there has been little use of automated queue management even in the face of obvious problems.
One of the key insights in the design of CoDel is that there is only one parameter that really matters: how long it takes a packet to make its way through the queue and be sent on toward its destination. And, in particular, CoDel is interested in the minimum delay time over a time interval of interest. If that minimum is too high, it indicates a standing backlog of packets in the queue that is never being cleared, and that, in turn, indicates that too much buffering is going on. So CoDel works by adding a timestamp to each packet as it is received and queued. When the packet reaches the head of the queue, the time spent in the queue is calculated; it is a simple calculation of a single value, with no locking required, so it will be fast.
Less time spent in queues is always better, but that time cannot always be zero. Built into CoDel is a maximum acceptable queue time, called target; if a packet's time in the queue exceeds this value, then the queue is deemed to be too long. But an overly-long queue is not, in itself, a problem, as long as the queue empties out again. CoDel defines a period (called interval) during which the time spent by packets in the queue should fall below target at least once; if that does not happen, CoDel will start dropping packets. Dropped packets are, of course, a signal to the sender that it needs to slow down, so, by dropping them, CoDel should cause a reduction in the rate of incoming packets, allowing the queue to drain. If the queue time remains above target, CoDel will drop progressively more packets. And that should be all it takes to keep queue lengths at reasonable values on a CoDel-managed node.
The target and interval parameters may seem out of place in an algorithm that is advertised as having no knobs in need of tweaking. What the authors have found, though, is that a target of 5ms and an interval of 100ms work well in just about any setting. The use of time values (rather than packet or byte counts) makes the algorithm function independently of the speed of the links it is managing, so there is no real need to adjust them. Of course, as they note, these are early results based mostly on simulations; what is needed now is experience using a functioning implementation on the real Internet.
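The dequeue-side decision can be sketched in a few lines. What follows is an illustrative Python model of the logic described above, not the kernel implementation (which handles additional details such as re-entering the dropping state and empty queues); the 5ms/100ms defaults and the square-root control law come from the paper.

```python
# Illustrative model of CoDel's dequeue-time drop decision (not the kernel code).
import math

class CoDelModel:
    def __init__(self, target=0.005, interval=0.100):
        self.target = target            # acceptable standing queue delay (5 ms)
        self.interval = interval        # window in which delay must dip below target (100 ms)
        self.first_above_time = None    # when the sojourn time first stayed above target
        self.dropping = False           # are we currently in the dropping state?
        self.drop_next = 0.0            # time of the next scheduled drop
        self.count = 0                  # drops since entering the dropping state

    def should_drop(self, sojourn, now):
        """Decide whether to drop the packet just dequeued.

        sojourn: time (seconds) the packet spent in the queue
        now:     current time (seconds)
        """
        if sojourn < self.target:
            # The queue drained below target: all is well, leave the dropping state.
            self.first_above_time = None
            self.dropping = False
            return False
        if self.first_above_time is None:
            # Delay just rose above target; give it one interval to recover.
            self.first_above_time = now + self.interval
            return False
        if not self.dropping and now >= self.first_above_time:
            # A standing queue has persisted for a full interval: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now + self.interval
            return True
        if self.dropping and now >= self.drop_next:
            # Still too long: drop again, at a progressively higher rate.
            self.count += 1
            self.drop_next = now + self.interval / math.sqrt(self.count)
            return True
        return False
```

Each call passes the sojourn time of the packet just dequeued along with the current time; a True return means that packet should be dropped to signal the sender to slow down.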
That experience may not be long in coming, at least for some kinds of links; there is now a CoDel patch for Linux available thanks to Dave Täht and Eric Dumazet. This code is likely to find its way into the mainline fairly quickly; it will also be available in the CeroWrt router distribution. As the early CoDel implementation starts to see some real use, some shortcomings will doubtless be encountered and it may well lose some of its current simplicity. But it has every appearance of being an important component in the solution to the bufferbloat problem.
Of course, it's not the only component; the problem is more complex than that. There is still a need to look at buffer sizes throughout the stack; in many places, there is simply too much buffering in places where it can do no good. Wireless networking adds some interesting challenges of its own, with its quickly varying link speeds and complexities added by packet aggregation. There is also the little problem of getting updated software distributed across the net. So a full solution is still somewhat distant, but the understanding of the problem is clearly growing and some interesting approaches are beginning to appear.
Statistics from the 3.4 development cycle
With the release of the 3.4-rc6 prepatch, Linus let it be known that he thought the final 3.4 release was probably not too far away. That can only mean one thing: it's time to look at the statistics for this development cycle. 3.4 was an active cycle, with an interesting surprise or two.

As of this writing, Linus has merged just over 10,700 changes for 3.4; those changes were contributed by 1,259 developers. The total growth of the kernel source this time around is 215,000 lines. The developers most active in this cycle were:
Most active 3.4 developers
By changesets:

    Mark Brown             284   2.7%
    Russell King           211   2.0%
    Johannes Berg          147   1.4%
    Al Viro                136   1.3%
    Axel Lin               133   1.2%
    Johan Hedberg          122   1.1%
    Guenter Roeck          121   1.1%
    Masanari Iida          109   1.0%
    Stanislav Kinsbursky    97   0.9%
    Trond Myklebust         85   0.8%
    Jiri Slaby              82   0.8%
    Ben Hutchings           82   0.8%
    Greg Kroah-Hartman      78   0.7%
    Takashi Iwai            78   0.7%
    Dan Carpenter           78   0.7%
    Stephen Warren          76   0.7%
    Stanislaw Gruszka       76   0.7%
    Alex Deucher            73   0.7%
By changed lines:

    Joe Perches          56571   8.1%
    Dan Magenheimer      24077   3.4%
    Stephen Rothwell     17354   2.5%
    Greg Kroah-Hartman   15015   2.1%
    Mark Brown           12266   1.8%
    Jiri Olsa            11842   1.7%
    Mark A. Allyn        10976   1.6%
    Stephen Warren       10386   1.5%
    Arun Murthy           9347   1.3%
    Ingo Molnar           8779   1.3%
    Alex Deucher          8770   1.3%
    David Howells         8034   1.2%
    Guenter Roeck         7634   1.1%
    Chris Kelly           7023   1.0%
    Johannes Berg         6657   1.0%
    Ben Hutchings         6650   1.0%
    Al Viro               6628   0.9%
    Russell King          6610   0.9%
Mark Brown finds himself at the top of the list of changeset contributors for the second cycle in a row; as usual, he has done a great deal of work with sound drivers and related subsystems. Russell King is the chief ARM maintainer; he has also taken an active role in the refactoring and cleanup of the ARM architecture code. Johannes Berg continues to do a lot of work with the mac80211 layer and the iwlwifi driver, Al Viro has been improving the VFS API and fixing issues throughout the kernel, and Axel Lin has done a lot of cleanup work in the ALSA and regulator subsystems and beyond.
Joe Perches leads the "lines changed" column with coding-style fixes, pr_*() conversions, and related work. Dan Magenheimer added the "ramster" memory sharing mechanism to the staging tree. Linux-next maintainer Stephen Rothwell made it into the "lines changed" column with the removal of a lot of old PowerPC code. Greg Kroah-Hartman works all over the tree, but the bulk of his changed lines were to be found in the staging tree.
Some 195 companies contributed changes during the 3.4 development cycle. The top contributors this time around were:
Most active 3.4 employers
By changesets:

    (None)                      1156  10.8%
    Intel                       1138  10.6%
    Red Hat                      960   9.0%
    (Unknown)                    688   6.4%
    Texas Instruments            428   4.0%
    IBM                          381   3.6%
    Novell                       372   3.5%
    (Consultant)                 298   2.8%
    Wolfson Microelectronics     286   2.7%
    Samsung                      234   2.2%
                                 222   2.1%
    Oracle                       188   1.8%
    Freescale                    175   1.6%
    Qualcomm                     161   1.5%
    Linaro                       143   1.3%
    Broadcom                     140   1.3%
    NetApp                       133   1.2%
    MiTAC                        133   1.2%
    AMD                          132   1.2%
By lines changed:

    (None)                    108509  15.5%
    Intel                      67464   9.7%
    Red Hat                    65966   9.4%
    (Unknown)                  50900   7.3%
    IBM                        36800   5.3%
    Oracle                     26617   3.8%
    Texas Instruments          25687   3.7%
    Samsung                    24966   3.6%
    NVidia                     20604   2.9%
    Linux Foundation           16917   2.4%
    ST Ericsson                15792   2.3%
    Novell                     15185   2.2%
    Wolfson Microelectronics   14039   2.0%
    (Consultant)               13495   1.9%
    AMD                        10151   1.5%
    Freescale                  10102   1.4%
    Linaro                      9360   1.3%
                                9070   1.3%
    Qualcomm                    8972   1.3%
A longstanding invariant in the above table has been Red Hat as the top corporate contributor; in 3.4, however, Red Hat has been pushed down one position by Intel. Red Hat's contributions are down somewhat; 960 changesets in 3.4 compared to 1,290 in 3.3. But the more significant change is the burst of activity from Intel. This work is mostly centered around support for Intel's own hardware, as one would expect, but also extends to things like support for the x32 ABI. Meanwhile, Texas Instruments continues the growth in participation seen over the last few years, as do a number of other mobile and embedded companies. Once upon a time, it was said that Linux development was dominated by "big iron" enterprise-oriented companies; those companies have not gone away, but they are clearly not the only driving force behind Linux kernel development at this point. On the other hand, the participation by volunteers is at the lowest level seen in many cycles, continuing a longstanding trend.
A brief focus on ARM
Recent development cycles have seen a lot of work in the ARM subtree, and 3.4 is no exception; 1,100 changesets touched code in arch/arm this time around. Those changes were contributed by 178 developers representing 51 companies. Among those companies, the most active were:
Most active 3.4 employers (ARM subtree)
By changesets:

    (Consultant)         149  13.5%
    Texas Instruments    121  11.0%
    (None)               103   9.4%
    Samsung               91   8.3%
    Linaro                80   7.3%
    NVidia                54   4.9%
    ARM                   52   4.7%
    (Unknown)             48   4.4%
    Calxeda               46   4.2%
    Freescale             40   3.6%
    Atmel                 37   3.4%
    Atomide               30   2.7%
    OpenSource AB         24   2.2%
                          23   2.1%
    ST Ericsson           23   2.1%
By lines changed:

    Samsung              8162  16.8%
    (None)               5967  12.3%
    NVidia               4929  10.2%
    (Consultant)         4755   9.8%
    Linaro               3550   7.3%
    Texas Instruments    3118   6.4%
    ARM                  2659   5.5%
    Calxeda              2408   5.0%
    Atmel                2080   4.3%
    (Unknown)            1862   3.8%
    Vista-Silicon S.L.   1121   2.3%
    Freescale            1117   2.3%
    Atomide              1005   2.1%
                          737   1.5%
    PHILOSYS Software     659   1.4%
ARM is clearly an active area for consultants, who contributed over 13% of the changes this time around. Otherwise, there are few surprises to be seen in this area; the companies working in the mobile area are the biggest contributors to the ARM tree, while those focused on other types of systems have little presence here.
There is one other way to look at ARM development. Much of the work on ARM is done through the Linaro consortium. Many developers contributing code from a linaro.com address are "on loan" from other companies; the above table, to the extent possible, credits those changes to the "real" employer that paid for the work. If, instead, all changes from a Linaro address are credited to Linaro, the results change: Linaro, with 11.9% of all the changes in arch/arm, becomes the top employer, though it still accounts for fewer changes than independent consultants do. Linaro clearly has become an important part of the ARM development community.
In summary, it has been another busy and productive development cycle in the kernel community. Despite the usual hiccups, things are stabilizing and chances are good that 3.4-rc7 will be the last prepatch, meaning that this cycle will be a relatively short one. There is little rest for kernel developers, though; the 3.5 cycle with its frantic merge window will start shortly thereafter. Stay tuned to LWN, as always, for ongoing coverage of development in this large and energetic community.
Supporting multi-platform ARM kernels
The diversity of the ARM architecture is one of its great strengths: manufacturers have been able to create a wide range of interesting system-on-chip devices around the common ARM processor core. But this diversity, combined with a general lack of hardware discoverability, makes ARM systems hard to support in the kernel. As things stand now, a special kernel must be built for any specific ARM system. With most other architectures, it is possible to support most or all systems with a single binary kernel (or maybe two for 32-bit and 64-bit configurations). In the ARM realm, there is no single binary kernel that can run everywhere. Work is being done to improve that situation, but some interesting decisions will have to be made on the way.

On an x86 system, the kernel is, for the most part, able to boot and ask the hardware to describe itself; kernels can thus configure themselves for the specific system on which they are run. In the ARM world, the hardware usually has no such capability, so the kernel must be told which devices are present and where they can be found. Traditionally, this configuration has been done in "board files," which have a number of tasks:
- Define any system-specific functions and setup code.
- Create a description of the available peripherals, usually through the definition of a number of platform devices.
- Create a special machine description structure that includes a magic number defined for that particular system. That number must be passed to the kernel by the bootloader; the kernel uses it to find the machine description for the specific system being booted.
There are currently hundreds of board files in the ARM architecture subtree, and some unknown number of them shipped in devices but never contributed upstream. Within a given platform type (a specific system-on-chip line from a vendor), it is often possible to build multiple board files into a single kernel, with the actual machine type being specified at boot time. But combining board files across platform types is not generally possible.
One of the main goals of the current flurry of work in the ARM subtree is to make multi-platform kernels possible. An important step in that direction is the elimination of board files as much as possible; they are being replaced with device trees. In the end, a board file is largely a static data structure describing the topology of the system; that data structure can just as easily be put into a text file passed into the kernel by the boot loader. By moving the hardware configuration information out of the kernel itself, the ARM developers make the kernel more easily applicable to a wider variety of hardware. There are a lot of other things to be done before we have true multi-platform support—work toward properly abstracting interrupts and clocks continues, for example—but device tree support is an important piece of the puzzle.
Arnd Bergmann recently posed a question to the kernel development community: does it make sense to support legacy board files in multi-platform kernels? Or would it be better to limit support to systems that use device trees for hardware enumeration? Arnd was pretty clear on what his own position was:
There was a surprising amount of opposition to this idea. Some developers seemed to interpret Arnd's message as a call to drop support for systems that lack device tree support, but that is not the point at all. Current single-platform builds will continue to work as they always have; nobody is trying to take that away. The point, instead, is to make life easier for developers trying to make multi-platform builds work; multi-platform ARM kernels have never worked in the past, so excluding some systems will not deprive their users of anything they already had.
Some others saw it as an arbitrary restriction without any real technical basis. There is nothing standing in the way of including non-device-tree systems in a multi-platform kernel except the extra complexity and bloat that they bring. But complexity and bloat are technical problems, especially when the problem being solved is difficult enough as it is. It was also pointed out that there are some older platforms that have not seen any real maintenance in recent times, but which are still useful for users.
In the end, it will come down to what the users of multi-platform ARM kernels want. It was not immediately clear to everybody that there are users for such kernels: ARM kernels are usually targeted to specific devices, so adding support for other systems gives no benefit at all. Thus, embedded systems manufacturers are likely to be uninterested in multi-platform support. Distributors are another story, though; they would like to support a wide range of systems without having to build large numbers of kernels. As Debian developer Wookey put it:
In response, Arnd amended his proposal to allow board files for subarchitectures that don't look likely to support device trees anytime soon. At that point, the discussion wound down without any sort of formal conclusion. The topic will likely be discussed at the upcoming Linaro Connect event and, probably, afterward as well. There are a number of other issues to be dealt with before multi-platform ARM kernels are a reality; that gives some time for this particular decision to be considered with all the relevant needs in mind.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Networking
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Who should maintain Python for Debian?
A two-year old Debian technical committee "bug" highlights some interesting aspects of Debian governance. The problem comes down to technical and personal disagreements about Python maintenance for the distribution. That bug has remained open since March 2010, though it may soon be resolved—based on the history, even saying that may be premature. That history raises a larger question, however: how should a project handle a situation where developers and maintainers of cooperating packages can't seem to get along—or even communicate?
March 2010
More than two years ago, Sandro Tosi noted some problems in the maintenance of the Python interpreter package. In that report, he pointed out that Python 2.6 for Debian was delayed for 14 months after it was available upstream, though the maintainer (Matthias Klose) had released two Python 2.6 packages for Ubuntu in the interim. In addition, once a Debian version was uploaded to unstable, it contained changes to the location of installed modules that broke various packaging tools and packages (mostly Python modules). That transition came with no warning, Tosi said, which is symptomatic of another problem: Klose is not communicating with the rest of the Debian Python community.
Because of those problems, Tosi asked the committee to make a decision about who should maintain the interpreter packages going forward. He suggested that a new maintenance team be appointed for the Python interpreter and python-defaults packages. That message to the committee was signed by Tosi and three others (Luca Falavigna, Josselin Mouette, and Bernd Zeimetz), all of whom were proposed as the new maintainers. Others "willing to help, including of course the current maintainer" were also to be included.
The discussion continued for several weeks; committee member Bdale Garbee did some investigation into the problems and concluded that a better Debian Python policy and plan was needed before any kind of decision could be made, while others discussed ways to add co-maintainers. The problems clearly go back further than the bug report, perhaps as far as a DebConf 6 Python packaging meeting that evidently went awry—probably even further back than that.
Beyond the technical complaints, one of the major problems that is mentioned frequently by the (self) proposed new maintainers group is a lack of communication from Klose. Coordination with the module and Python application maintainers has been essentially non-existent, they said. Certainly the bug report itself is one example of that; in a long thread over two years, there is not one message from Klose. In addition, a look at the debian-python mailing list shows only a handful of messages from him in that time frame.
Klose maintains some "key packages (bash, binutils, gcc, java, python, and several others)", according to Tosi. That may leave him stretched a little thin. It may also be that he prefers other forms of communication (IRC is mentioned frequently). There are also hints in the thread that Klose may no longer be talking to those in the "new maintainer" camp due to longstanding "bad blood" stemming from both technical and personality conflicts.
Whatever the reasons, there is some kind of fragmentation going on in the Debian Python community. Part of it seems to be caused by Ubuntu-Debian conflicts, but the bulk of it stems from Klose's maintainership, which, at least in the eyes of some, is characterized by a "my way or the highway" attitude. The technical committee was fairly obviously leery of stepping into the middle of that mess and just making a decision. The committee members discussing it seem to have reached consensus that there are problems in the community, but none of the proposed solutions look like they will clearly make things better.
November 2010
The initial discussion petered out in July 2010. In November 2010, Debian Project Leader (DPL) Stefano Zacchiroli noted that he was frequently asked about the issue. Things had gotten better, he said, and discussions on transition strategies were taking place on the mailing list, which was a step in the right direction. He noted that while Klose was not always participating in those discussions, "it is also clear that he follows them and seems to agree with where they are going". But, that said, he still sees a problem:
Additionally, as DPL, I'm worried by seeing packages as important as the Python interpreters maintained by a single person. Even if all other surrounding issues were not there, that would be a bus-factor problem worth fixing by itself. (I concede there are other similar situations in the archive, but this is no excuse; they are just other problems to be solved.)
He concluded by saying that he didn't envy the committee for the decision it had to make, but was clearly encouraging a resolution to the problem. After there was no response for nearly two months, another ping from Zacchiroli in December was mostly met with silence.
March 2011
That led Zacchiroli to make another proposal in March 2011. While making it clear that he was not trying to step on the committee's toes, he proposed that it defer the decision to him. The proposal looks like something of a last-gasp attempt to help the committee make a decision of some kind.
That elicited some response, though no one really felt that it was right to delegate the decision to the DPL. Ian Jackson expressed disappointment in the lack of a decision and suggested that the packages in question be orphaned, while requesting that interested teams apply to become the maintainers. Steve Langasek was opposed to that, and suggested that the committee re-affirm Klose as maintainer with encouragement to take on co-maintainers.
On the other hand, Russ Allbery thought that finding a team to maintain the interpreter packages, one that included Klose, would be the ideal solution. But, like the others, he was not really in favor of delegating to the DPL. And that's pretty much where this iteration of the conversation dropped.
March 2012
Tosi pinged the bug again in November, then in March 2012 ("2-years-old ping"). The latter is what prompted the most recent re-kindling of the discussion. The participants in this round seem resigned to taking a vote, with some discussion on what the options should be. Zacchiroli volunteered to try to firm up the possible alternative teams for Python maintenance and, to that end, posted a message to debian-python asking for interested parties to speak up.
Several people spoke up to volunteer, along with some who were opposed to replacing Klose. That led to a message from Zacchiroli summarizing the discussion and outlining the teams that were available to be placed on the tech committee's ballot. He followed that up with a bit of a poke on April 27: "I hope this could help and that the tech-ctte have now all the input needed to quickly come to a conclusion on this issue, one way or another." A bit of dialogue on the makeup of the three possible "teams" ensued, but the discussion pretty much ended there. In his DPL report, Zacchiroli mentioned his recent involvement and concluded: "I hope the tech-ctte now have all the information needed to come to a decision".
May 2012 (and beyond?)
It is a rather strange situation overall. It seems clear that the committee is not completely comfortable affirming Klose as the sole maintainer, and he has not commented as to whether he would be willing to co-maintain the interpreter packages with others. But an "overthrow" of Klose is not very palatable either. By waiting, presumably hoping that things would correct themselves on their own, the committee has put itself into an awkward position.
Had it re-affirmed Klose two years ago (or one year ago, or ...), the problem might in fact have solved itself. Perhaps the unhappy petitioners would have "taken their marbles and gone home", but, by now, one would guess any package maintainership holes would have been filled. If it gives Klose a vote of confidence now, after a two-year consideration phase, there are likely to be questions about why it was left to linger so long. Meanwhile, deposing Klose now will raise more or less the same questions. As is typical in a Debian ballot, however, all of the proposals so far also include the "further discussion" option, so the committee could conceivably kick the can further down the road.
It's clear that Zacchiroli and others would rather not see that. The powers of the DPL are famously limited by the Debian Constitution, but Zacchiroli has done everything in his power to try to get some kind of closure on the issue. It is up to the technical committee to pull together a final ballot and put it to a vote; it seems likely that almost any decision (other than "further discussion" perhaps) would be better than none at this point. Or maybe the conversation will just die until the "three-year ping" comes along.
Brief items
Distribution quotes of the week
Distribution News
Debian GNU/Linux
(overlapping) bits from the DPL: April 2012
Debian Project Leader Stefano Zacchiroli has a few bits from April that span the end of his previous term and the beginning of his current term of office. The bits start off with a call for DPL helpers. Other topics include Debian's proposed diversity statement, revenue sharing with DuckDuckGo, the conflict on Python maintenance, multimedia packaging, hardware replacement, and more.
Fedora
Appointment to the Fedora Board, and Elections Reminder
Garrett Holmstrom has been appointed to the Fedora Board. "In this election cycle, three Board seats are open for election, and two Board appointee seats are open; the first appointee, Garrett, is being appointed prior to nominations opening, and the second will be appointed after elections are completed." Nominations for open seats on the advisory board, FESCo (Fedora Engineering and Steering Committee), and FAmSCo (Fedora Ambassadors Steering Committee) close on May 15.
The Future of Fedora release names
Fedora members were recently polled on whether Fedora releases should have code names. The consensus is that many people like having release names, but the method for choosing the names should be improved. The board is seeking volunteers to help come up with a new method.
Mandriva Linux
"Dear Community – II" from Mandriva
The Mandriva blog has another letter to the community with very little in the way of actual useful information. "The Mandriva Linux project has the right to be given a space in which it may expand and the contributors and afficionados a place where they can express their talents. We are precisely working on this right now and during the next two weeks. We will announce the direction we intend to give to the project during the third week of May. It makes no doubt that it’ll be difficult to satisfy each an every expectation and wish, as they’re many of them and some are not compatible with the other, but we’ll try to achieve what can be useful and most promising for the community and, with it, the Mandriva Linux project."
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 455 (May 7)
- Maemo Weekly News (May 7)
- Ubuntu Weekly Newsletter, Issue 264 (May 6)
Dell announces prototype development laptop with Ubuntu (The H)
Dell has announced Project Sputnik, which is aimed at producing a laptop for developing mobile and cloud applications. The H takes a look. "The laptop is pre-installed with an image based on Ubuntu 12.04 LTS that has been optimised for Dell's XPS13 Ultrabook. The project has already solved problems with the brightness control and the WiFi hotkey and is now working on issues with the touchpad which currently only works as a single touch pointing device with no scrolling. The install also includes several development packages such as version control systems and automatic deployment tools. Plans for the future include the automatic fetching of setup profiles for other software packages from GitHub."
Page editor: Rebecca Sobol
Development
LGM: Inkscape quietly evolves into a development platform
There was no new release of Inkscape at Libre Graphics Meeting 2012 in Vienna, but the SVG vector editor still made a significant impact. One session showcased a new extension that enables drawing with variable-width strokes, but several others showed off independent applications that leverage Inkscape's extensibility to build other tools, including an interactive mock-up creator for UI design and a font editor.
Libre Graphics Meeting is the annual workshop and conference of the open source graphics community; there are presentations from developers and artists alike, as well as workshops and team meetings. 2012 was the event's seventh year. LGM mainstays include GIMP, Blender, Inkscape, Krita, and Scribus, among other projects, but the exact makeup varies each year due to the moving venue and the irregular release schedules kept by the various teams. Inkscape is nearing its next major release, but its presence was felt in other sessions this year.
Variable-width stroke
![[Powerstroke]](https://static.lwn.net/images/2012/lgm-powerstroke-sm.png)
The variable-width stroke feature is named Powerstroke, and was authored by presenter Johan Engelen. The implementation is based on Inkscape's Live Path Effects (LPE) technique, which allows the user to attach a non-destructive effect to any path object. These effects are functions that manipulate the path data itself — meaning its points and control points, not "stylistic" features like color or opacity. LPEs can deform paths, map text or images along a path, and perform many other tricks. LPEs produce a valid SVG path as output, so they are preserved in other SVG viewers, but the original data is also saved in an Inkscape-specific attribute, which makes them reversible.
Without Powerstroke, each path has a fixed stroke width along its entire length (which is the default in almost every vector editor). The new feature adds "width points" along the path; each records its location along the path and the stroke width at that point, and the effect interpolates the width smoothly between adjacent points. In the user interface, each width point is shown as a line perpendicular to the curve of the path, with "handles" that allow you to adjust the width directly on-canvas.
For the stroke width itself, there is very little else to it. The interpolation between widths is performed by the lib2geom library, and width control points have special purple handles to distinguish them from regular points. There is an "advanced" option which allows you to rearrange the order of the stroke-width points, which creates some zany effects, and an auxiliary LPE called "Clone original path" was created to enable filling Powerstroked shapes.
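As a rough illustration of the underlying idea — and emphatically not the lib2geom code, which offers smoother interpolation modes than this — a linear interpolation between width points might look like the following toy sketch:

```
/* Toy sketch of width-point interpolation; not Inkscape/lib2geom code. */
#include <stdio.h>

struct width_point {
	double t;	/* position along the path, 0.0 .. 1.0 */
	double width;	/* stroke width at that position */
};

/* Return the stroke width at position t, interpolating linearly
 * between width points sorted by t; beyond the ends, clamp. */
static double stroke_width_at(const struct width_point *pts, int n, double t)
{
	int i;

	if (t <= pts[0].t)
		return pts[0].width;
	if (t >= pts[n - 1].t)
		return pts[n - 1].width;

	for (i = 1; i < n; i++) {
		if (t <= pts[i].t) {
			double f = (t - pts[i - 1].t) /
				   (pts[i].t - pts[i - 1].t);
			return pts[i - 1].width +
			       f * (pts[i].width - pts[i - 1].width);
		}
	}
	return pts[n - 1].width;	/* not reached */
}

int main(void)
{
	/* Narrow at the ends, wide in the middle. */
	struct width_point pts[] = { {0.0, 1.0}, {0.5, 8.0}, {1.0, 1.0} };
	double t;

	for (t = 0.0; t <= 1.0; t += 0.25)
		printf("t=%.2f width=%.2f\n", t, stroke_width_at(pts, 3, t));
	return 0;
}
```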
What is more complicated is how to handle sharp corners. The SVG specification defines three possibilities: rounded, mitered (pointed), or beveled (flattened). Powerstroke adds two of its own: extrapolated and "spiro." The extrapolated corner is a variant on the miter, but it is designed to more smoothly follow the shape that a pen might take on paper. The spiro corner is more rounded, based on the Spiro curve type created by Raph Levien.
Engelen hinted at several improvements for Powerstroke in future releases. He would like to make Powerstroke output an option for Inkscape's calligraphy tool rather than a stand-alone LPE, as well as tackle asymmetric stroke widths. Calligraphy tool support might make Powerstroke usable with pressure-sensitive pen tablets, which artists would like. There are also pathological cases where the math currently breaks down, such as coupling extremely sharp corners with extremely large widths; fixing those is something mathematicians would like.
Mocking the user interface
Red Hat's Máirín Duffy and Emily Dirsh presented a session entitled "An awesome FLOSS design collaboration workflow," covering a range of projects developed to support the Fedora Design Team. Duffy explained that working as a user experience (UX) designer, she found the existing collaboration tools frustrating when compared to Git and other tools made for developers. Designers need to collaborate with each other and with developers, she said, but often had little choice beyond shared-folder synchronization and email attachments. The first product of her campaign to create better design tools was SparkleShare, a Git-backed storage service that functions like Dropbox, but with the full power of commit, forking, and revision history.
![[Máirín Duffy and Emily Dirsh]](https://static.lwn.net/images/2012/lgm-duffy-dirsh-sm.jpg)
SparkleShare helps developers share and iterate designs via flat files, but it does not help when creating interactive UX mock-ups. For that, Duffy said, most designers are stuck with unfavorable options like proprietary tools, Adobe Flash, and web services that may or may not be around in years to come. Her solution to this dilemma is Magic Mockup, a utility for creating clickable, interactive mock-ups with Inkscape. The idea grew out of Sozi, which makes animated presentations using SVG. Just as Sozi uses SVG's ability to embed JavaScript to transition between slide frames, Magic Mockup lets designers draw interactive buttons, dialogs, and other widgets that respond to mouse events. Clicks trigger a simple "change frames" action, which lets designers mimic application state-changes, user input, or animations. Duffy wrote the original implementation (in JavaScript), which Garrett LeSage then rewrote in CoffeeScript.
Still in development is a way for designers to share their Magic Mockup work with the public. Dirsh demonstrated her project, Glitter Gallery, which is built primarily for sharing and commenting on Magic Mockup SVG files, but supports other file types, too. Glitter Gallery is a Ruby application and is designed to run on Red Hat's OpenShift platform.
Typography
I presented another Inkscape-built utility in my talk about new open font development tools. The Inkscape typography extensions are a set of related extensions that let font designers use Inkscape as the glyph-drawing canvas. The workflow allows the designer to draw each glyph on a separate layer, keeping the entire font in a single file (both because SVG does not have the concept of "pages," and to make comparing glyphs simpler). The first extension sets up a blank glyph-creation document, with guides set for baseline, x-height, cap-height, ascenders, and descenders. The second is a "new layer" function, which creates a new layer named for whichever letter of the alphabet the user specifies. The third extension cycles through the layers and builds an SVG font file, mapping each layer to the appropriate encoding slot. The extensions can also open and edit existing SVG fonts.
SVG fonts are not nearly as prevalent as TrueType or PostScript fonts, but the extensions make for a good start. FontForge is the application of choice for open source font crafting, but it does not offer a particularly pleasant editing experience. Inkscape has better and more flexible tools, plus an easier-to-work-with canvas (for example, FontForge's glyph editor does not support on-canvas transformations). It is also less crash-prone than FontForge, and has better essential application features (such as a fully-functional Undo/Redo).
More with SVG
In addition to the Inkscape-specific talks, there were several sessions about SVG itself. Jeroen Dijkmeijer presented his iScriptDesign project, a web-based application that lets you construct CAD-like blueprints suitable for laser cutting or CNC milling. Dijkmeijer uses iScriptDesign to create and build wooden furniture, but it is suitable for any project made of flat parts that can be cut with a 2-D tool.
What makes iScriptDesign an improvement over bare SVG drawings is that it supports dynamic, adjustable measurements — for example, defining object X as half the width of object Y. Dijkmeijer has added support for JavaScript pre-processing directives, calling the result "JSVG". The list includes named-variable substitution, mathematical expressions that are evaluated when the image is rendered, and user input methods like text-entry boxes and adjustment sliders. He demonstrated a JSVG plan for a sofa table that incorporated adjustable measurements for height, width, and depth. On-page sliders allow the user to scale various dimensions of the design, and the application rearranges the resulting pieces to fit them onto the smallest total area, to minimize production cost.
Dijkmeijer explained that the application also took steps to transform complex paths in the image so that they were optimized for a laser-cutter's computer-controlled motion. For example, a shape might include reflected segments, but it needs to have all of its paths oriented in the same direction so that the cutting head can trace it with one continuous pass.
Chris Lilley from the World Wide Web Consortium (W3C)'s SVG working group was on hand at LGM as well, and provided feedback to several of the SVG-oriented talks. He also presented an update on the ongoing development of the SVG 2.0 specification, which will sport several enhancements of interest to artists. First, it will allow images to specify colors in more precise terms than the generic 8-bit RGB triples common in HTML. The initial plan was to use the Lab color space and specify a white point, but thanks to a Q&A exchange on that subject with Kai-Uwe Behrmann of Oyranos and Richard Hughes of colord, the standard may soon use the more abstract (but simpler) XYZ color space instead. It will also support attaching ICC color profiles to documents, and will use them for embedded raster images, both ensuring better color matching.
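To give a sense of what a more precise color representation involves, the standard conversion from an 8-bit sRGB triple to CIE XYZ (with a D65 white point) is sketched below; this is just the well-known sRGB math, not anything taken from the SVG 2.0 draft:

```
/* Standard sRGB (8-bit) to CIE XYZ (D65) conversion; shown only to
 * illustrate the kind of color math involved, not SVG 2.0 itself. */
#include <math.h>
#include <stdio.h>

/* Undo the sRGB transfer curve ("gamma"), yielding linear light. */
static double srgb_to_linear(double c)
{
	return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

static void srgb_to_xyz(int r8, int g8, int b8,
			double *x, double *y, double *z)
{
	double r = srgb_to_linear(r8 / 255.0);
	double g = srgb_to_linear(g8 / 255.0);
	double b = srgb_to_linear(b8 / 255.0);

	/* Standard sRGB-to-XYZ matrix for the D65 white point. */
	*x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
	*y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
	*z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}

int main(void)
{
	double x, y, z;

	srgb_to_xyz(255, 128, 0, &x, &y, &z);	/* an orange */
	printf("X=%.4f Y=%.4f Z=%.4f\n", x, y, z);
	return 0;
}
```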
I spoke to Lilley later about Dijkmeijer's JSVG effort, and he confirmed that the SVG Working Group is interested in eventually adding mathematical expressions, dynamic variables, and other such constructs to the specification, although they will probably not make 2.0. The Q&A exchange with the color management developers was not the only point in the week where the SVG specification took hints from the artists and developers at the event; Lilley asked questions of many of the speakers, and called their feedback to the process valuable — such as Magic Mockup's interest in having layers become part of the core format. Likewise, he was able to point some of the projects to helpful-if-not-well-known options that could simplify development.
The last few Inkscape releases have added more and more via the application's extensions mechanism, and they are increasingly specialized. For example, although she did not present an update on it this year, Susan Spencer's Sew Brilliant uses Inkscape extensions to assist textile-makers, implementing dynamic pattern-changing options akin to what iScriptDesign does with furniture designs. There may not be many projects that combine font development, UI mock-ups, and textile making, so it is impressive to see that Inkscape has evolved — under the radar — into a tool that so many people are using in such diverse tasks. Likewise, although at times standards bodies seem like remote and unapproachable entities, it is interesting to see a specification like SVG evolve in real-time as developers and artists give their feedback. That sort of frank back-and-forth between developers and end users is also one of the facets of LGM that makes it worth attending, whether your favorite application has a new release to unveil or not.
[Thanks to the Libre Graphics Meeting for assistance with travel to Vienna.]
Brief items
Quotes of the week
- Assume you have 2 pastures, with a gate between them, and a flock of sheep. Your flock is in one pasture, and you want to get them to the other pasture through the gate.
- Sheep are, to use the terms we have been using, 'unbreakable', and 'atomic'. (If you slice them into pieces to try to get them to fit through the gate, you will end up with non-functioning sheep on the other side). :-)
- If your gate is narrow, you will have to serialise your sheep, and have them pass through one at a time.
- If it is wider then parts of different sheep will pass through the gate intermingled from the perspective of a camera mounted on the gate. "Nose of sheep 1, nose of sheep 2, head of sheep 1, nose of sheep 3, head of sheep 3, body of sheep 3, tail of sheep 3, body of sheep 2, body of sheep 1, tail of sheep 2, tail of sheep 1" is what the camera might report as 'having gone passed', and we might conclude that sheep 3 is a small lamb, and that sheep 1 is its mother who slowed down going through the gate so that the lamb could keep up with her -- but all of this doesn't matter because, as long as you do not try to break them, the flock will function perfectly on the other side of the gate without any attention being paid to them by you.
Apache OpenOffice 3.4 released
The first release of Apache OpenOffice, 3.4, has been announced. The version numbering picks up from the last OpenOffice.org major release, which was 3.3. New features include improved ODF support, better pivot table support in Calc, native support for SVG, enhanced graphics, and more. "'With the donation of OpenOffice.org to the ASF, the Foundation, and especially the podling project, was given a daunting task: re-energize a community and transform OpenOffice from a codebase of unknown Intellectual Property heritage, to a vetted and Apache Licensed software suite,' said Jim Jagielski, ASF President and an Apache OpenOffice project mentor. 'The release of Apache OpenOffice 3.4 shows just how successful the project has been: pulling in developers from over 21 corporate affiliations, while avoiding undue influence which is the death-knell of true open source communities; building a solid and stable codebase, with significant improvement and enhancements over other variants; and, of course, creating a healthy, vibrant and diverse user and developer community.'"
GIMP 2.8 released
The long-awaited release of version 2.8 of the GIMP image editor is out. There are lots of new features, many of which were previewed in this article last November. See the release notes for lots of details.
Git hints: ORIG_HEAD and merging
For those looking for some advanced git tricks: this Google+ conversation has a lot to offer, especially with regard to difficult merges. Much of it comes from Linus himself: "You didn't know about ORIG_HEAD? That's literally a 'Day One' feature of git, exactly because it's so incredibly useful (especially to a maintainer). We had ORIG_HEAD back when you had to script your stuff manually and bang two rocks together to make git do anything at all."
nPth - The new GNU portable threads library
The "GNU nPth" library project, under development as part of GnuPG, has decloaked and made its first release available. "nPth is a non-preemptive threads implementation using an API very similar to the one known from GNU Pth. It has been designed as a replacement of GNU Pth for non-ancient operating systems. In contrast to GNU Pth is is based on the system's standard threads implementation. Thus nPth allows the use of libraries which are not compatible to GNU Pth." It is dual-licensed under LGPLv3 and GPLv2.
Open Build Service version 2.3 released
Version 2.3 of the Open Build Service is out. New features include a number of improvements around release maintenance, an improved web interface, better cross-build support, and issue tracking support.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (May 8)
- What's cooking in git.git (May 2)
- Perl Weekly (May 7)
- PostgreSQL Weekly News (May 6)
- Ruby Weekly (May 3)
- Tahoe-LAFS Weekly News (May 5)
Control Centre: The systemd Linux init system (The H)
The H has a four page article by Lennart Poettering, Kay Sievers and Thorsten Leemhuis on systemd. From the third page: "The unit files that are associated with systemd and the services are located in the /lib/systemd/system/ directory; if an identically named file exists in /etc/systemd/system/, systemd will ignore the one in the lib directory. This allows administrators to copy and customise a systemd unit file without running the risk that it could be overwritten during the next update – this can happen in SysVinit distributions if one of the init scripts stored in /etc/rc.d/init.d/ has been modified."
Hands-on: testing the GIMP 2.8 and its new single-window interface (ars technica)
Over at ars technica Ryan Paul takes the GNU Image Manipulation Program (GIMP) 2.8 release for a spin and looks at the future plans for the project, including a full transition to the Generic Graphics Library (GEGL) in the 2.10 release. "After the 2.10 release arrives, the next major version will be 3.0. According to the roadmap, the goal for 3.0 will be delivering support for Gtk+ 3, a major new version of the underlying widget toolkit that is used to build the GIMP’s interface. The other major feature item included in the roadmap for version 3.0 is pervasive high bit-depth support, a major feature that will be made possible by the GEGL transition."
Hermann: sigrok - cross-platform, open-source logic analyzer software with protocol decoder support
On his blog, Uwe Hermann writes about the free logic analyzer software that he and Bert Vermeulen have been working on. "I originally started working on an open-source logic analyzer software named "flosslogic" in 2010, because I grew tired of almost all devices having a proprietary and Windows-only software, often with limited features, limited input/output file formats, limited usability, limited protocol decoder support, and so on. Thus, the goal was to write a portable, GPL'd, software that can talk to many different logic analyzers via modules/plugins, supports many input/output formats, and many different protocol decoders. [...] The advantage being, that every time we add a new driver for another logic analyzer it automatically supports all the input/output formats we already have, you can use all the protocol decoders we already wrote, etc. It also works the other way around: If someone writes a new protocol decoder or file format driver, it can automatically be used with any of the supported logic analyzers out of the box." (Thanks to Paul Wise.)
Page editor: Jonathan Corbet
Announcements
Brief items
A new round in GNOME's outreach program for women
The GNOME project has announced a new round in its outreach program for women, with ten applicants accepted to work with the project. "Over three quarters of the women involved in the program have stayed connected to the GNOME community. Better still, Outreach Program for Women participants have a strong tradition of becoming mentors in GNOME."
The Document Foundation announces a Certification Program
The Document Foundation has announced a Certification Program, "to foster the provision of professional services around LibreOffice and help the growth of the ecosystem of the world's best free office suite."
Articles of interest
FSFE Newsletter - May 2012
The May edition of the Free Software Foundation Europe newsletter covers Document Freedom Day and the Day against DRM, Free Software and the French Presidential elections, vendor lock-in in Helsinki, the UK Open Standard consultation, and several other topics.
Google guilty of infringement in Oracle trial; future legal headaches loom (ars technica)
Ars technica reports on the confused verdict in the first phase of Oracle v. Google, where Google won most of the arguments but, maybe, was found to have infringed copyright via its use of the Java APIs. "But the jury couldn't reach agreement on a second issue—whether Google had a valid 'fair use' defense when it used the APIs. Google has asked for a mistrial based on the incomplete verdict, and that issue will be briefed later this week."
Fragmentation on the Linux Desktop (Is it Normal?) (Datamation)
In this two page article on Datamation, Bruce Byfield looks at the history and the current state of the Linux desktop. From the second page: "In studying this transformation of the Linux desktop, you can easily see possible turning points. What would have happened if the KDE 4.0 release had been delayed until it had more features? If Ubuntu had been more patient about its changes getting into GNOME? If GNOME 3 had been less radical, or user complaints addressed? If some or all of these events had occurred, then maybe GNOME and KDE would have remained as dominant as ever. However, I doubt it. More likely, other incidents would have caused a similar fragmentation sooner or later, no matter how anyone acted."
SAS v. WPL decision addresses boundaries of copyrights on software (opensource.com)
Over at opensource.com, Richard Fontana explains the recent European Court of Justice (ECJ, Europe's equivalent to the US Supreme Court) ruling on the copyrightability of software. It's not at all hard to see parallels in that ruling and the current copyright questions in the Oracle v. Google case (in fact the judge in that case has asked the parties to answer questions about the ruling). "With respect to manuals concerning programming or scripting languages, the court said that 'the keywords, syntax, commands and combinations of commands, options, defaults and iterations consist of words, figures or mathematical concepts' which are not copyrightable expression in themselves, even where they are contained in a larger work that is copyrightable. Copyrightable expression can arise only from 'the choice, sequence and combination of those words, figures or mathematical concepts'."
Hands On with Boot2Gecko (Wired)
Wired plays with a Boot2Gecko phone. "At this point, B2G’s user interface consists of a few home screens’ worth of apps, each of which can be launched by tapping a rectangular icon. The apps may be web-based, but launched blazingly fast because most were cached onto the phones. Thanks to the caching scheme, B2G phones will still work when a network signal is out of reach."
New Books
Programming in Go - Addison-Wesley Professional
Addison-Wesley Professional has released "Programming in Go" by Mark Summerfield.
Programming Clojure, 2nd Edition--New from Pragmatic Bookshelf
Pragmatic Bookshelf has released "Programming Clojure, 2nd Edition" by Stuart Halloway and Aaron Bedra.
Calls for Presentations
PyCon Ireland 2012
Python Ireland will take place October 13-14, 2012 in Dublin, Ireland. Early bird registration and the call for papers are open.
14th Real Time Linux Workshop - Call for Papers
The 14th Real Time Linux Workshop will take place in Chapel Hill, North Carolina, October 18-20, 2012. The call for papers is open until July 23. "Authors from regulatory bodies, academics, industry as well as the user-community are invited to submit original work dealing with general topics related to Open Source and Free Software based real-time systems research, experiments and case studies, as well as issues of integration of open-source real-time and embedded OS. A special focus will be on industrial case studies and safety related systems."
Upcoming Events
Formally announcing FUDCon: Paris and FUDCon: Lawrence.
Two upcoming FUDCons (Fedora Users and Developers Conference) have been announced. There will be a FUDCon in Paris, France October 13-15, 2012 and a FUDCon in Lawrence, Kansas January 18-20, 2013.
Events: May 10, 2012 to July 9, 2012
The following event listing is taken from the LWN.net Calendar.
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol