
LWN.net Weekly Edition for March 11, 2010

Who is Fedora for?

By Jonathan Corbet
March 10, 2010
Anybody who has gone near the Fedora mailing lists recently may have noticed that they have been a little...active. The discussions have reached the point where the "hall monitors" have intervened to shut down threads and many participants may have unsubscribed in favor of the relative calm and politeness of lists like linux-kernel. It's easy to dismiss it all as yet another Fedora flame war, but there are some serious issues at stake in the discussion. What it comes down to, it seems, is that the Fedora Project is still not entirely sure of who its users are or how to deliver what those users want.

Fedora is a rapidly-releasing distribution, with a new version coming out twice each year. Support limited to just over one year means that Fedora users must upgrade at least once annually or find themselves in a situation where security updates are no longer available. So one assumes that Fedora users are people who have a relatively high level of interest in running recent software, and who are not averse to updating that software with at least moderate frequency. But, it seems, there are limits.

Back in October, Fedora 11 users were surprised to discover that a routine update brought in a new version of Thunderbird with significantly changed behavior. In January, another Thunderbird update created trouble for a number of users. In March, some KDE users were surprised to discover that a "stable update" moved them to the 4.4.0 release, breaking things for some users. In all of these cases (and more), contentious email threads have ensued.

Fedora does indeed not hold back on the updates; a quick look in the LWN mailbox turns up over 600 package updates for the Fedora 11 release - in just the last month. This is a release which is scheduled for end-of-life in a few months. Many of these updates involve significant changes, and others have been deemed "worthless". Regardless of worth, there can be no doubt that all these updates represent a significant degree of churn in a distribution which is in the latter part of its short life. It is difficult to avoid breaking things when things are changing at that rate.

The parts of the discussion which were focused on constructive solutions were concerned with two overall topics: (1) what kind of stable updates are appropriate for a released Fedora distribution, and (2) how to minimize the number of regressions and other problems caused by whatever updates are considered appropriate.

With regard to the first question, it seems that some Fedora maintainers believe - probably with good reason - that their users want "adventurous updates," so it makes sense to them to push new versions of software into released distributions. Others describe their vision of Fedora as a "rolling update" distribution which naturally follows upstream releases. Still others wonder why Fedora bothers making releases at all if it is devoted to rolling updates; users who want adventure, they say, can find plenty of it in Rawhide.

Several proposals have been put onto the release lifecycle proposals wiki page, and others have been posted to the list. They vary from nearly frozen releases to ideas that make releases look like a moderately-slowed version of Rawhide. This decision is one of fundamental distribution policy; it must be faced, or Fedora will continue to have different maintainers doing very different things. Given that need, it's unfortunate that the project seems to be unable to discuss the topic on its mailing lists; currently, there is no clear means by which a consensus can be reached.

Part (2), above, dodges the issue of what updates should be made and just concerns itself with the quality of those updates. The discussion is partly motivated by the fact that the system which Fedora has in place for the review of proposed updates - Bodhi - is often circumvented by updates which go straight out to users. The testing and voting which is supposed to happen in Bodhi is, in fact, not happening much of the time, and the quality of the distribution is suffering as a result. So some Fedora developers are looking for ways to beef up the system.

Matthew Garrett posted a proposal for a new policy which would eliminate developers' ability to push package updates directly into the update stream. Instead, updates would have to sit in the Bodhi system until they receive a minimal +3 "karma" value there; the only exception would be for security updates. By disabling direct pushes, the policy aims to ensure that every package which gets into the updates stream has actually been tested by some users who were happy with the results.
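
To make the proposed rule concrete, the following sketch shows the gating logic being described; it is purely illustrative and is not code from Bodhi, and the structure and threshold simply restate the proposal.

    /* A schematic of the proposed gating rule, not Bodhi's actual code:
     * an update may move from updates-testing to the updates stream only
     * once it has accumulated +3 karma, unless it is a security update. */
    #include <stdbool.h>

    struct update {
        int  karma;        /* sum of +1/-1 votes from testers */
        bool is_security;  /* security updates are exempt */
    };

    static bool may_push_to_stable(const struct update *u)
    {
        if (u->is_security)
            return true;
        return u->karma >= 3;
    }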

Suffice it to say, this proposal was not received with universal acclaim. Some developers simply resent the imposition of extra bureaucracy into their workflow. Karel Zak's response is instructive:

Fedora strongly depends on well-motivated and non-frustrated maintainers and open source developers. We want to increment number of responsible maintainers who are able to use common sense. Our mission is to keep maintainers happy otherwise we will lost them and then we will lost users and our good position in Linux community. [...]

Always when I see that someone is trying to introduce a new rule I have to ask myself ... why so large project like kernel is able to successfully exist for 20 years without a huge collection of rules?

One might observe that the kernel has, in fact, accumulated a fairly substantial set of rules over the last ten years, often in response to discussions with a striking resemblance to those being held in the Fedora community. The merge window, signoff requirements, review requirements, no-regression policy, etc. are all aimed primarily at improving release quality. The kernel also has layers of developers with something close to veto power over potentially problematic changes - a form of dynamic rule-making that Fedora lacks.

So rules might make sense; that says nothing about any specific proposal, though. Many developers feel that very few users test packages in Bodhi, and that large numbers of updates would languish there indefinitely. As Tom Callaway noted, the obstacles to getting those karma votes are significant. So one alternative which has been suggested is that, after 14 days without negative karma, a package would be allowed to proceed to the update stream. Other proposals have included requirements for regression tests or separate (more stringent) requirements for "critical path" packages; see this page for the contents of all of the proposals.

A rather contentious FESCO meeting was held on this topic on March 9. The apparent conclusion was to ponder further on Bill Nottingham's proposal, which involves regression testing and a requirement for positive Bodhi karma for all "critical path" and "important" components; others could proceed after a week in the updates-testing repository. It looks like another meeting will be held in the near future; whether it will come to concrete conclusions remains to be seen.

The "concrete conclusions" part is probably more important than the specific policy adopted (within reason) at this point. Many large and successful projects go through the occasional period where they try to determine what their goals are and how those goals can best be met. Properly handled, these discussions can lead to a more focused and more successful project, even if much heat is generated in the process. A good outcome, though, requires that there be a way to end the discussion with a clear conclusion. Fedora has governance institutions which should be able to do that; until those institutions act, Fedora risks looking like a contentious organization lacking a clear idea of what it is trying to do.

Comments (53 posted)

SCALE 8x: Gnash, the free Flash player

By Jake Edge
March 10, 2010

Rob Savoye of Open Media Now! (OMN) gave an overview of the work OMN has been doing on making free versions of various Adobe products available for use on free platforms. He concentrated mostly on Gnash, the GNU Flash player, but also touched on the Cygnal media server, and the Ming ActionScript compiler. Gnash has been one of the Free Software Foundation's priority projects for several years, which has resulted in more developers as well as raising the profile of the project.

Flash is important because it is used for web site navigation, video web sites, and, perhaps most importantly to Savoye, for educational applications. There are an enormous number of educational programs written in Flash, and free software could not run them. Because he is a "fanatic" about freedom, Savoye never installed the Flash plugin, which made him and others like him into "second-class citizens on the internet".

There are a number of reasons to implement a free replacement for the Flash plugin, beyond just being able to view YouTube. The Adobe plugin is full of security problems and doesn't integrate well with Linux. There is no 64-bit support and it is essentially only available for the x86 architecture. In addition, some day archeologists may want to play Flash content and the Adobe plugin may have long faded away.

Though he likes working on Gnash, Savoye is no fan of Flash. In answer to a question at the end of his talk, he said: "I hope Flash falls over dead", and that is something he is starting to see happen. In the meantime, though, he recommends that web sites use HTML 5 rather than Flash. He also suggested encouraging sites that do use Flash to at least test against Gnash so that the site will work for those on other platforms.

Some history

Gnash was started in 2004 because Savoye wanted a user interface for his stereo system. That was Gnash's first platform and he still runs his stereo that way today. In 2005, John Gilmore asked him to turn it into a browser plugin, and he delivered plugins for Firefox and Konqueror in 2006. YouTube support came in 2007.

The development community that formed after the FSF high-priority rating decided that it "would be really nice to have funding", so they started OMN, which has been funded by Bob Young, Mark Shuttleworth, John Gilmore, and others. They have continued to reverse-engineer the Adobe formats and protocols, while also getting Gnash running on "all sorts of weird hardware".

Weird devices

He put up a slide with a picture of Gnash running on various embedded devices: OLPC, Sharp Zaurus, Pepper Pad, Classmate PC, OpenMoko, Playstation 3, etc. He noted that Adobe Flash didn't run on any of them. The OLPC isn't able to redistribute the Adobe Flash player, so they turned to Gnash. As part of getting better Gnash performance on the OLPC, Savoye wrote some GCC and Glibc optimizations for the Geode processor.

Gnash is a clean-room re-implementation of Flash, which means that none of its developers have ever used Adobe's Flash. The EULA that comes with the Flash plugin restricts users from being able to create a competing Flash implementation. So, all of the development was done using publicly available documentation, which is important because, had it not been done legally, distributions would not include it. Though they were worried about legal action in the first few years, Adobe recently announced that Gnash is a "legal re-implementation" of Flash.

Gnash features

Gnash can be run either standalone or as a browser plugin for Firefox and Konqueror, with Safari support coming soon. It got OpenGL support for desktop graphics rendering before Adobe did, and has added Anti-Grain Geometry (AGG) support for embedded framebuffer-only devices.

One of the areas that Gnash has concentrated on is security. Adobe's Flash is "really insecure", Savoye said, and if you use a banking site with a Flash interface, you should "be worried". It also has better privacy protection because, by default, Gnash deletes all Flash cookies whenever it exits.

Gnash allows users to extend ActionScript with their own code, or by writing a wrapper around an existing development library. It also supports patent-free codecs, he said, in addition to the standard proprietary ones.

Compatibility, portability, and performance

Gnash can read SWF ("Shockwave Flash") version 9 and earlier files, but primarily supports SWF version 8. Version 9 is under active development, though. Roughly 80% of the ActionScript v2.0 library has been implemented, and the rest of it he has "never seen in the real world". SWF version 10 support is underway as well, but "it's pretty nasty". The ActionScript v3.0 library can reuse many of the v2.0 classes, but version 10 requires support for all of the previous versions, each running in different virtual machines, so there is a lot of work to be done.

Savoye said that they rarely port Gnash any more, as it is just a matter of reconfiguring and recompiling it for new hardware. It will run on any system that is POSIX conforming and has ANSI C++ support. It also supports some non-POSIX environments; he noted that he had never heard of Haiku, a BeOS clone, until he saw Gnash running on it down on the SCALE Expo floor. Gnash supports many different architectures, with big- or little-endian processors, 32- or 64-bit. It also supports many different GUIs and desktop environments, as well as several back-end renderers (AGG, OpenGL, and Cairo).

For performance, Gnash can use the X11 Xvideo extension for high-resolution full-screen video. Xvideo also reduces the memory footprint. Support is also being added for hardware video decoding on Intel, ATI, and NVIDIA hardware using libvaapi.

It is written in C++ and uses the Boost libraries. Gnash uses either Gstreamer or FFmpeg for media handling. He noted that most distributions use Gstreamer to avoid the codec issues, but that FFmpeg is much faster and Gstreamer can use that as well. For HTTP and HTTPS, Gnash uses libcurl. It supports either GNOME or KDE desktops, or no desktop at all, he said.

Much like Perl or Python, Gnash can wrap any development library so that it can be used from ActionScript. Currently, there are extensions available for things like direct filesystem access, MySQL, GTK2, D-Bus, and so on. The extensions are added directly into ActionScript and can be accessed just like any other ActionScript class.

Current focus

The Gnash team is currently concentrating on supporting SWF 9 and 10, as well as ActionScript 3. "Chasing Adobe" is what they will be doing "for the rest of our lives, at least in this project", Savoye said. There is also ongoing work on the RTMP protocols for Gnash and Cygnal, getting better performance from low-end hardware, and better support for hardware acceleration. They are also working on Flash-based video conferencing so that there will be free solutions in that area.

There is also a lot of work going into Cygnal because there isn't a good rich media server in the free software world. Various groupware and video conferencing applications are written in Flash, but they need server-side support. By implementing a free media server, they can concentrate on better security and privacy than Adobe or another proprietary company is likely to provide, he said.

How to help

Savoye was not shy about suggesting "free beer" as one of the best aids for helping Gnash development, but there are others as well. "Good bug reports" are crucial. The usual suspects for a free software project: translations, documentation, web site development and maintenance, build farm help, and so forth, are areas where people could help out. Also, donations are always appreciated, he said.

While it is sometimes galling to think that the "open web" requires some way to play Flash content, it is an unfortunate reality today. In six years or so, Gnash has come a long way towards replacing Adobe's closed plugin on x86 desktops, and is the only solution for many other devices and architectures. When one considers that Savoye and the rest of the Gnash team have never actually installed Flash for themselves, that feat is even more amazing. If the adoption of the newer versions of Flash can be slowed—or stopped—there is even hope that Gnash can catch up and we can get rid of one more non-free blob on our desktops.

Comments (14 posted)

Open source and the Morevna project

March 10, 2010

This article was contributed by Nathan Willis

Konstantin Dmitriev's Morevna Project is to 2-D animation what the Blender Foundation's Open movie projects have been for 3-D. The goal is to produce a production-quality, full-length animated feature, using only open source software, and license the source content and final product under free, re-use-friendly terms. Along the way, the work provides stress-testing, feedback, and development help to the open source software used, while raising awareness of the quality of the code.

[Synfig animate]

Despite the popularity of 3-D animated features churned out by Pixar and its competitors, 2-D animation is not a has-been style — particularly when you consider the wildly popular world of anime. Dmitriev is an anime fan as well as an animator and open source contributor, and in mid-2008 decided to combine his interests in one project. The first product was a brief short created entirely with the Synfig Studio animation package.

Synfig Studio is an animation suite built for 2-D production. Like Blender, it was originally written in-house at a private animation studio as closed source software, but was later opened. Unlike traditional cell-based animation, in which each frame is individually drawn, Synfig uses vector graphics as its underlying elements. The animator needs only to draw key frames, and the software smoothly interpolates between them to create motion.
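
As an illustration of the idea only (this is not Synfig's code), interpolating a value between two key frames can be as simple as the sketch below; Synfig offers smoother interpolation modes than the linear one shown here.

    /* Conceptual key-frame interpolation: the animator sets a value at
     * two key frames and the software fills in the frames in between. */
    #include <stdio.h>

    static double interpolate(double v0, double v1, double t)
    {
        return v0 + (v1 - v0) * t;   /* t runs from 0.0 to 1.0 */
    }

    int main(void)
    {
        double x0 = 10.0, x1 = 90.0;   /* a position at two key frames */
        int frames = 5;

        for (int f = 0; f <= frames; f++)
            printf("frame %d: x = %.1f\n", f,
                   interpolate(x0, x1, (double)f / frames));
        return 0;
    }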

Production and workflow

Dmitriev is active in the Synfig Studio project and, since announcing the Morevna Project, has gathered a small team of like-minded contributors and artists. Their process reflects that of a traditional animated movie team: it starts with an idea, followed by a screenplay, storyboard, character designs and other creative work, well before animation itself gets underway. For its story, the project decided on the Russian folk tale "Marya Morevna" — but re-imagined in a futuristic setting befitting the anime style.

[Ivan design]

The script (in English and in Russian), character and production designs are all publicly available on the project's wiki — so don't look if you wish to avoid spoilers. The first portion of the screenplay has already been storyboarded and broken down into shots, and as the team completes work it has been posting demo videos to YouTube — completed animations and "animatics" — the wireframe, in-progress animations that bridge the gap between static storyboard and finished product.

For the actual production pipeline, Dmitriev and the other artists use a variety of open source tools. Animatics are made in Pencil, a cell-oriented "flipbook"-style animation tool. Rough sketches as well as backgrounds and other static imagery are produced in the raster editors Krita and Gimp. Vector-based character designs are drawn in Inkscape, while 3-D models for buildings, machines, and other non-character entities are produced in Blender. All of the content is stored in a Git repository, to allow the remote team members to coordinate their work.

[Morevna key frame]

When ready, the various layers of artwork are combined and converted into key frames in Synfig, which is used to render the animation. Further compositing (such as special effects) is done in Blender for final output. The final movie will be rendered in 16:9 1080p resolution.

Openness

As with Blender's open movies, part of the Morevna Project's goal is to improve Synfig and open source animation in general. The team documents progress on the movie on its blog, and has posted several entries about new Synfig features spawned along the way. For example, using Blender's IPO (interpolation) drivers gives the animator more fine-grained control over timing; the Morevna Project uses the technique to suddenly send a scene into slow-motion — an effect often seen in action movies these days, but one that was not available in Synfig Studio until the project implemented it. The project also created a widget allowing the animator to manipulate the "camera view" in Synfig Studio, adding easy-to-use pan and zoom functions.

The project is licensing its artwork under Creative Commons' Attribution license, so that it can be freely reused. The plan is to do the same for the final product and sounds, although licensing for some of the music may dictate different terms. It has also released a character-animation template called Stickman under the no-copyright CC Zero license. Stickman is used by Morevna Project artists to produce animatics with Synfig Studio.

The members of the Morevna Project are taking the openness of the content itself to a new level. Not only is the entire screenplay available online, but the wiki also captures the evolution of character and scene designs, including variants and experiments that will never appear in the finished film, not just the final versions. If you think 2-D animation is somehow simple, take a look at the Battlefield concept art page to see how much work goes into creating a scene.

Still to come

The Morevna Project is still a long way from its final product, does not have corporate or grant funding, and the team is only six people strong — but it is attracting a great deal of attention. Dmitriev writes on the project blog that the anime and open source communities seem to have a great deal of overlap, and the Ubuntu Massachusetts LoCo is planning to promote Morevna at an upcoming Boston anime convention. Anyone who is interested in joining the creative team should read the Contributor's Guide on the project wiki.

In fact, anyone with an open source project that could use more contributors would do well to read the Morevna Contributor's Guide, because it is a remarkably complete, thorough, and well-written introduction to the project and how to get started joining it. That bodes well for its future success.

Regardless of whether you are an anime addict or not, Morevna — like Blender's open movies — is a project everyone in the open source community should support. Large-scale creative projects like these do something that many other niches in open source cannot — they bring awareness of open source to the general public. Few people care what operating system runs on their mobile phones. The fact that an enterprise's ERP system and web infrastructure runs Linux, MySQL, and other open source components is nebulous at best to people who work outside the IT industry. A high-quality animated movie, on the other hand, anyone can see and appreciate.

Comments (4 posted)

Mozilla to update the MPL

The Mozilla Foundation has launched a process to update the Mozilla Public License. The project is described this way:

We've been using version 1.1 of the Mozilla Public License for about a decade now. Its spirit has served us well, helping to communicate some of the values that underpin our large and growing community. However, some of its wording may be showing its age. Keeping both those things in mind, we're launching this process to update the license, hoping to modernize and simplify it while still keeping the things that have made the license and the Mozilla project such a success.

While the update process is inspired by the GPLv3 update, the objectives are far less ambitious: Mozilla would like to smooth various rough edges without making major changes to the license. They hope to have the process complete - after releasing three drafts for comments - by November of this year.

Comments (16 posted)

Page editor: Jonathan Corbet

Security

SCALE 8x: Ten million and one penguins

By Jake Edge
March 10, 2010

At SCALE 8x, Ronald Minnich gave a presentation about the difficulties in trying to run millions of Linux kernels for simulating botnets. The idea is to be able to run a botnet "at scale" to try to determine how it behaves. But, even with all of the compute power available to researchers at the US Department of Energy's Sandia National Laboratories—where Minnich works—there are still various stumbling blocks to be overcome.

While the number of systems participating in botnets is open to argument, he said, current estimates are that there are ten million systems compromised in the US alone. He listed the current sizes of various botnets, based on a Network World article, noting that "depending on who you talk to, these numbers are either low by an order of magnitude or high by an order of magnitude". He also said that it is no longer reported when thousands of systems are added into a botnet, instead the reports are of thousands of organizations whose systems have been compromised. "Life on the Internet has started to really suck."

Botnet implementations

Botnets are built on peer-to-peer (P2P) technology that largely came from file-sharing applications—often for music and movies—which were shut down by the RIAA. This made the Overnet, which was an ostensibly legal P2P network, into an illegal network, but, as he pointed out, that didn't make it disappear. In fact, those protocols and algorithms are still being used: "being illegal didn't stop a damn thing". For details, Minnich recommended the Wikipedia articles on subjects like the Overnet, eDonkey2000, and Kademlia distributed hash table.

P2P applications implemented Kademlia to identify other nodes in a network overlaid on the Internet, i.e. an overnet. Information could be stored and retrieved from the nodes participating in the P2P network. That information could be movies or songs, but it could also be executable programs or scripts. It's a "resilient distributed store". He also pointed out that computer scientists have been trying to build large, resilient distributed systems for decades, but they had little or nothing to do with the currently working example; in fact, it is apparently being maintained these days by money from organized crime syndicates.
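
For the curious, the heart of Kademlia is easy to illustrate: node and key identifiers live in the same address space, and the "distance" between two identifiers is simply their exclusive OR, so any node can tell which of its known peers is closest to a given key. The sketch below is a conceptual illustration only, using 64-bit identifiers rather than Kademlia's 160-bit ones.

    #include <stdint.h>
    #include <stdio.h>

    /* Kademlia's distance metric: smaller XOR means "closer". */
    static uint64_t kad_distance(uint64_t a, uint64_t b)
    {
        return a ^ b;
    }

    int main(void)
    {
        uint64_t key = 0x1234abcdULL;
        uint64_t peers[] = { 0x1234ab00ULL, 0x9999ffffULL, 0x12340000ULL };
        uint64_t best = peers[0];

        /* Pick the peer whose ID is closest to the key. */
        for (int i = 1; i < 3; i++)
            if (kad_distance(peers[i], key) < kad_distance(best, key))
                best = peers[i];
        printf("closest peer to %llx is %llx\n",
               (unsigned long long)key, (unsigned long long)best);
        return 0;
    }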

Because the RIAA shut down the legal uses of these protocols, they are difficult to study: "The good guys can't use it, but it's all there for the bad guys." And the bad guys are using it, though it is difficult to get accurate numbers, as he mentioned earlier. The software itself is written to try to hide its presence, so that it only replies to some probes.

Studying botnets with supercomputers

In the summer of 2008, when Estonia "went down, more or less" and had to shut down its Internet because of an attack, Minnich and his colleagues started thinking about how to model these kinds of attacks. He likened the view of an attack to the view a homeowner might get of a forest fire: "my house is on fire, but what about the other side of town?". Basically, there is always a limited view of what is being affected by a botnet—you may be able to see local effects, but the effects on other people or organizations aren't really known: "we really can't get a picture of what's going on".

So, they started thinking about various supercomputer systems they have access to: "Jaguar" at Oak Ridge which has 180,000 cores in 30,000 nodes, "Thunderbird" at Sandia with 20,000 cores and 5,000 nodes, and "a lot of little 10,000 core systems out there". All of them run Linux, so they started to think about running "the real thing"—a botnet with ten million systems. By using these supercomputers and virtualization, they believe they could actually run a botnet.

Objections

Minnich noted that there have been two main objections to this idea. The first is that the original botnet authors didn't need a supercomputer, so why should one be needed to study them? He said that much of the research for the Storm botnet was done by academics (Kademlia) and by the companies that built the Overnet. "When they went to scale up, they just went to the Internet". Before the RIAA takedown, the network was run legally on the Internet, and after that "it was done by deception".

The Internet is known to have "at least dozens of nodes", really, "dozens of millions of nodes", and the Internet was the supercomputer that was used to develop these botnets, he said. Sandia can't really use the Internet that way for its research, so they will use their in-house supercomputers instead.

The second objection is that "you just can't simulate it". But Minnich pointed out that every system suffers from the same problem—people don't believe it can be simulated—yet simulation is used very successfully. They believe that they can simulate a botnet this way, and "until we try, we really won't know". In addition, researchers of the Storm botnet called virtualization the "holy grail" that allowed them to learn a lot about the botnet.

Why ten million?

There are multiple attacks that we cannot visualize on a large scale, including denial of service, exfiltration of data, botnets, and virus transmission, because we are "looking at one tiny corner of the elephant and trying to figure out what the elephant looks like", he said. Predicting this kind of behavior can't be done by running 1000 or so nodes, so a more detailed simulation is required. Botnets exhibit "emergent behavior", and pulling them apart or running them at smaller scales does not work.

For example, the topology of the Kademlia distributed hash network falls apart if there aren't enough (roughly 50,000) nodes in the network. The botnet nodes are designed to stop communicating if they are disconnected too long. One researcher would hook up a PC at home to capture the Storm botnet client, then bring it into work and hook it up to the research botnet immediately because if it doesn't get connected to something quickly, "it just dies". And if you don't have enough connections, the botnet dies: "It's kind of like a living organism".

So, they want to run ten million nodes, including routers, in a "nation-scale" network. Since they can't afford to buy that many machines, they will use virtualization on the supercomputer nodes to scale up to that size. They can "multiply the size of those machines by a thousand" by running that many virtual machines on each node.

Using virtualization and clustering

Virtualization is a nearly 50-year-old technique to run multiple kernels in virtual machines (VMs) on a single machine. It was pioneered by IBM, but has come to Linux in the last five years or so. Linux still doesn't have all of the capabilities that IBM machines have, in particular, arbitrarily deep nesting of VMs: "IBM has forgotten more about VMs than we know". But, Linux virtualization will allow them to run ten million nodes on a cluster of several thousand nodes, he said.

The project is tentatively called "V-matic" and they hope to release the code at the SC10 conference in November. It consists of the OneSIS cluster management software that has been extended based on what Minnich learned from the Los Alamos Clustermatic system. OneSIS is based on having NFS-mounted root filesystems, but V-matic uses lightweight RAMdisk-based nodes.

When you want to run programs on each node, you collect the binaries and libraries and send them to each node. Instead of doing that iteratively, something called "treespawn" was used, which would send the binary bundle to 32 nodes at once, and each of those would send to 32 nodes. In that way, they could bring up a 16M image on 1000 nodes in 3 seconds. The NFS root "couldn't come close" to that performance.
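
The arithmetic behind that speedup is simple enough to sketch; the example below is only an illustration of the fan-out math, not the treespawn code itself.

    #include <stdio.h>

    int main(void)
    {
        long covered = 1;    /* the node that starts with the image */
        int fanout = 32, rounds = 0, target = 1000;

        /* Each round, every node holding the image sends it to
         * "fanout" more nodes, so coverage grows geometrically. */
        while (covered < target) {
            covered *= fanout;
            rounds++;
        }
        printf("%d rounds of fan-out %d reach over %ld nodes\n",
               rounds, fanout, covered);
        return 0;
    }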

Each node requires a 20M footprint, which means "50 nodes per gigabyte". So, a laptop is just fine for a 100-node cluster, which is something that Minnich routinely runs for development. "This VM stuff for Linux is just fantastic", he said. Other cluster solutions just can't compete because of their size.

For running on the Thunderbird cluster, which consists of nodes that are roughly five years old, they were easily able to get 250 VMs per node. They used Lguest virtualization because the Thunderbird nodes were "so old they didn't have hardware virtualization". For more modern clusters, they can easily get 1000 VMs per node using KVM. Since they have 10,000 node Cray XT4 clusters at Sandia, they are confident they can get to ten million nodes.

Results so far

So far, they have gotten to one-million-node systems on Thunderbird. They had one good success and some failures in those tests. The failures were caused by two things: Infiniband not being very happy about being rebooted all the time, and the BIOS on the Dell boxes using the Intelligent Platform Management Interface (IPMI), which Minnich did not think very highly of. In fact, Minnich has a joke about how to tell when a standard "sucks": if it starts with an "I" (I2O), ends with an "I" (ACPI, EFI), or has the word "intelligent" in it somewhere; IPMI goes three-for-three on that scale. So "we know we can do it", but it's hard, not for very good reasons, but for "a lot of silly reasons".

Scaling issues

Some of the big problems that you run into when trying to run a nation-scale network are the scaling issues themselves. How do you efficiently start programs on hundreds of thousands of nodes? How do you monitor millions of VMs? There are tools to do all of that "but all of the tools we have will break—actually we've already broken them all". Even the monitoring rate needs to be adjusted for the size of the network. Minnich is used to monitoring cluster nodes at 6Hz, but most big cluster nodes are monitored every ten minutes or 1/600Hz—otherwise the amount of data is just too overwhelming.

Once the system is up, and is being monitored, then they want to attack it. It's pretty easy to get malware, he said, as "you are probably already running it". If not, it is almost certainly all over your corporate network, so "just connect to the network and you've probably got it".

Trying to monitor the network for "bad" behavior is also somewhat difficult. Statistically separating bad behavior from normal behavior is a non-trivial problem. Probing the networking stack may be required, but must be done carefully to avoid "the firehose of data".

In a ten million node network, a DHCP file is at least 350MB, even after you get rid of the colons "because they take up space", and parsing the /etc/hosts file can dominate startup time. If all the nodes can talk to all other nodes, the kernel tables eat all of the memory; "that's bad". Unlike many of the other tools, DNS is designed for this "large world", and they will need to set that up, along with the BGP routing protocol, so that the network will scale.

Earlier experiments

In an earlier experiment, on a 50,000 node network, Minnich modeled the Morris worm and learned some interesting things. Global knowledge doesn't really scale, so thinking in terms of things like /etc/hosts and DHCP configuration is not going to work; self-configuration is required. Unlike the supercomputer world, you can't expect all of the nodes to always be up, nor can you really even know if they are. Monitoring data can easily get too large. For example, 1Hz monitoring of 10 million nodes results in 1.2MB per second of data if each node only reports a single bit—and more than one bit is usually desired.

There is so much we don't know about a ten million node network, Minnich said. He would like to try to do a TCP-based denial of service from 10,000 nodes against the other 9,990,000. He has no idea whether it would work, but it is just the kind of experiment that this system will be able to run.

For a demonstration at SC09, they created a prototype botnet ("sandbot") using 8000 nodes and some very simple rules, somewhat reminiscent of Conway's game of Life. Based on the rules, the nodes would communicate with their neighbors under certain circumstances and, once they had heard from their neighbors enough times, would "tumble", resetting their state to zero. The nodes were laid out on a grid whose cells were colored based on the state of each node, so that pictures and animations could be made. Each node that tumbled would be colored red.

Once the size of the botnet got over a threshold somewhere between 1,000 and 10,000 nodes, the behavior became completely unpredictable. Cascades of tumbles, called "avalanches" would occur with some frequency, and occasionally the entire grid turned red. Looking at the statistical features of how the avalanches occur may be useful in detecting malware in the wild.

Conclusion

There is still lots of work to be done, he said, but they are making progress. It will be interesting to see what kind of practical results come from this research. Minnich and his colleagues have already learned a great deal about trying to run a nation-scale network, but there are undoubtedly many lessons on botnets and malware waiting to be found. We can look forward to hearing about them over the next few years.

Comments (14 posted)

Brief items

Microsoft's Charney Suggests 'Net Tax to Clean Computers (PCWorld)

PCWorld reports on a speech given by Microsoft's Vice President for Trustworthy Computing, Scott Charney, at the RSA security conference in San Francisco. In it, he suggests that a tax of some sort might be just the way to pay for cleaning up systems that are infected with viruses and other malware. "So who would foot the bill? 'Maybe markets will make it work,' Charney said. But an Internet usage tax might be the way to go. 'You could say it's a public safety issue and do it with general taxation,' he said."

Comments (55 posted)

'Severe' OpenSSL vuln busts public key crypto (Register)

The Register has posted an article on a reported OpenSSL vulnerability that allows attackers to obtain a system's private key. Before hitting the panic button, though, it's worth seeing what's involved in carrying out this attack: "The university scientists found that they could deduce tiny pieces of a private key by injecting slight fluctuations in a device's power supply as it was processing encrypted messages. In a little more than 100 hours, they fed the device enough 'transient faults' that they were able to assemble the entirety of its 1024-bit key." It could be a problem for keys hidden in embedded systems, but that is probably about the extent of it.

Comments (22 posted)

Security reports

IETF draft - "Security Assessment of the Internet Protocol"

A draft security assessment of IP, which may one day become an Internet Engineering Task Force (IETF) RFC, has been announced. "This document is the result of an assessment the IETF specifications of the Internet Protocol (IP), from a security point of view. Possible threats were identified and, where possible, countermeasures were proposed. Additionally, many implementation flaws that have led to security vulnerabilities have been referenced in the hope that future implementations will not incur the same problems. Furthermore, this document does not limit itself to performing a security assessment of the relevant IETF specifications, but also provides an assessment of common implementation strategies found in the real world."

Comments (2 posted)

New vulnerabilities

apache: information leak

Package(s):apache CVE #(s):CVE-2010-0434
Created:March 8, 2010 Updated:April 12, 2011
Description: From the Mandriva advisory:

The ap_read_request function in server/protocol.c in the Apache HTTP Server 2.2.x before 2.2.15, when a multithreaded MPM is used, does not properly handle headers in subrequests in certain circumstances involving a parent request that has a body, which might allow remote attackers to obtain sensitive information via a crafted request that triggers access to memory locations associated with an earlier request.

Alerts:
Gentoo 201206-25 apache 2012-06-24
rPath rPSA-2011-0014-1 httpd 2011-04-11
rPath rPSA-2010-0056-1 httpd 2010-09-13
Fedora FEDORA-2010-6055 httpd 2010-04-09
Fedora FEDORA-2010-6131 httpd 2010-04-09
SuSE SUSE-SR:2010:010 krb5, clamav, systemtap, apache2, glib2, mediawiki, apache 2010-04-27
Debian DSA-2035-1 apache2 2010-04-17
Pardus 2010-45 apache-2.2.15-36-11 apache-2.2.15-34-12 2010-03-29
CentOS CESA-2010:0175 httpd 2010-03-28
CentOS CESA-2010:0168 httpd 2010-03-28
Red Hat RHSA-2010:0168-01 httpd 2010-03-25
Red Hat RHSA-2010:0175-01 httpd 2010-03-25
Ubuntu USN-908-1 apache2 2010-03-10
Mandriva MDVSA-2010:057 apache 2010-03-06

Comments (none posted)

apache: remote attack via orphaned callback pointers

Package(s):httpd CVE #(s):CVE-2010-0425
Created:March 9, 2010 Updated:March 30, 2010
Description: From the CVE entry:

modules/arch/win32/mod_isapi.c in mod_isapi in the Apache HTTP Server 2.3.x before 2.3.7 on Windows does not ensure that request processing is complete before calling isapi_unload for an ISAPI .dll module, which has unspecified impact and remote attack vectors related to "orphaned callback pointers."

Alerts:
Pardus 2010-45 apache-2.2.15-36-11 apache-2.2.15-34-12 2010-03-29
Slackware SSA:2010-067-01 httpd 2010-03-09

Comments (none posted)

argyllcms: udev rules set incorrect tty permissions

Package(s):argyllcms CVE #(s):
Created:March 4, 2010 Updated:March 10, 2010
Description:

From the Red Hat bugzilla entry:

From /lib/udev/rules.d/55-Argyll.rules which is part of argyllcms-1.0.4-4.fc13.x86_64

 # Enable serial port connected instruments connected on first two ports.
 KERNEL=="ttyS[01]", MODE="666"

 # Enable serial port connected instruments on USB serial converteds connected
 # on  first two ports.
 KERNEL=="ttyUSB[01]", MODE="666"
This gives world read/write access to any tty device.
Alerts:
Fedora FEDORA-2010-3587 argyllcms 2010-03-03

Comments (none posted)

bournal: multiple vulnerabilities

Package(s):bournal CVE #(s):CVE-2010-0118 CVE-2010-0119
Created:March 9, 2010 Updated:March 10, 2010
Description: From the Red Hat bugzilla:

Bournal before 1.4.1 allows local users to overwrite arbitrary files via a symlink attack on unspecified temporary files associated with a --hack_the_gibson update check. CVE-2010-0118

Bournal before 1.4.1 on FreeBSD 8.0, when the -K option is used, places a ccrypt key on the command line, which allows local users to obtain sensitive information by listing the process and its arguments, related to "echoing." CVE-2010-0119

Alerts:
Fedora FEDORA-2010-3301 bournal 2010-03-02
Fedora FEDORA-2010-3221 bournal 2010-03-02
Fedora FEDORA-2010-3168 bournal 2010-03-01

Comments (none posted)

cups: arbitrary code execution

Package(s):cups CVE #(s):CVE-2010-0393
Created:March 4, 2010 Updated:April 20, 2010
Description:

From the Debian advisory:

Ronald Volgers discovered that the lppasswd component of the cups suite, the Common UNIX Printing System, is vulnerable to format string attacks due to insecure use of the LOCALEDIR environment variable. An attacker can abuse this behaviour to execute arbitrary code via crafted localization files and triggering calls to _cupsLangprintf(). This works as the lppasswd binary happens to be installed with setuid 0 permissions.

Alerts:
Gentoo 201207-10 cups 2012-07-09
Pardus 2010-54 cups 2010-04-20
Mandriva MDVSA-2010:073-1 cups 2010-04-14
Mandriva MDVSA-2010:073 cups 2010-04-14
Mandriva MDVSA-2010:072 cups 2010-04-14
Pardus 2010-49 cups 2010-04-09
SuSE SUSE-SR:2010:007 cifs-mount/samba, compiz-fusion-plugins-main, cron, cups, ethereal/wireshark, krb5, mysql, pulseaudio, squid/squid3, viewvc 2010-03-30
Ubuntu USN-906-1 cups, cupsys 2010-03-03
Debian DSA-2007-1 cups 2010-03-03

Comments (none posted)

cups: denial of service

Package(s):cups CVE #(s):CVE-2010-0302
Created:March 4, 2010 Updated:April 14, 2010
Description:

From the Red Hat advisory:

It was discovered that the Red Hat Security Advisory RHSA-2009:1595 did not fully correct the use-after-free flaw in the way CUPS handled references in its file descriptors-handling interface. A remote attacker could send specially-crafted queries to the CUPS server, causing it to crash. (CVE-2010-0302)

Alerts:
Gentoo 201207-10 cups 2012-07-09
Mandriva MDVSA-2010:073-1 cups 2010-04-14
Mandriva MDVSA-2010:073 cups 2010-04-14
SuSE SUSE-SR:2010:007 cifs-mount/samba, compiz-fusion-plugins-main, cron, cups, ethereal/wireshark, krb5, mysql, pulseaudio, squid/squid3, viewvc 2010-03-30
Fedora FEDORA-2010-2743 cups 2010-02-24
CentOS CESA-2010:0129 cups 2010-03-12
Fedora FEDORA-2010-3761 cups 2010-03-06
Ubuntu USN-906-1 cups, cupsys 2010-03-03
Red Hat RHSA-2010:0129-01 cups 2010-03-03

Comments (none posted)

curl: arbitrary code execution

Package(s):curl CVE #(s):
Created:March 9, 2010 Updated:March 15, 2010
Description: From the Red Hat bugzilla:

A stack based buffer overflow flaw was found in the way libcurl used to uncompress zlib compressed data. If an application, using libcurl, was downloading compressed content over HTTP and asked libcurl to automatically uncompress data, it might lead to denial of service (application crash) or, potentially, to arbitrary code execution with the privileges of that application.

Alerts:
Fedora FEDORA-2010-2720 curl 2010-02-24
Fedora FEDORA-2010-2762 curl 2010-02-24

Comments (none posted)

drupal: multiple vulnerabilities

Package(s):drupal CVE #(s):
Created:March 8, 2010 Updated:March 10, 2010
Description: Multiple vulnerabilities and weaknesses were discovered in Drupal. See the Drupal advisory for more information.
Alerts:
Fedora FEDORA-2010-3739 drupal 2010-03-06
Fedora FEDORA-2010-3787 drupal 2010-03-06

Comments (none posted)

php: multiple vulnerabilities

Package(s):php CVE #(s):
Created:March 10, 2010 Updated:March 30, 2010
Description:

From the Mandriva advisory:

Multiple vulnerabilities has been found and corrected in php:

  • Improved LCG entropy. (Rasmus, Samy Kamkar)
  • Fixed safe_mode validation inside tempnam() when the directory path does not end with a /). (Martin Jansen)
  • Fixed a possible open_basedir/safe_mode bypass in the session extension identified by Grzegorz Stachowiak. (Ilia)
Alerts:
Fedora FEDORA-2010-4114 maniadrive 2010-03-11
Fedora FEDORA-2010-4114 php 2010-03-11
Fedora FEDORA-2010-4212 maniadrive 2010-03-11
Fedora FEDORA-2010-4212 php 2010-03-11
Mandriva MDVSA-2010:058 php 2010-03-09

Comments (none posted)

samba: access restriction bypass

Package(s):samba CVE #(s):CVE-2010-0728
Created:March 10, 2010 Updated:March 11, 2010
Description:

From the Samba advisory:

This flaw caused all smbd processes to inherit CAP_DAC_OVERRIDE capabilities, allowing all file system access to be allowed even when permissions should have denied access.

Alerts:
Gentoo 201206-22 samba 2012-06-24
Fedora FEDORA-2010-3999 samba 2010-03-10
Fedora FEDORA-2010-4050 samba 2010-03-10

Comments (none posted)

tdiary: cross-site scripting

Package(s):tdiary CVE #(s):CVE-2010-0726
Created:March 10, 2010 Updated:March 10, 2010
Description:

From the Debian advisory:

It was discovered that tdiary, a communication-friendly weblog system, is prone to a cross-site scripting vulnerability due to insuficient input sanitising in the TrackBack transmission plugin.

Alerts:
Debian DSA-2009-1 tdiary 2010-03-09

Comments (none posted)

typo3-src: multiple vulnerabilities

Package(s):typo3-src CVE #(s):
Created:March 9, 2010 Updated:September 8, 2010
Description: From the Debian advisory:

Several remote vulnerabilities have been discovered in the TYPO3 web content management framework: Cross-site scripting vulnerabilities have been discovered in both the frontend and the backend. Also, user data could be leaked.

Alerts:
Debian DSA-2098-2 typo3-src 2010-09-07
Debian DSA-2098-1 typo3-src 2010-08-29
Debian DSA-2008-1 typo3-src 2010-03-08

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.34-rc1, released on March 8. This release came a bit earlier than usual, but Linus has reserved the right to pull in a few more trees yet. "So if you feel like you sent me a pull request bit might have been over-looked, please point that out to me, but in general the merge window is over. And as promised, if you left your pull request to the last day of a two-week window, you're now going to have to wait for the 2.6.35 window." Nouveau users should note that they can't upgrade to this kernel without updating their user-space as well.

There have been no stable update releases since 2.6.32.9 on February 23.

Comments (none posted)

Quotes of the week

This is a big motivation behind our "fish" names for boards -- they're pretty unappetizing to the pr/marketing folks so they never get mixed up with final product names and we can concentrate on making the hardware work.
-- Brian Swetland

selinux relabels are the new fsck
-- Dave Airlie

The lack of any changelog in a patch is usually a good sign that the patch needs a changelog.
-- Andrew Morton

In the end neither side is right. There are useful things that you can do with either, but as everyone and his demented gerbil has pointed out, no one has the True Security Solution. Not even SELinux, which violates some pretty fundamental security principles (see: "small enough to be analyzed") in actual deployment. TOMOYO violates "non-circumventable", just in case anyone thinks I'm picking on someone. Heck, even Smack isn't perfect, although I will leave it to others to autoclave that puppy.
-- Casey Schaufler

If we are only talking about obligations under the GPL, sure, no one violated copyright licenses. But what *did* happen is someone basically said, "I want to experiment on a whole bunch of users, but I don't want to spend the effort to do things in the right way. I want to take short cuts; I don't want to worry about the fact that it will be impossible to test kernels without pulling Frankenstein combinations of patches between Fedora 13 and Fedora 12." It's much like people who drill oil in the Artic Ocean, but use single-hulled tankers and then leave so much toxic spillage in their wake, but then say, "hey, the regulations said what we did was O.K. Go away; don't bother us."
-- Ted Ts'o

Comments (none posted)

QOTW 2: the zombie edition

It was ugly enough in <compress/mm.h> (which really should be nuked from orbit - it's the only way to be sure), but when I see it spreading, I go into full zombie-attack mode, and want to start up the chainsaw and run around naked.
-- Linus Torvalds

OK, we really really don't want that... simply because there just aren't enough zombies to go around already. Last I heard, the EPA was considering classifying them as an endangered species, only to get stuck in a bureaucratic mess if they can be classified as a "species" at all, or if they should be classified together with bread mold, toxic waste and Microsoft salesmen.
-- H. Peter Anvin

Personally I think we should all get together and agree on a framework and fix the framework to meet all of the needs and look like a swiss army hammer driver drill thing rather than having 4 options, none of which meet all the needs, and then forcing our uneducated users to choose between them. But, hey, we all know that isn't going to happen so I'll just go back to happy go lucky dream land where Linus is not running around naked with a chain saw.
-- Eric Paris

Comments (4 posted)

LogFS merged into the mainline kernel

LWN first looked at LogFS, a new filesystem aimed at solid-state storage devices, back in 2007. It has taken a long time, but, as of 2.6.34, LogFS will be in the mainline kernel and available for use; let the benchmarking begin.

Comments (34 posted)

A new deadline scheduler patch

By Jonathan Corbet
March 10, 2010
The POSIX approach to realtime scheduling is based on priorities: the highest-priority task gets the CPU. The research community has long since moved on from priorities, though, and has been putting a lot of effort into deadline scheduling instead. Deadline schedulers allow each process to provide a "worst case execution time" and the deadline by which it must get that time; it can then schedule all tasks so that they meet their deadlines while refusing tasks which would cause that promise to be broken. There are a few deadline scheduler patches in circulation, but the SCHED_DEADLINE patch by Dario Faggioli and friends looks like the most likely one to make it into the mainline at this time; LWN looked at this patch back in October.
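
As a rough illustration of how a deadline scheduler keeps that promise, the classic utilization-based admission test for earliest-deadline-first scheduling on a single CPU looks something like the sketch below; this is a conceptual example, not code from the SCHED_DEADLINE patch.

    #include <stdbool.h>
    #include <stddef.h>

    struct dl_task {
        double runtime;   /* worst-case execution time per period */
        double period;    /* how often that time must be provided */
    };

    /* Admit a new task only if the total fraction of CPU time promised
     * to all tasks stays at or below 100%. */
    static bool admit(const struct dl_task *tasks, size_t n,
                      const struct dl_task *new_task)
    {
        double util = new_task->runtime / new_task->period;

        for (size_t i = 0; i < n; i++)
            util += tasks[i].runtime / tasks[i].period;
        return util <= 1.0;
    }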

Recently, version 2 of the SCHED_DEADLINE patch was posted. The changes reflect a number of comments which were made the first time around; among other things, there is a new implementation of the group scheduling mechanism. Perhaps most significant in this patch, though, is an early attempt at addressing priority inversion problems, where a low-priority process can, by holding shared resources, prevent a higher-priority process from running. Priority inversion is a hard problem, and, in the deadline scheduling area, it remains without a definitive solution.

In classic realtime scheduling, priority inversion is usually addressed by raising the priority of a process which is holding a resource required by a higher-priority process. But there are no priorities in deadline scheduling, so a variant of this approach is required. The new patch works by "deadline inheritance" - if a process holds a resource required by another process which has a tighter deadline, the holding process has its deadline shortened until the resource is released. It is also necessary to exempt the process from bandwidth throttling (exclusion from the CPU when the stated execution time is exceeded) during this time. That, in turn, could lead to the CPU being oversubscribed - something deadline schedulers are supposed to prevent - but the size of the problem is expected to be small.
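
Conceptually, deadline inheritance amounts to letting the holder of a contended resource run with the earlier of the two deadlines until it releases the resource; the sketch below illustrates the idea only and is not the actual patch code.

    struct dl_entity {
        unsigned long long deadline;            /* absolute deadline */
        unsigned long long original_deadline;   /* saved for later */
    };

    static void inherit_deadline(struct dl_entity *holder,
                                 const struct dl_entity *waiter)
    {
        holder->original_deadline = holder->deadline;
        if (waiter->deadline < holder->deadline)
            holder->deadline = waiter->deadline;   /* tighter deadline wins */
    }

    static void restore_deadline(struct dl_entity *holder)
    {
        holder->deadline = holder->original_deadline;
    }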

The "to do" list for this patch still has a number of entries, including less disruptive bandwidth throttling, a port to the realtime preemption tree, truly global deadline scheduling on multiprocessor systems (another hard problem), and more. The code is progressing, though, and Linux can be expected to have a proper deadline scheduler at some point in the not-too-distant future - though no deadline can be given as the worst case development time is still unknown.

Comments (3 posted)

No huge pages article this week

Mel Gorman's series on the use of huge pages in Linux is taking a one-week intermission, so there will be no installment this week. The fourth installment (on huge page benchmarking) will appear next week.

Comments (none posted)

Kernel development news

2.6.34 Merge window, part 2

By Jonathan Corbet
March 10, 2010
There have been nearly 1600 non-merge changesets incorporated into the mainline kernel since last week's summary; that makes a total of just over 6000 changesets for the 2.6.34-rc1 release. Some of the most significant, user-visible changes merged since last week include:

  • Signal-handling semantics have been changed so that "synchronous" signals (SIGSEGV, for example) are delivered prior to asynchronous signals like SIGUSR1. This fixes a problem where synchronous signal handlers could be invoked with the wrong context, something that apparently came up occasionally in WINE. Users are unlikely to notice the change, but it is a slight semantics change that developers may want to be aware of.

  • A new Nouveau driver with an incompatible interface has been merged; as of this writing, it will break all user-space code which worked with the older API. See this article for more information on the Nouveau changes. Nouveau also no longer needs external firmware for NV50-based cards.

  • The direct rendering layer now supports "VGA switcheroo" on systems which provide more than one graphical processor. For most needs, a simple, low-power GPU can be used, but the system can switch to the more power-hungry GPU when its features are needed.

  • The umount() system call supports a new UMOUNT_NOFOLLOW flag which prevents the following of symbolic links. Without this flag, local users who can perform unprivileged mounts can use a symbolic link to unmount arbitrary filesystems. (A brief usage sketch appears after this list.)

  • The exofs filesystem (for object storage devices) has gained support for groups and for RAID0 striping.

  • The LogFS filesystem for solid-state storage devices has been merged.

  • New drivers:

    • Media: Wolfson Microelectronics WM8994 codecs, and Broadcom Crystal HD video decoders (staging).

    • Miscellaneous: Freescale MPC512x built-in DMA engines, Andigilog aSC7621 monitoring chips, Analog Devices ADT7411 monitoring chips, Maxim MAX7300 GPIO expanders, HP Processor Clocking Control interfaces, DT3155 Digitizers (staging), Intel SCH GPIO controllers, Intel Langwell APB Timers, ST-Ericsson Nomadik/Ux500 I2C controllers, Maxim Semiconductor MAX8925 power management ICs, Max63xx watchdog timers, Technologic TX-72xx watchdog timers, and Hilscher NetX based fieldbus cards.
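
Returning to the UMOUNT_NOFOLLOW item above: a minimal user-space sketch of the new flag might look like the following. C library headers may not define the flag yet, so the example supplies the kernel's value itself.

    #include <stdio.h>
    #include <sys/mount.h>

    #ifndef UMOUNT_NOFOLLOW
    #define UMOUNT_NOFOLLOW 0x00000008   /* value used by the kernel */
    #endif

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
            return 1;
        }
        /* Refuse to follow a symbolic link at the mount point, so that
         * the unmount cannot be redirected elsewhere. */
        if (umount2(argv[1], UMOUNT_NOFOLLOW) < 0) {
            perror("umount2");
            return 1;
        }
        return 0;
    }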

Changes visible to kernel developers include:

  • There has been a subtle change to the early boot code, wherein the kernel will open the console device prior to switching to the root filesystem. That eliminates problems where booting fails on a system with an empty /dev directory because the console device cannot be found, and eliminates the need to use devtmpfs in such situations.

  • The kprobes jump optimization patch has been merged.

  • The write_inode() method in struct super_operations is now passed a pointer to the relevant writeback_control structure.

  • Two new helper functions - sysfs_create_files() and sysfs_remove_files() - ease the process of creating a whole array of attribute files.

  • The show() and store() methods of struct class_attribute have seen a prototype change: the associated struct class_attribute pointer is now passed in. A similar change has been made to struct sysdev_class_attribute.

  • The sem lock found in struct device should no longer be accessed directly; instead, use device_lock() and device_unlock().
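
To illustrate the last item, here is a minimal sketch - a hypothetical driver fragment, not taken from any real driver - of the preferred pattern: take the per-device lock through the new helpers rather than touching the sem member of struct device directly:

    #include <linux/device.h>

    /* Hypothetical example: serialize against the driver core while
     * reconfiguring a device. */
    static void example_reconfigure(struct device *dev)
    {
        device_lock(dev);      /* instead of operating on dev->sem directly */
        /* ... manipulate state that the driver core also touches ... */
        device_unlock(dev);
    }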

At "only" 6000 changesets, 2.6.34 looks like a relatively calm development cycle; both 2.6.32 and 2.6.33 had over 8000 changesets by the time the -rc1 release came out. It may be that there is less work to be done, but it may also be that some trees got caught out in the cold by Linus's decision to close the merge window early. Linus suggested that he might yet consider a few pull requests, so we might still see some new features added to this kernel; stay tuned.

Comments (8 posted)

Nouveau and interface compatibility

By Jake Edge
March 10, 2010

A recent linux-kernel discussion, which descended into flames at times, took on the question of the stability of user-space interfaces. The proximate cause was a change in the interface for the Nouveau drivers for NVIDIA graphics hardware, but the real issues go deeper than that. Though the policy for the main kernel is that user-space interfaces live "forever", the policy in the staging tree has generally been looser. But some, including Linus Torvalds, believe that staging drivers that have been shipped by major distributions should be held to a higher standard.

As part of the just-completed 2.6.34 merge window, Torvalds pulled from the DRM tree at Dave Airlie's request, but immediately ran into problems on his Fedora 12 system:

Hmm. What the hell am I supposed to do about
	(II) NOUVEAU(0): [drm] nouveau interface version: 0.0.16
	(EE) NOUVEAU(0): [drm] wrong version, expecting 0.0.15

The problem stemmed from the Nouveau driver changing its interface, which required an upgrade to libdrm—an upgrade that didn't exist for Fedora 12. The Nouveau changes have been backported into the Fedora 13 2.6.33 kernel, which comes with a new libdrm, but there are no plans to put that kernel into Fedora 12. Users who stick with Fedora kernels upgraded via yum won't run into the problem, as Airlie explains:

At the moment in Fedora we deal with this for our users, we have dependencies between userspace and kernel space and we upgrade the bits when they upgrade the kernels, its a pain in the ass, but its what we accepted we needed to do to get nouveau in front of people. We are currently maintain 3 nouveau APIs across F11, F12 and F13.

That makes it impossible to test newer kernels on Fedora 12 systems with NVIDIA graphics, though, which reduces the number of people who are able to test. There is no "forward compatibility" either—the kernel and DRM library must be upgraded (or downgraded) in lockstep. Torvalds is concerned about losing testers who run Fedora 12, as well as about problems for those on Fedora 13 (Rawhide right now) who might need to bisect a kernel bug—going back and forth across the interface-change barrier is not possible, or at least not easy. In his original complaint, Torvalds is characteristically blunt: "Flag days aren't acceptable."

The Nouveau drivers were only merged for 2.6.33 at Torvalds's request—or demand—and they were put into the staging tree. The staging tree configuration option clearly spells out the instability of user-space interfaces: "Please note that these drivers are under heavy development, may or may not work, and may contain userspace interfaces that most likely will be changed in the near future." So several kernel hackers were clearly confused by Torvalds's outburst. Jesse Barnes put it this way:

Whoa, so breaking ABI in staging drivers isn't ok? Lots of other staging drivers are shipped by distros with compatible userspaces, but I thought the whole point of staging was to fix up ABIs before they became mainstream and had backwards compat guarantees, meaning that breakage was to be expected?

Yes, it sucks, but what else should the nouveau developers have done? They didn't want to push nouveau into mainline because they weren't happy with the ABI yet, but it ended up getting pushed anyway as a staging driver at your request, and now they're stuck? Sorry this whole thing is a bit of a wtf...

But Torvalds doesn't disagree that the interface needs changing; he is just unhappy with the way it was done. Because the newer libdrm is not available for Fedora 12, he can't test it:

I'm not going to release a kernel that I can't test. So if I can't get a libdrm that works in my F12 environment, I will _have_ to revert that patch that you asked me to merge.

It is not just Torvalds who can't test it, of course, so he would like to see something done that will enable Fedora users to test and bisect kernels. The Nouveau developers don't want to maintain multiple interfaces, and the Fedora (and other distribution) developers don't want to have to test multiple versions of the DRM library. As Red Hat's Nouveau developer Ben Skeggs put it: "we have no intention of keeping crusty APIs around when they aren't what we require."

Torvalds would like to see a way for the various libdrms to co-exist, preferably with the X server choosing the right one at runtime. As he notes, the server has the information and, if multiple libraries are installed, the right one is only a dlopen() away:

Who was the less-than-rocket-scientist that decided that the right thing to do was to "check the kernel DRM version support, and exit with an error if it doesn't match"?

See what I'm saying? What I care about is that right now, it's impossible to switch kernels on a particular setup. That makes it effectively impossible to test new kernels sanely. And that really is a _technical_ problem.
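
What Torvalds describes might look, in very rough terms, like the sketch below. It is purely illustrative: the library names and the version-reporting symbol are invented here, and the real X server and libdrm do not necessarily provide anything of the sort:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Try each installed libdrm_nouveau build and keep the one whose
     * interface version matches what the running kernel reports. */
    static void *open_matching_libdrm(int kernel_minor)
    {
        const char *candidates[] = {
            "libdrm_nouveau.so.0.0.15",    /* hypothetical file names */
            "libdrm_nouveau.so.0.0.16",
            NULL,
        };

        for (int i = 0; candidates[i] != NULL; i++) {
            void *handle = dlopen(candidates[i], RTLD_NOW | RTLD_LOCAL);
            if (handle == NULL)
                continue;
            /* hypothetical entry point reporting the interface minor version */
            int (*minor)(void) = (int (*)(void))dlsym(handle, "nouveau_interface_minor");
            if (minor != NULL && minor() == kernel_minor)
                return handle;             /* keep the matching library */
            dlclose(handle);
        }
        return NULL;
    }

    int main(void)
    {
        void *lib = open_matching_libdrm(16);
        printf("matching libdrm %sfound\n", lib ? "" : "not ");
        return 0;
    }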

In the end, Airlie helped him get both of the proper libraries installed on his system, with a symbolic link to (manually) choose between them. That was enough to allow testing of the kernel, so Torvalds didn't revert the Nouveau patch in question. But there is a larger question here: when should a user-space interface be allowed to change, and just how should it be done?

The Nouveau developers seem rather unhappy that Torvalds and others are trying to change their development model, at least partially because they never requested that Nouveau be merged. But Torvalds is not really pushing the Nouveau developers so much as he is pushing the distributor who shipped Nouveau to handle these kinds of problems. In his opinion, once a major distributor has shipped a library/kernel combination that worked, it is responsible for ensuring that it continues to work, especially for those who might want to run newer kernels.

The problem for testers exists because the distribution, in this case Fedora, shipped the driver before getting it into the upstream kernel, which violates the "upstream first" principle. Torvalds makes it clear that merging the code didn't cause the problem, shipping it did:

So the watershed moment was _never_ the "Linus merged it". The watershed moment was always "Fedora started shipping it". That's when the problems with a standard upstream kernel started.

Alan Cox disagrees, even quoting Torvalds from 2004 back at himself, because the Nouveau developers are just developing the way they always have; it's not their fault that the code was shipped and is now upstream:

Someone who never made a commitment to stability decided to do the logical thing. They deleted all the old broken interfaces, they cleaned up their ioctls numbering and they tided up afterwards. I read it as the action of someone who simply doesnt acknowledge that you have a right to control their development and is continuing to work in the way they intended.

But the consensus, at least among those who aren't graphics driver developers, seems to be that user-space interfaces should only be phased out gradually. That gives users and distributions plenty of time to gracefully handle the interface change. That is essentially how mainline interface changes are done; even though user-space interfaces are supposed to be maintained forever, they sometimes do change—after a long deprecation period. In fact, Ingo Molnar claimed that breaking an ABI often leads to projects that either die on the vine or do not achieve the success that they could:

I have _never_ seen a situation where in hindsight breaking the ABI of a widely deployed project could be considered 'good', for just about any sane definition of 'good'.

It's really that simple IMO. There's very few unconditional rules in OSS, but this is one of them.

Ted Ts'o sees handling interface changes gracefully as part of being a conscientious member of the community. If developers don't want to work that way, they shouldn't get their code included into distributions:

You say you don't want to do that? Then keep it to your self and don't get it dropped into popular distributions like Fedora or Ubuntu. You want a larger pool of testers? Great! The price you need to pay for that is to be able to do some kind of of ABI versioning so that you don't have "drop dead flag days".

Had this occurred with a different driver, say for an obscure WiFi device, it is likely there would have been less, or no, outcry. Because X is such an important, visible part of a user's experience, as well as an essential tool for testers, breaking it is difficult to hide. Torvalds has always pushed for more testing of the latest mainline kernels, so it shouldn't come as a huge surprise that he was less than happy with what happened here.

This situation has cropped up in various guises along the way. While developers would like to believe they can control when an ABI falls under the compatibility guarantee, that really is almost never the case. Once the interface gets merged, and user space starts to use it, there will be pressure to maintain it. It makes for a more difficult development environment in some ways, but the benefit for users is large.

Comments (3 posted)

4K-sector drives and Linux

By Jonathan Corbet
March 9, 2010
Almost exactly one year ago, LWN examined the problem of 4K-sector drives and the reasons for their existence. In short, going to 4KB physical sectors allows drive manufacturers to increase storage density, always welcome in that competitive market. Recently, there have been a number of reports that Linux is not ready to work with these drives; kernel developer Tejun Heo even posted an extensive, worth-reading summary stating that "4 KiB logical sector support is broken in both the kernel and partitioners." As the subsequent discussion revealed, though, the truth of the matter is that we're not quite that badly prepared.

Linux is fully prepared for a change in the size of physical sectors on a storage device, and has been for a long time. The block layer was written to avoid hardwired assumptions about sector size. Sector counts and offsets are indeed managed as 512-byte units at that level of the kernel, but the block layer is careful to perform all I/O in units of the correct size. So, one would hope, everything would Just Work.

But, as Tejun's document notes, "unfortunately, there are complications." These complications result from the fact that the rest of the world is not prepared to deal with anything other than 512-byte sectors, starting with the BIOS found on almost all systems. In fact, a BIOS which can boot from a 4K-sector drive is an exceedingly rare item - if, indeed, it exists at all. Fixing the BIOS is evidently harder than one might think, and there seems to be little motivation to do so. Martin Petersen, who has done much of the work around supporting these drives in Linux, noted:

Part of the hesitation to work on booting off of 4 KB lbs drives is motivated by a general trend in the industry to move boot functionality to SSD. There are 4 KB LBS SSDs out there but in general the industry is sticking to ATA for local boot.

The problem does not just exist at the BIOS level: bootloaders (whether they are Linux-oriented or not) are not set up to handle larger sectors; neither are partitioning tools, not to mention a wide variety of other operating systems. Something must be done to enable 4K-sector drives to work with all of this software.

That something, of course, is to interpose a mapping layer in the middle. So most 4K-sector drives will implement separate logical and physical sector sizes, with the logical size - the one presented to the host computer - remaining 512 bytes. The system can then pretend that it's dealing with the same kind of hardware it has always dealt with, and everything just works as desired.

Except that, naturally enough, there are complications. A 512-byte sector written to a 4K-sector drive will now force the drive to perform a read-modify-write cycle to avoid losing the data in the rest of the physical sector. That slows things down, of course, and also increases the risk of data loss should something go wrong in the middle. To avoid this kind of problem, the operating system should do transfers that are a multiple of the physical sector size whenever possible. But, to do that, it must know the physical sector size. As it happens, drives do make that information available; the kernel uses it internally and exports it via sysfs.
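
The exported values are easy to get at; this small sketch (not from the article) reads the logical and physical block sizes that the kernel exposes under /sys/block/<device>/queue/ for a hypothetical drive called sda:

    #include <stdio.h>

    static long read_queue_value(const char *disk, const char *attr)
    {
        char path[128];
        long value = -1;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", disk, attr);
        f = fopen(path, "r");
        if (f != NULL) {
            if (fscanf(f, "%ld", &value) != 1)
                value = -1;
            fclose(f);
        }
        return value;
    }

    int main(void)
    {
        printf("logical block size:  %ld bytes\n",
               read_queue_value("sda", "logical_block_size"));
        printf("physical block size: %ld bytes\n",
               read_queue_value("sda", "physical_block_size"));
        return 0;
    }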

It is not quite that simple, though. The Linux kernel can go out of its way to use the physical sector size, and to align all transfers on 4KB boundaries from the beginning of the partition. But that goes badly wrong if the partition itself is not properly aligned; in this case, every carefully-arranged 4KB block will overlap two physical sectors - hardly an optimal outcome.

As it happens, badly-aligned partitions are not just common; they are the norm. Consider an example: your editor was a lucky recipient of an Intel solid-state drive at the Kernel Summit which was quickly plugged into his system and partitioned for use. It has been a great move: git repositories on an SSD are much nicer to work with. A quick look at the partition table, though, shows this:

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x5361058c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63    52452224    26226081   83  Linux

Note that fdisk, despite having been taken out of the "DOS compatibility" mode, is displaying the drive dimensions in units of heads and cylinders. Needless to say, this device has neither; even on rotating media, those numbers are entirely fictional; they are a legacy from a dark time before Linux even existed. But that legacy is still making life difficult now.

Once upon a time, it was determined that 63 (512-byte) sectors was far more than anybody would be able to fit into a single disk track. Since track-aligned I/O is faster on a rotating drive, it made sense to align partitions so that their data began at the start of a track. So, traditionally, the first partition on a drive begins at (logical) sector 63 - the first sector of the (purported) second track - leaving the first track to the master boot record. Sector 63 holds the partition's boot block, and the filesystem follows immediately behind it. That placement, of course, misaligns the filesystem with regard to any physical sector size larger than 512 bytes: the partition starts at byte offset 32256 (63 × 512), at the tail end of a 4K physical sector, so every 4KB filesystem block straddles two physical sectors. Any subsequent partitions on the device will almost certainly be misaligned in the same way.
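
The arithmetic is easy to check; this little sketch (not from the article) shows why a partition starting at the traditional sector 63 is misaligned, while one starting at sector 64, or at the 1MiB boundary (sector 2048) that newer tools tend to use, is not:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long starts[] = { 63, 64, 2048 };

        for (int i = 0; i < 3; i++) {
            unsigned long long offset = starts[i] * 512ULL;
            printf("start sector %4llu -> byte offset %8llu: %s\n",
                   starts[i], offset,
                   (offset % 4096) == 0 ? "4K-aligned" : "misaligned");
        }
        return 0;
    }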

One might argue that the right thing to do is simply to ditch this particular practice and align partitions properly; it should not be all that hard to teach partitioning tools about physical sector sizes, and indeed it can be done. The tools have been slow to catch on, but a suitably motivated system administrator can usually convince them to place partitions sensibly even now. So weird alignments should not be an insurmountable problem.

Unfortunately, there are complications. It would appear that Windows XP not only expects misaligned partitions; it actually will not function properly without them. One simply cannot run XP on a device which has been properly partitioned for 4K physical sector sizes. To cope with that, drive manufacturers have introduced an even worse hack: shifting all 512-byte logical sectors forward by one, so that logical sector 63 - the traditional start of the first partition - lands at the beginning of a physical sector. So any partitioning tool which wants to lay things out properly must know where the origin of the device actually is - and not all devices are entirely forthcoming with that information.

With luck, the off-by-one problem will go away before it becomes a big issue. As James Bottomley put it: "...fortunately very few of these have been seen in the wild and we're hopeful they can be shot before they breed." But that doesn't fix the problem with the alignment of partitions for use by XP. Later versions of Windows need not concern themselves with this problem, since they rarely coexist with XP (and Windows has never been greatly concerned about coexistence with other systems in general). Linux, though, may well be installed on the same drive as XP; that leads to differing alignment requirements for different partitions. Making that just work is not going to be fun.

Martin suggests that it might be best to just ignore the XP issue:

With regards to XP compatibility I don't think we should go too much out of our way to accommodate it. XP has been disowned by its master and I think virtualization will take care of the rest.

It may well be that there will not be a significant number of XP installations on new-generation storage devices, but failure to support XP may still create some misery in some quarters.

A related issue pointed out by Tejun is that the DOS partition format, which is still widely used, tops out at 2TB - a limit which no longer seems all that large. Using 4K logical sectors in the partition table can extend that limit as far as 16TB but, again, that requires cooperation from the BIOS - and even 16TB will not seem large for long. The long-term solution would appear to be moving to a partition format like GPT, but that is not likely to be an easy migration.
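
The limits come straight from the 32-bit sector counts stored in the DOS partition table; a quick sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long sectors = 1ULL << 32;   /* 32-bit LBA fields */

        printf("512-byte logical sectors:  %llu TiB\n", (sectors * 512)  >> 40);
        printf("4096-byte logical sectors: %llu TiB\n", (sectors * 4096) >> 40);
        return 0;
    }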

In summary: Linux is not all that badly placed to support 4K-sector drives, especially when there is no need to share a drive with older operating systems. There is still work required at the tools level to make that support work optimally without the need for low-level intervention by system administrators, but that is, as they say, just a matter of a bit of programming. As these drives become more widely available, we will be able to make good use of them.

Comments (30 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 2.6.34-rc1 ?

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

News and Editorials

Rolling with Arch Linux

March 10, 2010

This article was contributed by Ivan Jelic

It's always a good time to review Arch Linux, since it features a rolling release model. This means frequent upgrades with no release dates; in other words, Arch is always at its latest version, constantly updated at short intervals. That makes it perfect for reviewing, since it's fresh whenever it's taken for a spin.

Arch is inspired by CRUX, a simple and lightweight distribution which in turn takes its inspiration from BSD. Arch Linux first appeared in 2002. Although it shares some ideas with CRUX, Arch was developed from scratch, with no legacy from any other distribution. Arch Linux today has a devoted community, which stays close to its founding principles. According to DistroWatch's distribution ranking, Arch is doing better than ever, making it into the top ten in 2009, where it remains so far this year.

AIF

Occasionally the Arch Linux team does release installation images containing a current snapshot of the core repository - the minimal set of packages needed for a basic install of Arch. These images, together with AIF (the Arch Linux Installation Framework), take care of the installation process. It is also possible to do a network installation, where everything is retrieved from the Internet during the install. Images are available for CD (.iso) and USB stick (.img); the latest set dates from August 2009 and is labeled 2009.08.

The default installation media boot option will work in most cases. A live installation system allows configuration of the keyboard layout and the network (making it possible to do a network install) before the actual installation is started. AIF, available as the /arch/setup executable, is a command-line tool with an ncurses-based interface. The installation steps managed by AIF, including partition selection, are not unusual for a typical GNU/Linux install.

Before the partitioning, the installation source (CD, USB or network) and the time need to be set. Arch offers automatic disk partitioning and setup, together with manual disk partitioning and/or partition selection. There is an undo option, in case something goes wrong during the partition setup. The ext4 file system is fully supported.

Arch install

Package selection is another important step in the installation process. The system offers a package group selection in the first step, followed by detailed package selection list. Hardware drivers are manually selected during this step. The Arch core includes firmware packages for most of the wireless chips used on today's computers, which is very important since the packages for the rest of the system (X Window System, desktop environments, programs) are retrieved from the network. For example, the Intel 4965 wireless chip in the test machine became fully operational only after the firmware installation. Speaking of WiFi, the wireless_tools package is available to install the necessary wireless network setup tools.

After the packages are installed, AIF proceeds to the system configuration interface. This is nothing more than a list of the configuration files which need to be edited with the default editor. The defaults can be a good starting point for a core installation, so only the last option - root password setup - is really needed. AIF installs GRUB, which is configured to ignore any operating systems on the computer other than Arch.

The desktop

I am reviewing Arch Linux as a desktop/workstation distribution this time. Therefore, the installation is just a first step which must be followed by additional installation and configuration for the desktop. The core system only contains basic services and the shell.

A wired network connection should work "out of the box" using DHCP. At this point a basic knowledge of Pacman, the Arch package manager, is a requirement. Fortunately, the Arch Wiki is a great place to look for the answers. All the manuals needed for the beginner worked like a charm during the test.

A few metapackage installs and system file edits later, I had a functional desktop. In some cases, packages are not installed automatically: a functional X.org setup, for example, requires manual installation of the video and input drivers. If a GNOME desktop is desired, Hal will be installed as a dependency, but it needs to be started, and configured to start on boot, by hand. This is a good illustration of the Arch approach: Hal is always a requirement with GNOME, but not all X.org video drivers are, and installing unneeded video drivers wouldn't be clean by Arch standards.

Arch Linux with GNOME

There is no default desktop environment on Arch. Most window managers and desktops are available for installation, in very fresh versions. Freshness, at the time the article was written, means KDE SC 4.4.1, GNOME 2.28.2 and XFCE 4.6. Most of the packages come in a vanilla setup, so the available desktop environments look and behave the same as they would if they had been installed from the source tarballs.

Arch Linux with KDE

Other popular programs are very fresh too. Firefox 3.6, Thunderbird 3.0, Pidgin 2.6.6 and OpenOffice.org 3.2 are just part of the big software collection Arch provides in its repositories. All searches for additional software during the test ended successfully, including the Nvidia proprietary drivers and the Flash plugin.

Speed

Arch seems very fast. While no exact measurements were taken, the overall subjective experience during this test was highly positive. A completely functional system, with all necessary tools and services installed and running, was fast and stable 100% of the time.

Installation and setup do take some time. Reading the documentation and working through the installation and setup tasks takes quite a lot of time, even for advanced GNU/Linux users, especially those who have no experience with Arch. However, the Arch Wiki provides all the answers for system and package installation and setup.

The Arch Way

Arch is developed and maintained in the "Arch way". "The following five core principles comprise what is commonly referred to as the Arch Way, or the Arch Philosophy, perhaps best summarized by the acronym KISS for Keep It Simple, Stupid."

In the Arch dictionary, simple and code-correct means no automation or autoconfiguration, and almost no patching. Therefore, the user needs to do everything related to installation and configuration. Sometimes the user involvement goes pretty far; for example, after Network Manager is installed, it needs to be started manually and set to start on boot. Pacman does resolve dependencies automatically, so that part does not need to be done by hand.

The benefits of "The Arch Way" are good system performance and absolute control over the installation and setup, much like the control one gets with Gentoo. It is worth investing time in Arch if you want to learn the internals of a GNU/Linux system, maintain complete control over your system, and get good performance.

Conclusion

Overall, Arch is great. It's great for users who want to learn GNU/Linux by choosing packages and editing configuration files, and it's great for users who already know GNU/Linux and want to put their system together mostly by hand. Those who want an easy install and a functional system out of the box should avoid it.

Comments (13 posted)

New Releases

Fedora 13 Alpha released

The first alpha release of Fedora 13 is out. "We need your help to make Fedora 13 the best release yet, so please take a moment of your time to download and try out the Alpha and make sure the things that are important to you are working. If you find a bug, please report it -- every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide." There is a lot of new stuff in this release; see the announcement for a summary.

Full Story (comments: 17)

Distribution News

Debian GNU/Linux

Debian Project Leader Elections 2010: Call for nominations

Nominations are open for this year's Debian Project Leader election until March 11, 2010. "Prospective leaders should be familiar with the constitution, but just to review: there's a one week period when interested developers can nominate themselves and announce their platform, followed by a three week period intended for campaigning, followed by two weeks for the election itself."

Full Story (comments: none)

Fedora

Fedora Board Recap 2010-03-04

Click below for a recap of the March 4, 2010 meeting of the Fedora Advisory Board. The main topic was Release Lifecycle Proposals.

Full Story (comments: none)

Ubuntu family

Ubuntu changing its look

Ubuntu has posted a page on its new branding, representing a significant change of look for the distribution. No more brown. "We're drawn to Light because it denotes both warmth and clarity, and intrigued by the idea that 'light' is a good value in software. Good software is 'light' in the sense that it uses your resources efficiently, runs quickly, and can easily be reshaped as needed. Ubuntu represents a break with the bloatware of proprietary operating systems and an opportunity to delight to those who use computers for work and play. More and more of our communications are powered by light, and in future, our processing power will depend on our ability to work with light, too." Screenshots and more are included.

Comments (36 posted)

Ubuntu 8.04 ClamAV update

The version of ClamAV shipped with Ubuntu 8.04 LTS has reached its end-of-life. "Upstream ClamAV announced that the end of life for ClamAV versions 0.94 and earlier to be April 15, 2010. To properly support users of ClamAV in Ubuntu 8.04 LTS, this maintenance release upgrades ClamAV to 0.95.3." This advisory also applies to the corresponding versions of Kubuntu, Edubuntu, and Xubuntu.

Full Story (comments: none)

Other distributions

MeeGo: Toward Day One

Valtteri Halla, the Nokia representative on the MeeGo Technical Steering Group, has posted some information on the future of the project. "The most important question is of course about the code. We hope to move on here very quickly now. Nokia and Intel have set the target to open the MeeGo repository by the end of this month. I guess this is something that finally will signify the real 'Day One' of MeeGo project, a genuine merger of moblin and maemo. What is scheduled to be available then is the first and very raw baseline to a source and binary repository to build MeeGo trunk on Intel ATOM boards and Nokia N900."

Comments (10 posted)

New Distributions

Announcing NEOPhysis

NEOPhysis is a new distribution for the Openmoko Freerunner. "What is Neophysis? It's a sort of Linux from scratch for the Freerunner (although it could potentially run on any embedded system which runs a bit of daemons and has libraries as per the following notes), we re-thought the concept of "distro" aiming at boot speed and phone stability." The project is in the early alpha stage.

Full Story (comments: none)

Distribution Newsletters

Arch Linux Newsletter March 2010

The March 2010 issue of the Arch Linux Newsletter is out, with news from the Arch Linux community.

Comments (none posted)

DistroWatch Weekly, Issue 344

The DistroWatch Weekly for March 8, 2010 is out. "It is always nice to have a choice of operating systems to run on our desktops. The PC-BSD project has been doing marvels with FreeBSD - in the project's latest release, version 8.0, the developers have turned the predominantly server operating system into an amazingly easy-to-use desktop system that anybody can install and use. Read our first-look review to find out more. In the news section, Canonical updates Ubuntu's desktop theme, KNOPPIX releases a new version of the popular live CD, openSUSE adds the LXDE desktop to the list of options on its install media, and a project called multicd.sh delivers a script that combines several CD images into one bootable CD or DVD with a single command. Also in this issue, links to interviews with Ubuntu's Melissa Draper and KNOPPIX's Klaus Knopper, some speculation on the possible release date of Red Hat Enterprise Linux 6, and a bunch of useful shell scripts for a variety of common tasks. All this and more in this issue of DistroWatch Weekly - happy reading!"

Comments (none posted)

Fedora Weekly News 216

The Fedora Weekly News for March 1, 2010 is out. "In Announcements, we have several development items, including changes for packaging guidelines, a call for F13 translation packages rebuilds, and news on Fedora 13 Alpha RC4 decisions from last week. In news from the Fedora Planet, thoughts on UX collaboration between conferences, how to set up client and server certificates for use with Apache Qpid, and perspectives on why the IIPA's position toward Open Source is problematic and wrong. In Marketing news, an update on last week's Fedora Insight sprint, work on a Communication Matrix for the Marketing team, and detail on the past weekly meeting activities, including decisioning the F13 slogan -- "Rock It!" In Ambassador news, an event report from Dhaka, and updates on the Campus Ambassador program. In Quality Assurance news, next week's Test Day focus on webcams, lots of tasty detail from QA Team weekly meetings, and a new tool, fedora-easy-karma, which greatly asssists in the process of filing feedback on packages in updates-testing via Bodhi. Translation reviews the upcoming Fedora 13 tasks in that area, updates on the Transifex 0.80 upgrade, and new members in the Fedora Localization Project for the Russian, Traditional Chinese and Greek teams. This week's issue closes with security advisory updates from the past week for Fedora 11, 12 and 13. Read on!"

Full Story (comments: none)

The Mint Newsletter - issue 101

This issue of the Mint Newsletter covers the LXDE edition and the Helena XFCE edition and several other topics.

Comments (none posted)

openSUSE Weekly News/113

This issue of the openSUSE Weekly News covers Pavol Rusnak: Announcing Connect!, Andrew Wafaa: openSUSE & Google Summer of Code 2010, Bento-Theme implementation approach, Linux.com/Joe Brockmeier: Beginner's Guide to Nmap, and Poll: Which linux Distro do you use frequently.

Comments (none posted)

Ubuntu Weekly Newsletter #183

The Ubuntu Weekly Newsletter for March 6, 2010 is out. "In this issue we cover: Mark Shuttleworth: "Light" the new look of Ubuntu, Announcing the 10.10 Ubuntu Developer Summit, UI Freeze in place for Lucid, Developer Membership Board meeting, International Women's Day Vote, Getting Patches Upstream, The Grand App Writing Challenge Submissions, Server Bug Zapping results, Ubuntu Classroom Team presents "ClassBot", February 2010 Team Reports, and much, much more!"

Full Story (comments: none)

Distribution meetings

Debian at CeBIT

Click below for a report from the Debian booth at CeBIT. "This year we were guests at the booth of Univention, a German company basing their products upon Debian, in exhibition hall 2, near one of the main entrances of the exhibition. While I must say that we had less visitors (an overall trend at this year), the quality of questions asked was far better than previously."

Full Story (comments: none)

Newsletters and articles of interest

Innovators get Linux to boot in 1 second (EDN)

EDN reports that MontaVista Software has developed an embedded version of Linux that boots in less than a second. "In addition to designing real-time Linux, MontaVista has been working on the development of real-fast Linux, a Linux operating system that boots in less than 1 second. The team who worked on the project includes Alexander Kaliadin, Nikita Youshchenko, and Cedric Hombourger. Many on the team also worked on the MontaVista real-time Linux. "One of the first things we did years ago was to make the Linux scheduler pre-emptive and deterministic," says Hombourger. These fast-boot developments are not necessarily limited to real-time or an embedded Linux; however, they can get a conventional Linux distribution to boot in 1 second, as well."

Comments (24 posted)

The Three Giants of Linux (Linux Magazine)

Linux Magazine takes a look at the Linux distribution ecosystem. "By the time Slackware came onto the scene, there were already half a dozen Linux distributions. A few months later however, on August 16th 1993, one of the most important was about to emerge all on its own, which today takes the crown for the oldest surviving independently developed Linux distribution. Meet Debian. Debian was not a fork of any previous work, but an independent project in its own right, created by Ian Murdock. Entirely community driven, Debian remains the largest non-commercial distributor of Linux. Almost one year after the birth of Debian, in 1994 the third and final member of the most influential distributions arrived on the scene, Red Hat Linux."

Comments (none posted)

Interviews

The Linux Desktop Will Have Its Day (LinuxInsider)

LinuxInsider talks with Mark Shuttleworth. "Mark Shuttleworth: People think of Ubuntu as Linux, or Red Hat as Linux, or they think of Debian as Linux. But actually the real work gets done in many upstream communities. The distributions get a lot of credit. And our focus has been to really try to serve those upstream communities well by delivering their code to users on a very predictable schedule with the highest levels of quality and integration."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Bluefish 2.0: Slim but powerful

March 10, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

Long-time Linux users, especially those with a penchant for Web development, are probably familiar with the venerable Bluefish editor. The Bluefish team released 2.0 in mid-February, and it brings with it a number of subtle improvements and enhancements for managing projects, custom code, and crash recovery.

[Bluefish 2.0]

Bluefish 2.0 is released under the GPLv2, and packages are available for Debian, Ubuntu, Fedora, OpenSolaris, and AltLinux. The Bluefish team also provides a Windows port, but it lacks some of the features found in Bluefish 2.0 for the Unix family. For example, the remote file support via GVFS is absent, and external filters are not supported.

At its heart, Bluefish is a more than competent text editor. Bluefish is a first-class HTML editor, but it also provides support for a number of other languages and markup formats. Bluefish supports C/C++, ASP, Ada, Java, SQL, PHP, Perl, Python, Ruby, shell scripts, Mediawiki markup, and several others. HTML, CSS, and JavaScript are where the editor really shines, however.

[Cherokee support]

Not much has changed on the surface in the 2.0 release. Bluefish 2.0 doesn't look much different than its predecessor. It retains the tab-based toolbars, with entries for "Standard" HTML features, fonts, tables, frames, forms, and CSS. 2.0 adds a character map, so it's trivial to insert accented characters, symbols like the copyright character, and characters from a wide range of languages. Bluefish is the editor to choose for any developer creating a site that uses the Cherokee syllabary or any of dozens of other character and symbol sets. Bluefish also gives the option of inserting characters directly, or as HTML entities.

[Bluefish 1.x]

The 2.0 Bluefish release replaces Bluefish's 1.x series custom menu with a snippets sidebar. The previous release of Bluefish had a custom menu that allowed the creation of additional dialogs that would insert user-defined code fragments and help automate tasks. For instance, a user could define a custom search and replace function that would go through a document and swap out curly quotes for straight quotes, or otherwise clean up bad text to help conform to a style guide. It was a little complex to get started with, but otherwise a useful feature.

[Snippet dialog]

The Snippets sidebar is perhaps more conveniently placed, but not as intuitive when creating new functions. To add a new item, right-click on the snippets sidebar and select "New Snippet." You'll be asked for the name of the "branch," which is actually the name of the top-level menu that will be defined. After providing the "branch" name, the dialog closes and you'll see an entry in the Snippets sidebar that does nothing. After defining the top-level menu, click on that and select New Snippet again. This time the user is allowed to define the type of entry that will be added: Another branch (sub-menu), a string to be inserted, or a search and replace pattern.

The dialogs are a bit arcane, but once one stumbles on the trick to adding new snippets, it's a bit more user-friendly than the 1.x series. It's a good feature, but perhaps not as well implemented as it could be. The primary problem is the lack of accompanying documentation. The only documentation on creating snippets that seems to be available is in one of the Bluefish 2.0 movies found on the Screenshots page. The Bluefish Manual looks to be a bit outdated and only covers up to version 1.6.

If it sounds like Bluefish is not user-friendly, that's far from the case. This release packs in some really helpful features that make using Bluefish a joy - the autocompletion feature, for example. When you start typing an HTML tag or a function for one of the supported languages, Bluefish will start offering possible tags or functions. Type "<a" and Bluefish will supply a context menu with the possible tags that begin with "<a" and help text that spells out what each tag is. It's not limited to HTML; it also works with CSS, JavaScript, and so on. It doesn't include tags or other snippets that are user-defined, but the built-in autocompletion for supported languages seems to be very comprehensive.

Another friendly feature in 2.0 is automatic document recovery. If a session crashes for some reason, Bluefish will reopen all of the documents that had unsaved changes, with those changes intact, when it is restarted. To make sure that this worked as advertised, I used xkill to force Bluefish 2.0 to crash several times; it was uncooperatively stable during testing, which forced my hand. The only problem is that Bluefish only recovers documents with unsaved changes. Documents that had been saved when a Bluefish session crashed are not automatically reopened. But, since Bluefish doesn't have a habit of crashing, it shouldn't be an issue very often.

For tasks that require working on multiple files at the same time, Bluefish 2.0 has support for saving the group of files as a project. When files are saved together as a project, users can open one or fifty (or more) files at the same time and keep all of a project's work together as a group. Project support isn't new to 2.0, but it does advance the feature by letting users set many more preferences for the project. For instance, users can specify on a per-project basis the template to use, the default MIME type for new files, tab width in the documents, and if a project should support block folding.

Bluefish supports syncing files between local directories or from a local directory to a remote host over SFTP, FTP, HTTP, HTTPS, WebDAV, or CIFS — depending on which GVFS virtual file systems are supported on your system. This is slightly inconvenient, as it requires mounting the remote filesystem through Nautilus before it can be accessed in Bluefish. But it only needs to be done once: when the configuration is stored as part of a project, Bluefish will let you sync the local and remote files with just a few clicks.

Like any good Unix tool, Bluefish plays well with others. Files in Bluefish can be processed through commands and filters like HTML Tidy, make, xmllint, sort, uniq, or other scripts or tools specified by the user. Additional tools can be added via Bluefish's preferences. It's possible to filter just a section of a file, or the entire file that's being edited.

The Bluefish team calls the editor a "what you see is what you need" tool — as opposed to What You See Is What You Get (WYSIWYG) editors that attempt to hide some of the complexity of developing a Web site. Bluefish gives users bare-metal access to the toolset without any frills. It does, however, give the ability to feed pages to Firefox or other browsers for rendering so that it's possible to see how a site is coming together.

For users who know their way around the languages and markup they'll be using, Bluefish is a really useful tool that helps support the user's expertise and makes it easy to pull together assorted free software tools for developing Web sites. It might not be very well-suited for users who are accustomed to heavy duty tools like DreamWeaver or even Kompozer that provide WYSIWYG editing and do a lot of behind-the-scenes work for the user. The trade-off is that Bluefish provides a great deal of control and flexibility and is likely to be preferred by users who have advanced Web development chops already.

Comments (2 posted)

Brief items

Apache 2.2.15 released

Version 2.2.15 of the Apache HTTPD server is out. "Notably, this release was updated to reflect the OpenSSL Project's release 0.9.8m of the openssl library, and addresses CVE-2009-3555 (cve.mitre.org), the TLS renegotiation prefix injection attack. This release further addresses the issues CVE-2010-0408, CVE-2010-0425 and CVE-2010-0434 within mod_proxy_ajp, mod_isapi and mod_headers respectively."

Full Story (comments: 1)

Mercurial 1.5 released

A new major release of the Mercurial source code management system is out. New features can be seen on the Mercurial "what's new" page; they include some new branching options, more flexible importing of patches, XML log templates, and more. The download directory contains the source.

Comments (10 posted)

Open Clip Art Library 2.0

[Cougar] Version 2.0 of the Open Clip Art library is available. "Open Clip Art Library now has over 26,000 original and remixed high quality scalable vector graphic (SVG) files that have been produced by over 1,200 creative artists! March 2010 marks both Open Clip Art's 6th anniversary and 1 year since last spring's launch of version 0.19. The project's launch is so massive, the project jumped its release number to 2.0."

Full Story (comments: none)

OpenSSH 5.4 released

The OpenSSH 5.4 release is out, with a number of new features; these include a new certificate format, a "netcat mode," a key revocation operation, better multiplexing support, and strengthened encryption. This release also disables (by default) support for version 1 of the SSH protocol - a change which few users should notice at this point.

Full Story (comments: 23)

PowerDNS Recursor 3.2

Version 3.2 of the PowerDNS recursor (DNS resolver) has been announced. The bulk of the changes would appear to be aimed at improved performance: "In practical numbers, over 40,000 queries/second sustained performance has now been measured by a third party, with a 100.0% packet response rate. This means that the needs of around 400,000 residential connections can now be met by a single commodity server."

Comments (1 posted)

Renoise 2.5

[Renoise] Version 2.5 of the Renoise music production environment is out. New features include a "pattern matrix," cross-track routing, a better MIDI mapping module, lots of new internal effects, and more; see the "what's new" page for details.

Full Story (comments: none)

StatusNet 0.9.0 released

After 8 months of development, the open microblogging system StatusNet has released version 0.9.0. StatusNet is the software behind the identi.ca microblogging service. The new version has lots of new features including support for the OStatus distributed status update standard, support for geolocation, no fixed message size (though 140 characters is still the default), web-based administration, a moderation system, and much more. "Under the covers, the software has a vastly improved plugin and extension mechanism that makes writing powerful and flexible additions to the core functionality much easier."

Comments (none posted)

Transifex v0.8 "Magneto" has been released

Transifex, the "open translation platform" has released version 0.8, codenamed "Magneto". There are lots of new features in the release including the addition of translation teams and reviews, a timeline/history view, better notification support, and more. "Transifex is a localization platform that gives translators a simple yet featureful web interface to manage translations for multiple remotely-hosted projects. Files to be translated can be translated straight from the user's browser or retrieved for offline translation, and various translation statistics can be read at a glance. Popular projects using Transifex include the Fedora Project, Moblin, XFCE and LXDE." Click below for the full announcement.

Full Story (comments: 9)

Twisted 10.0.0 released

Version 10.0.0 of the Twisted web framework has been released. It features improved documentation, performance improvements, and a lot of fixes; "It's stable, backwards compatible, well tested and in every way an improvement."

Full Story (comments: none)

Newsletters and articles

Newsletters published in the last week

Comments (1 posted)

Getting Loopy: Performance Loopers For Linux Musicians (Linux Journal)

Over at Linux Journal, Dave Phillips continues his adventures in Linux audio with a look at audio loopers for Linux. "Performance loopers are machines that record an audio signal and capture it to a buffer for use as a repeating loop over which a performer improvises a new musical line or even another loop. A performance looper records multiple loops, thus giving the user an opportunity to compose additively in realtime."

Comments (none posted)

GNOME and KDE: Seven Attractions in Each (Datamation)

Bruce Byfield takes a look at innovations in GNOME and KDE. "Of course, GNOME and KDE have long had features that Windows lacked, such as multiple desktops and finer controls for customizing the user experience. However, in the last few years, both major free desktops have added features that show not only an interest in usability, but, at times, an effort to anticipate what users might actually want. The focus is by no means consistent, yet scattered here and there are features that can make any user glad that they're using a open source desktop."

Comments (4 posted)

Try the Linux desktop of the future (TuxRadar)

TuxRadar takes a look at several desktops and applications. "For the tinkerers and testers, 2010 is shaping up to be a perfect year. Almost every desktop and application we can think of is going to have a major release, and while release dates and roadmaps always have to be taken with a pinch of salt, many of these projects have built technology and enhancements you can play with now. We've selected the few we think are worth keeping an eye on and that can be installed easily, but Linux is littered with applications that are evolving all the time, so we've also tried to guess what the next big things might be."

Comments (36 posted)

Page editor: Jonathan Corbet

Announcements

Commercial announcements

Magnatune sends check to GNOME Foundation thanks to Rhythmbox

John Buckman, founder and owner of Magnatune, writes about sending money to the GNOME foundation based on 10% of the sales via Rhythmbox's Magnatune plugin. "Also FYI this means that RB has raised $3579.50 for independent musicians (because we pay out 50% of what comes in to musicians, and the 10% RB payout is coming out of Magnatune's half) [...] What this means is that I've now sent a check for $614.20 to GNOME Foundation." He also notes that Ubuntu has changed the referrer in the Rhythmbox plugin so that the $1017 in sales via that channel will instead be credited to Ubuntu. Also see the related article from the March 4th edition of LWN. (Thanks to Don Marti).

Comments (7 posted)

Articles of interest

Greenschool Motorcycles develops Linux-powered electric motorcycle (Ecofriend)

Ecofriend takes a look at a Linux-powered electric motorcycle. "The motorcycle sports a touchscreen dash powered by Ubuntu that offers stats about the bike's performance and GPS navigation as well. The one-of-a-kind motorcycle is based around a Honda chassis that has its tail chopped and engine replaced with a pack of nickel-metal batteries for a completely silent ride."

Comments (none posted)

Legal Announcements

European Parliament pushes back on ACTA

Swedish MEP Christian Engström reports that the European Parliament has passed a resolution coming out against the secretive ACTA copyright treaty negotiations and demanding transparency in the process. The vote was rather definitive: 633 for, 13 against. "At last, the elected representatives in the parliament have sent a strong message. We have shown that we do not accept secrecy. We have shown that we are prepared to stand up for a free internet open to everybody."

Comments (9 posted)

Meanwhile, back in Utah...

The SCO case has long since dropped off the radar for most. It is worth noting, though, that the Novell "slander of title" trial is now underway in Utah. Groklaw has detailed coverage of the testimony thus far. "Why did Novell slander SCO's title? Because of Linux. Linux started as a hobbyist tool. It's open source; 'nobody can be completely sure where the code comes from'. Starting around 2000, IBM inserted into Linux stuff that belonged to SCO. SCO sued, and started their licensing program (SCOsource). Novell stated that SCO doesn't have the copyrights and can't sue IBM."

Comments (16 posted)

Microsoft Signs Linux Patent Agreement With I-O Data (ITProPortal)

ITProPortal reports that Japan-based I-O Data Device has signed a patent cross-licensing deal with Microsoft. "The software maker asserted that the network attached storage devices from I-O Data Device use Linux-based technologies that come under the "patent covenants". David Kaefer, intellectual property chief at Microsoft, said in a statement: "We're pleased to reach this agreement with I-O Data"."

Comments (16 posted)

New Books

"HTML & CSS: The Good Parts" and "RESTful Web Services Cookbook" -- New from O'Reilly

"HTML & CSS: The Good Parts" and "RESTful Web Services Cookbook" have been released by O'Reilly.

Full Story (comments: none)

Blog Postings

Andy Updegrove: Elliott Associates and Novell: All About a Game of Cat and Mouse

Andy Updegrove looks at the next steps in Elliott Associates' bid for Novell. He looks at what a tender offer is, what Elliott's and the Novell board's strategy may be going forward, and what other bidders might bring to the table. "There are two things to watch for at this point: the first is whether Novell's board decides to enter into negotiations with Elliott, or to rebuff the offer (you saw Microsoft and Yahoo go through this dance not so long ago). And the second is whether other bidders enter the scene. Such a bidder could be solicited by Novell's management and board (a 'White Knight'), because they don't like the looks of the Elliott bid and what may come afterwards, or there could be further unsolicited bids."

Comments (5 posted)

Schwartz: Good Artists Copy, Great Artists Steal

Jonathan Schwartz writes about patent attacks, and Apple's attack on Android in particular. "Having watched this movie play out many times, suing a competitor typically makes them more relevant, not less. Developers I know aren’t getting less interested in Google’s Android platform, they’re getting more interested - Apple’s actions are enhancing that interest." He also says that Microsoft tried to shake down Sun with patent claims on OpenOffice.org.

Comments (9 posted)

Simon Phipps: Last Day At Sun

Simon Phipps, Chief Open Source Officer at Sun, reminisces about some achievements during his tenure. "Got some of the most important software in the computer industry released under Free licenses that guarantee software freedom for people who rely on them, regardless of who owns the copyrights. Unix, Java, key elements of Linux, the SPARC chip and much more have been liberated."

Comments (3 posted)

Calls for Presentations

Call for Papers: EC2ND 2010

The Call for Papers for the sixth European Conference on Computer Network Defense (EC2ND 2010) is open until July 2, 2010. The conference will be held in Berlin, Germany, October 28-29, 2010. "EC2ND 2010 specifically encourages submissions presenting work at an early stage with the intention to act as a discussion forum for innovative security research. While our goal is to solicit ideas that are not completely worked out, and might have challenging and interesting open questions, we expect submissions to be supported by some evidence of feasibility or preliminary quantitative results."

Full Story (comments: none)

Upcoming Events

LibrePlanet 2010 conference to feature Women's Caucus

The LibrePlanet conference, being held March 19-21 in Cambridge, Massachusetts, will feature a day-long Women's Caucus on Sunday, March 21st. That track will focus on finding concrete ways to increase women's participation in free software, including a panel on recruiting and retaining women, a presentation on mentoring, and a workshop on how non-coders can take up critical roles in free software projects. In addition, LibrePlanet has keynotes from FSF founder Richard Stallman and EFF founder John Gilmore. More information can be found on the web sites or in the schedule.

Comments (23 posted)

Texas Linux Fest announces 2010 program

Texas Linux Fest has announced the initial list of speakers and presentations for its inaugural event. Keynote speakers include Joe "Zonker" Brockmeier and Randal L. Schwartz, with additional presentations by Linux, free software, and open source experts such as Jon "maddog" Hall, Amber Graner, Bradley Kuhn, and Max Spevack. The event will take place on Saturday, April 10th, in Austin, Texas. Registration is available online. The complete list of talks is available as well.

Comments (none posted)

PyCon ITALY

The call for papers for PyCon Italy ended March 10, 2010, but community voting is open until March 18. The conference will be held in Firenze, Italy, May 7-9, 2010.

Full Story (comments: none)

10th Python Game Programming Challenge

The 10th Python Game Programming Challenge (PyWeek) will run March 28 - April 4, 2010. "The PyWeek challenge: Invites entrants to write a game in one week from scratch either as an individual or in a team."

Full Story (comments: none)

Events: March 18, 2010 to May 17, 2010

The following event listing is taken from the LWN.net Calendar.

March 13 - March 19: DebCamp in Thailand (Khon Kaen, Thailand)
March 15 - March 18: Cloud Connect 2010 (Santa Clara, CA, USA)
March 16 - March 18: Salon Linux 2010 (Paris, France)
March 17 - March 18: Commons, Users, Service Providers (Hannover, Germany)
March 19 - March 21: Panama MiniDebConf 2010 (Panama City, Panama)
March 19 - March 21: Libre Planet 2010 (Cambridge, MA, USA)
March 19 - March 20: Flourish 2010 Open Source Conference (Chicago, IL, USA)
March 22 - March 26: CanSecWest Vancouver 2010 (Vancouver, BC, Canada)
March 22: OpenClinica Global Conference 2010 (Bethesda, MD, USA)
March 23 - March 25: UKUUG Spring 2010 Conference (Manchester, UK)
March 25 - March 28: PostgreSQL Conference East 2010 (Philadelphia, PA, USA)
March 26 - March 28: Ubuntu Global Jam (Online, World)
March 30 - April 1: Where 2.0 Conference (San Jose, CA, USA)
April 9 - April 11: Spanish DebConf (Coruña, Spain)
April 10: Texas Linux Fest (Austin, TX, USA)
April 12 - April 15: MySQL Conference & Expo 2010 (Santa Clara, CA, USA)
April 12 - April 14: Embedded Linux Conference (San Francisco, CA, USA)
April 14 - April 16: Linux Foundation Collaboration Summit (San Francisco, CA, USA)
April 14 - April 16: Lustre User Group 2010 (Aptos, CA, USA)
April 16 - April 17: R/Finance 2010 Conference - 2nd Annual (Chicago, IL, USA)
April 16: Drizzle Developer Day (Santa Clara, CA, USA)
April 23 - April 25: FOSS Nigeria 2010 (Kano, Nigeria)
April 23 - April 25: QuahogCon 2010 (Providence, RI, USA)
April 24 - April 25: OSDC.TW 2010 (Taipei, Taiwan)
April 24 - April 25: BarCamb 3 (Cambridge, UK)
April 24: Festival Latinoamericano de Instalación de Software Libre (Many, Many)
April 24 - April 25: Fosscomm 2010 (Thessaloniki, Greece)
April 24 - April 25: LinuxFest Northwest (Bellingham, WA, USA)
April 24: Open Knowledge Conference 2010 (London, UK)
April 24 - April 26: First International Workshop on Free/Open Source Software Technologies (Riyadh, Saudi Arabia)
April 25 - April 29: Interop Las Vegas (Las Vegas, NV, USA)
April 28 - April 29: Xen Summit North America at AMD (Sunnyvale, CA, USA)
April 29: Patents and Free and Open Source Software (Boulder, CO, USA)
May 1 - May 2: OggCamp (Liverpool, England)
May 1 - May 4: Linux Audio Conference (Utrecht, NL)
May 1 - May 2: Devops Down Under (Sydney, Australia)
May 3 - May 7: SambaXP 2010 (Göttingen, Germany)
May 3 - May 6: Web 2.0 Expo San Francisco (San Francisco, CA, USA)
May 6: NLUUG spring conference: System Administration (Ede, The Netherlands)
May 7 - May 9: Pycon Italy (Firenze, Italy)
May 7 - May 8: Professional IT Community Conference (New Brunswick, NJ, USA)
May 10 - May 14: Ubuntu Developer Summit (Brussels, Belgium)

If your event does not appear here, please tell us about it.

Event Reports

Happenings: FOSS at CeBIT 2010 (The H)

The H covers the CeBIT Open Source Forum. "The CeBIT Open Source Forum, a prominent feature in the Open Source area of Hall 2, featured several lectures, demonstrations and keynote speeches on several topics, from Open Source in data centres and security, to web browsers, mobility and multimedia. The H attended several of the Open Source Forum sessions, including the introduction of the latest 6.3 release of the popular Knoppix Live Linux distribution by Knoppix creator Klaus Knopper."

Comments (none posted)

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds