LWN.net Weekly Edition for April 30, 2009
ELC2009: Visualizing memory usage with smem
One of the more frustrating things to try and figure out on Linux systems is how much memory is actually being used by a process. The ps command offers something of a view into memory usage, but adding up the numbers for various types of memory never yields a sensible result. It is against this backdrop that Matt Mackall presented his smem tool at this year's Embedded Linux Conference.
There is an "accounting problem
" when users try to look at the
memory usage in their systems, according to Mackall. The kernel saves lots
of memory by sharing various pages between processes, but then when it
reports the memory usage, it counts these shared pages multiple times. The
kernel will
also allocate more memory than is actually available, "in the belief
that it won't be used
". This means that users and developers can't
get a good sense of how the memory is used which leads them to "just
throw more memory at the problem
".
In 2007, Mackall attacked the problem from the kernel side by creating a set of patches that implemented the pagemap file for each process in /proc. This binary file "exposes the mapping from virtual to physical" memory, which can be used to get a better look at memory usage. He also created some user space tools to read the pagemap files (along with the related /proc/kpagemap for the kernel). As part of that, he "developed a pair of concepts to give meaningful measures" to memory usage.
One of those measures is proportional set size (PSS), which represents a process's "fair share" of shared pages. If a page is shared by five processes, each gets one-fifth of a page added to its PSS. The other measure is the unique set size (USS), which is the memory devoted exclusively to the process—how much would be returned to the system if that process were killed.
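To make the arithmetic concrete, here is a tiny sketch (a hypothetical process, not smem output or Mackall's code) of how USS and PSS fall out of per-page sharing counts:

```python
# Worked example of the PSS/USS arithmetic for a hypothetical process;
# 'share_counts' gives, for each mapped page, how many processes share it.
PAGE_SIZE = 4096

share_counts = {
    0x1000: 1,   # private page: a full page toward both USS and PSS
    0x2000: 1,
    0x3000: 5,   # shared five ways: one-fifth of a page toward PSS, nothing toward USS
    0x4000: 2,
}

uss = sum(PAGE_SIZE for n in share_counts.values() if n == 1)
pss = sum(PAGE_SIZE / n for n in share_counts.values())

print("USS = %d bytes, PSS = %.0f bytes" % (uss, pss))
# USS = 8192 bytes, PSS = 11059 bytes (2 + 1/2 + 1/5 pages)
```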
He then submitted the pagemap code for inclusion into the mainline. As part of that process, he got "lots of help" from various folks, added a direct PSS calculation, and redesigned the code and its interface. Linus Torvalds was not very impressed, and called the code "crap", but Mackall was able to convince him to include it by listing all of the people that had assisted as proof that it was a desired feature. Unfortunately, the changes that were made to pagemap on its way into the mainline broke all of the user-space tools he had written, and no one else released any tools based on pagemap.
![[smem bar chart]](https://static.lwn.net/images/smem_bar_sm.png)
So, now, in "take 2
", Mackall is trying to "write a useful
tool and hope it catches on
". The idea behind smem is to
integrate information from multiple sources to provide useful memory usage
information for developers, administrators, and users. In addition to the
expected textual output, Mackall included visualization aids in the form of
pie and bar charts.
With that introduction and history out of the way, Mackall went on to demonstrate the smem program. At its simplest, without any arguments, it produces a list of processes running on the system showing the process id, user, and command, along with four measures of memory used for each. Those measures are the amount of swap, USS, PSS, and resident set size (RSS), with the list being sorted by PSS. But, as Mackall showed, that output can be rearranged, sorted, and filtered by a variety of parameters.
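smem gathers those per-process numbers from the /proc filesystem. As a rough illustration of where the columns come from (this is not Mackall's code, and it assumes a kernel new enough to expose Pss: and Swap: lines in /proc/&lt;pid&gt;/smaps), the relevant fields can be totaled for a single process like this:

```python
#!/usr/bin/env python
# Sketch only: total the smaps fields that correspond to the columns
# smem reports for one process.  Values in smaps are given in kB.
import sys

def memory_summary(pid):
    totals = {"Rss": 0, "Pss": 0, "Swap": 0,
              "Private_Clean": 0, "Private_Dirty": 0}
    with open("/proc/%s/smaps" % pid) as smaps:
        for line in smaps:
            field = line.split(":")[0]
            if field in totals:
                totals[field] += int(line.split()[1])
    uss = totals["Private_Clean"] + totals["Private_Dirty"]
    return uss, totals["Pss"], totals["Rss"], totals["Swap"]

if __name__ == "__main__":
    uss, pss, rss, swap = memory_summary(sys.argv[1] if len(sys.argv) > 1 else "self")
    print("USS %d kB  PSS %d kB  RSS %d kB  swap %d kB" % (uss, pss, rss, swap))
```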
In addition to looking at memory from the perspective of processes, smem can look at memory usage by mapping or user, and all three can be used in regular expression filters. As he was showing various options, Mackall commented on a few programs running on his laptop, noting that gweather used 5M for "32 square pixels on the screen", and that tomboy is "useful, but I'm not sure it's 6.9M of useful".
Since the target audience was embedded developers—and conference sponsor CE Linux Forum funded the work—Mackall turned to describing ways to use smem in embedded environments. The program itself is a Python application, which is "not that huge, but not small", so "[you] don't want to run it on your phone". What is needed is a way to capture the data, so that it can be pulled over to another machine to "slice and dice it" there.
To that end, smem will read a tar file that has been collected
from the /proc filesystem on the target machine. Mackall has
created a simple script to grab the relevant pieces from /proc and
create a .tgz file.
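Mackall's capture script was not shown in detail; the sketch below is a hypothetical stand-in for the same idea, bundling a guess at the relevant /proc files into a .tgz that can be carried to a development machine. The exact set of files smem wants is an assumption here.

```python
#!/usr/bin/env python
# Hypothetical capture script in the spirit of the one described above
# (not Mackall's actual script): grab /proc pieces for an off-target
# smem run and write them to a .tgz.
import glob
import io
import tarfile

def capture(output="smem-capture.tgz"):
    wanted = ["/proc/meminfo", "/proc/version"]
    for piddir in glob.glob("/proc/[0-9]*"):
        wanted += [piddir + "/smaps", piddir + "/cmdline", piddir + "/stat"]

    with tarfile.open(output, "w:gz") as tar:
        for path in wanted:
            try:
                data = open(path, "rb").read()  # /proc files report size 0, so read first
            except IOError:                     # the process may have exited meanwhile
                continue
            info = tarfile.TarInfo(name=path.lstrip("/"))
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

if __name__ == "__main__":
    capture()
```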
Mackall also demonstrated a system-wide view of memory that would be useful for embedded developers who are trying to size the memory requirements for their device. By passing arguments that give the amount of installed memory, along with the path to an uncompressed, unstripped kernel image, smem can produce output like:
```
$ ./smem -R 2G -K ~/linux-2.6/arch/x86/boot/compressed/vmlinux -k -w -t
Area                           Used      Cache   Noncache
firmware/hardware             35.2M          0      35.2M
kernel image                   6.1M          0       6.1M
kernel dynamic memory          1.5G       1.3G     189.6M
userspace memory             283.5M      85.8M     197.7M
free memory                  188.7M     188.7M          0
----------------------------------------------------------
    5                          2.0G       1.6G     428.6M
```

This shows that with the current workload on this machine, 428M of memory is required. If this workload is known to be fixed, 512M of RAM could reliably be specified for the system.
![[smem pie chart]](https://static.lwn.net/images/smem_pie_sm.png)
All of the smem output can be converted to rudimentary pie and bar charts, which can be saved in a variety of formats (PNG, SVG, JPG, EPS, and more). As Mackall explained, there are still lots of tweaks to be made to the output, but it is basically functional and allows some interaction (zooming in for example).
A better GUI is one of the things on the wish list for further smem development. First off, Mackall would like to get some users for the tool who will report bugs and, hopefully, provide patches as well—interested folks are directed at the download page or the project page for additional info. In addition, better capture tools (capturing via TCP for example), adding more sources of data (CPU usage, dirty memory, ...), adding support for better data from the kernel, and improved visualization are all things he would like to see added. It is functional and useful now, but could become something far better down the road.
Shell and Zeitgeist: the future of GNOME?
The announcement a few weeks ago of the preliminary plans for GNOME 3.0 catapulted the GNOME Shell and GNOME Zeitgeist into the spotlight. Previously little-known, these programs are now identified as the basis of a new user experience in GNOME 3.0. Meanwhile, both are in their early stages, and few have tried them, with the result that they are surrounded by question marks.
What exactly are these programs? What vision do they have in common? Most importantly of all, are they capable of bearing the expectations placed upon them? Any answers to these questions must be tentative, because both projects are in rapid development and certain to change dramatically by the time GNOME 3.0 is released. All the same, those in search of preliminary answers can find them with a bit of quick compiling.
The GNOME Shell
![[GNOME Shell]](https://static.lwn.net/images/gnome-shell_sm.png)
The GNOME Shell is now intended as the replacement for the current panel, window manager, and desktop. The project site gives detailed instructions for building the latest version of the application. These are relatively straightforward, although you might need to add ~/bin to your path in order to complete the compile. You should also know that the instructions apparently assume that you are using Metacity, the current version of GNOME's default window manager, since they do not work with any other.
After compiling, you can install Xephyr, a nested X server, to run the GNOME Shell in a window on your current desktop. Alternatively, you can temporarily replace Metacity with the GNOME Shell, following the instructions provided by the project. In my experience, using Xephyr is more likely to be successful.
However you start GNOME Shell, its differences from the GNOME 2 series of releases are immediately obvious. Not only the layout but also the logic with which you use it is radically different from any GNOME desktop you have ever seen.
Across the top is a simplified panel, with the time and user on the right and a button marked "Activities" on the left. It contains no applets, menus, or system notification area, and the taskbar is on a separate panel at the bottom.
The Activities button is the key to the GNOME Shell. As in KDE 4, in the GNOME Shell, "activities" refers to virtual workspaces, and that term was selected to indicate how to use them. In fact, when you start the GNOME Shell, you are looking at a full-screen workspace with the applications xeyes, xlogo, and xterm on it. Click the Activities button, and the workspace shrinks to reveal the complete desktop.
That desktop is as simple as the panel. On the left is a list of recently used applications that can be expanded by clicking the link marked "More". Recent documents have a similar arrangement below. Each expands into a complete list in a second column of menu items if necessary, with multiple pages.
To the right are large thumbnails of available workspaces. These thumbnails change size as their number increases or decreases, or a menu expands into a second column. When you select an application or document, it opens full-screen. Click the Activities button, and it repositions itself as a thumbnail on the current activity, sized and arranged so as not to overlap with anything else on the activity. If you want to use a thumbnailed application, you either click on it or on its taskbar listing to run it full-sized. In effect, workspaces are launchpads for applications, rather than places that you actually work upon.
As a desktop, the GNOME Shell is extremely economical with space, and well-suited for giving the currently active application a maximum amount of space. However, if monitor space is not your concern, then the GNOME Shell can quickly become irritating. You are continually clicking to expose one item and hide another. Nor is the user experience helped by the fact that you currently have to make frequent wide sweeps with the mouse up to the Activities button, although no doubt keybindings will eventually remove this annoyance.
Nor is there any easy way to work with two items side by side (although you can do so from the taskbar), nor to track the activity that an application is performing without making it active, nor to jump to a particular activity in a single click. These limitations may be reduced or eliminated later, but, for now, they give the GNOME Shell the appearance of an interface intended for mobile devices, where such features are less often needed.
The GNOME Shell may put the desktop into a strong position for the future by providing a common interface for all the platforms it might be installed upon. Given the rapid growth of mobile devices, having them as the main basis for interface design may be an inevitable evolution. However, it risks short-changing workstation users, whose computing can be more demanding than that of mobile users.
GNOME Zeitgeist
![[GNOME Zeitgeist]](https://static.lwn.net/images/zeitgeist_sm.png)
GNOME Zeitgeist is reminiscent of Nemo, in that both replace standard file managers based on the directory tree and the desktop with ones based upon a calendar and other criteria. Both seem to assume that users do not want to know where their files are, or to hunt for them visually — they just want their files when they need them. What you think of GNOME Zeitgeist will probably depend on how much you agree with that assumption.
Unlike the case with the GNOME Shell, the Zeitgeist project offers little assistance to downloaders. Fortunately, all you need to do is install the Bazaar version control system and, with an Internet connection, run the command bzr branch lp:gnome-zeitgeist to download the code.
Once downloaded, there is no need to compile. Instead, just go to the download directory and enter sh ./zeitgeist-daemon.sh to start the service (probably in a separate window or in the background), followed by sh ./zeitgeist-journal.sh to run the main graphical interface.
GNOME Zeitgeist opens on a three day calendar, showing yesterday, today, and tomorrow, and a list of files accessed on each day. This is the view offered when you click the "Recent" icon in the toolbar. You can also click the "Older" or "Newer" icons to change the dates in the three-pane display, or the "Calendar" icon to change the view to one appropriate for a particular date.
Other ways of viewing files include Bookmarks, Tags, and Filters for file types, all of which are available in at least one existing file manager, although not with the same ease of use as in GNOME Zeitgeist.
If you return to the download directory, you will also find two additional pieces of GNOME Zeitgeist that have yet to be integrated into the main interface: zeitgeist-timeline.sh, which looks as though it presents a longer, alternative view of files created each day, and zeitgeist-project.sh, which presumably groups related files together. Other criteria for finding files, such as by location, are due to be added later.
As a collection of features in a traditional file manager, Zeitgeist would be a welcome enhancement. However, having Zeitgeist as a default file manager raises numerous questions. Is its assumption of the average users' preferences correct? Or will it create another barrier between desktop users and the command line by promoting a different concept of how files are accessed? Would users be better off if they were encouraged to organize their files, instead of just dumping them in their home directories?
From one perspective, GNOME Zeitgeist might be seen as the equivalent of a word processor that favors manual formatting over the creation of styles — as an application that encourages sloppy computer habits. Others, however, might argue that such programs are simply being realistic about users' work habits.
Pain or paradise?
Neither the GNOME Shell nor GNOME Zeitgeist should be judged on speed or looks yet. Both projects are still at the stage of adding functionality. However, enough functionality exists in both that a few preliminary comments are possible.
First, even together, the GNOME Shell and GNOME Zeitgeist seem a slight foundation on which to build an entire new desktop. Although each is an interesting innovation, are the two enough to "revamp" the user experience, as the announcement of GNOME 3.0 promises? So far, it is uncertain that they are. Moreover, each is primarily a change at the interface level. To what extent either will require other GNOME applications to be rewritten, and to what extent GNOME's back end libraries will need to be overhauled, is still being determined. So far, the news about GNOME 3.0 plans suggests that the rewriting of the back end may be fairly minimal.
Just as importantly, whether the two will create a common experience is still up in the air. So far, the two applications seem to be proceeding along different lines of thought about usability. In particular, while the GNOME Shell is all about economical use of desktop space, GNOME Zeitgeist works best in a large window. And while the GNOME Shell radically changes how users interact with the desktop, GNOME Zeitgeist's interface is much more like the applications to which users are accustomed. At some point, there will probably have to be an agreement on standard designs if the two are going to integrate well.
Finally, while few would claim that the user experience on any computer desktop is perfected, will users accept such radical rethinking? Both projects are attempting to make the user experience easier, but both depart strongly from everything that users have become accustomed to over the last two decades. Considering that KDE 4.0 received a rough reception, despite the fact that it was an evolution of the existing desktop rather than a complete departure, GNOME 3.0 may run the risk of provoking its own user revolt.
Of course, these are early days, and the validity or absurdity of such concerns will become clearer as both projects progress. How GNOME 3.0 is marketed and documented will also affect its reception. But, so far, the GNOME Shell and GNOME Zeitgeist arouse as much apprehension for GNOME 3.0 as hope. We'll have to wait to see which was more justified.
Eucalyptus: running a private cloud on Ubuntu
While Ubuntu has been able to attract an impressive "market share" as a GNU/Linux distribution in just a couple of years, this success has been limited mainly to the desktop. Canonical has made it clear that it has ambitions for the server market, but at the moment Ubuntu Server Edition does not stand apart from enterprise distributions like Red Hat Enterprise Linux and SUSE Linux Enterprise. With the newest edition, Ubuntu 9.04 ("Jaunty Jackalope"), Canonical has tried to outsmart its competitors by focusing on cloud computing.
"Cloud computing" comes down to providing computing resources as a dynamically scalable service. In practice this means that customers rent computers to run their own software. The best known cloud computing system is Amazon Elastic Compute Cloud (EC2). Customers can create virtual machines (which EC2 calls server instances) and run them on Amazon's servers. They are charged for each hour the virtual machine runs, and for the bandwidth used. Amazon distributes a set of (proprietary) EC2 tools to manage a cloud. With these command line programs, users can create, launch or terminate virtual servers, as well as any other imaginable task.
The most innovative part of Ubuntu 9.04 is found in the Eucalyptus project, which brings an Amazon EC2-style private cloud within the reach of every Ubuntu 9.04 user. At the moment it's still a technology preview, which will not be considered production-ready until Ubuntu 9.10 later this year. Eucalyptus makes it possible to investigate cloud possibilities inside a company, without the need to deploy the applications on external servers at Amazon. Because Eucalyptus is interface-compatible with the EC2 APIs, the same EC2 tools can be used. This means that working with virtual machines on Eucalyptus is almost identical to working with virtual machines on Amazon EC2, and a company wanting to use cloud computing on Ubuntu has the choice between Ubuntu Server on Amazon EC2 and Ubuntu Server on Eucalyptus, which Canonical is calling Ubuntu Enterprise Cloud.
The Eucalyptus project is important because it 'frees' cloud computing. Traditionally, cloud computing systems have been the playground of large companies from Google, Amazon and IBM to Microsoft. Vendor lock-in is a serious issue in this emerging market. However, with Eucalyptus there is a technology which allows anyone to set up their own cloud system on their own hardware. The framework essentially implements what is commonly referred to as "Infrastructure as a Service": a system with the ability to run and control collections of virtual machine instances deployed across a variety of physical servers.
Development of Eucalyptus
The name Eucalyptus is an acronym for "Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems". The software was originally a research project in the Computer Science Department at the University of California, Santa Barbara (UCSB). The research question that was investigated concerned the combined use of U.S. National Science Foundation supercomputers, university research machines, and the public clouds for large-scale science applications. Through that project, called VGrADS, the researchers designed and coded what was released as Eucalyptus 1.0 in May 2008. This gave them an environment in which to host a cloud for themselves.
At that time, the only commercial cloud was Amazon's EC2. The researchers ported their grid system to Amazon EC2, and at the same time built Eucalyptus as an EC2-compatible cloud system. Because the researchers didn't have the resources to support more than one cloud API, Eucalyptus was written as a drop-in replacement for EC2, such that a system running on EC2 could also run on Eucalyptus in the same way. The developers worked only from Amazon's freely published API specifications. Thus, internally, Eucalyptus works completely differently than EC2, but it faithfully reproduces EC2 functionality.
While Eucalyptus was designed as a tool to support research, the developers clearly saw that it had potential in the non-academic world too. Therefore, they released the code as open source under a BSD license. However, for the moment, the developers are restricting external contributions to bug fixes, because they want to keep the code base stable in this early phase of development. According to project lead Rich Wolski, this policy will change during this year:
There was a close collaboration between the Eucalyptus and Ubuntu developers to get Eucalyptus in Ubuntu 9.04. For example, originally Eucalyptus used Xen as its virtualization platform, but because Ubuntu favors KVM it has been integrated with KVM in Ubuntu. However, Eucalyptus isn't tied to Ubuntu or KVM. As Rich Wolski says:
Creating your own private cloud with Eucalyptus
So how does Eucalyptus work? It's essentially a set of web services: the user makes a request to a front-end web service, the cloud controller. If the request is for storage, it is forwarded to Walrus, the storage front-end that is compatible with Amazon S3. The storage request is then forwarded to storage controllers running at the cluster level. If the request to the cloud controller is not about storage, it is forwarded to web services on the cluster level and next to the individual compute nodes.
Eucalyptus consists of three parts, which come in Ubuntu as three packages:
- eucalyptus-cloud: the cloud controller, implementing the EC2 and S3 APIs. A Eucalyptus system needs only one cloud controller.
- eucalyptus-cc: the cluster controller, which is the master server and implements the virtual network. A Eucalyptus system normally needs only one cluster controller.
- eucalyptus-nc: the node controller, which controls the KVM hypervisor and manages the virtual machines on a node. Each physical server in the cloud needs a node controller.
The three components can also be installed on one computer, for example to evaluate Eucalyptus on a single Ubuntu 9.04 system for the first time.
Installing and deploying Eucalyptus on Ubuntu 9.04 is still somewhat complicated, but the Ubuntu community documentation is an excellent guide for the installation. The user trying to install Eucalyptus will definitely meet some rough edges. For example, a cluster can be added to the cloud in the web interface, but adding nodes has to be done on the command line. And the EC2 tools bundled with Ubuntu 9.04 are not compatible with Eucalyptus, so users have to download another version of the EC2 tools manually. Moreover, while trying to set up a Eucalyptus system on a fresh Ubuntu 9.04 install, your author discovered that Eucalyptus is extremely sensitive to the virtual or physical network setup. If something is wrong with the network, the error messages from Eucalyptus and the EC2 tools are not helpful. And so on; one assumes these difficulties will be ironed out over time.
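One consequence of the EC2 API compatibility is that ordinary EC2 client libraries can drive a Eucalyptus cloud directly. The sketch below uses the boto Python library; the host name and credentials are placeholders, and the port 8773 and /services/Eucalyptus path reflect the Eucalyptus defaults of the era, so check your own cloud controller's configuration.

```python
# Sketch of driving a Eucalyptus cloud through its EC2-compatible API
# with boto.  Endpoint, port, path, and credentials are placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud-controller.example.com")
conn = EC2Connection(aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
                     aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
                     is_secure=False,
                     region=region,
                     port=8773,
                     path="/services/Eucalyptus")

# The same calls work against Amazon EC2 proper, which is the point of
# the compatibility: list registered images and running instances.
for image in conn.get_all_images():
    print(image.id, image.location)

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```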
Ubuntu in the cloud
Ubuntu's cloud computing possibilities don't stop with Eucalyptus. For the last few months, Canonical has had Ubuntu machine images for Amazon EC2 in beta, and, last week, the Ubuntu EC2 team announced the availability of public Ubuntu EC2 images for the 8.10 and 8.04 (LTS) releases. This provides a stable Ubuntu platform that allows users to run their applications in an EC2 environment. Meanwhile, the Ubuntu EC2 team is working on EC2 images for 9.04.
It's interesting to note that an Amazon machine image can be converted to a Eucalyptus machine image, even if Amazon is using Xen and Eucalyptus on Ubuntu is using KVM. The key difference is that Xen accepts an ext3 filesystem for use as a root filesystem, while KVM expects a disk image. The Eucalyptus developers have some internal tools for making this conversion and are using it frequently during the development and Q/A for each release. According to Wolski, they are planning to add the tools to a future Eucalyptus release. For now, converting images requires a little bit of an understanding of the different requirements each hypervisor has.
Canonical and Eucalyptus have worked together to make it as easy as possible to set up and manage a private cloud. To that end, the Eucalyptus web interface has a button to register for a RightScale account; RightScale offers a free developer edition for trying it out, as well as several paid editions with extra features. By following the link on the configuration web page of the Eucalyptus web interface, the user is ready to manage a cloud from within a RightScale dashboard. Users can see their virtual machine instances on their private cloud or on EC2 in one dashboard.
User management is also included; for example, a user can be allowed to launch his own cloud server on someone else's cloud. RightScale is now working with Canonical to ensure that the official Ubuntu 9.04 Amazon Machine Images will work out-of-the-box with RightScale. According to the official RightScale blog, this will work as follows:
An interesting note: RightScale first focused on CentOS, but switched to Ubuntu as its primary supported distribution because of Canonical's cloud plans.
Conclusion
When Ubuntu Server first appeared, a lot of people didn't believe it could be a real competitor in the enterprise Linux market. However, with Ubuntu 9.04 a clear focus is emerging. Canonical wants to do for cloud computing just what it has done with its desktop operating system: make it work out-of-the-box and make it easy to deploy. The collaboration between Canonical, Amazon, Eucalyptus, and RightScale is an important step in this direction. While working with Eucalyptus in Ubuntu 9.04 still has its rough edges, it's interesting to preview this flexible technology, which will hopefully be mature in Ubuntu 9.10 at the end of the year. The name "Karmic Koala" for the 9.10 release at least gives a nice insight into the core role Eucalyptus will play in Ubuntu Server.
A couple of new LWN features
We recently found time for a bit of site-code hacking, resulting in a couple of new features. First: our long, dark period as the only site on the net without a Twitter feed has now come to an end. Interested parties can follow article posts on either Twitter or identi.ca. We are just beginning to experiment with these channels; please let us know if you have any ideas for how we can use them better.

Meanwhile, as the comment volume increases, keeping up with new comments has gotten harder. We're pondering a number of changes to help in that regard. But one thing which has been implemented is the (subscriber-only) page at http://lwn.net/Comments/unread. This page will display all comments posted on LWN since the last time you visited it (it shows comments for 24 hours on the first visit), organized for readability. The page still has a couple of rough edges, but it's useful now. Again, comments are welcome.
Security
Linux ASLR vulnerabilities
A recent LWN comment thread—which unfortunately descended into flames and rudeness—had a post with some interesting pointers to recent security research on Linux address space layout randomization (ASLR). Both look to be plausible attacks against ASLR, and have not yet been addressed by the kernel hackers. Perhaps worse than that, though, is that these kinds of problems are evidently not being reported to linux-kernel (or other kernel security channels), or not being acted on. Over the years, the interaction of security researchers and kernel hackers has often been contentious, to the point where some security researchers may not be reporting the Linux flaws they find via the usual channels.
ASLR is a technique used to thwart buffer overflow vulnerabilities in user applications by randomizing the location of various pieces of the application's address space. Libraries, the heap and stack, as well as the executable code for a process are placed at random addresses so that attacker programs have a much more difficult time exploiting a buffer overflow. Without the use of ASLR, an attack could use hardcoded addresses of known locations in a process's address space (e.g. specific library functions) to perform its nefarious deeds.
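A quick way to see this randomization at work from user space is to look at where a process's own mappings land on successive runs. The sketch below (assuming a kernel with ASLR enabled) just prints the stack mapping from /proc/self/maps, which changes from run to run:

```python
# Run this a few times: with ASLR active, the stack (and library)
# mappings of each new process land at different addresses.
def stack_mapping():
    with open("/proc/self/maps") as maps:
        for line in maps:
            if "[stack]" in line:
                return line.split()[0]   # the "start-end" address range

print("stack mapped at", stack_mapping())
```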
It is important that attacker programs be unable to see—or figure out—the memory layout for other processes in the system. Attackers who can gain that information could then use any buffer overflows they know of for that program with all of the addresses they need. For that reason, /proc/pid/maps (a file that describes the address space for process id pid) only contains data when read by the owner of that pid—or someone who can ptrace() it. A recent advisory about memcache and memcacheDB divulging that information, unauthenticated, over the network should be worrisome for just this reason.
The decision to stop allowing anyone to read the maps file came about in 2.6.22, long after ASLR was added in 2.6.12. Based on a presentation [PDF] at this year's CanSecWest conference, there is still enough information being leaked from /proc files to be able to determine the address space layout for a program.
The /proc/pid/stat file reports the value of the instruction and stack pointers of the process, and the /proc/pid/wchan file reports its "wait channel", which is the function in which the process is currently blocked. Using that information, possibly sampled multiple times, along with a map of the instruction boundaries of the executable, Julien Tinnes and Tavis Ormandy were able to bypass ASLR.
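The sketch below illustrates the leak they relied on; it assumes the proc(5) field layout of the time, where fields 29 and 30 of /proc/&lt;pid&gt;/stat are kstkesp and kstkeip. Note that later kernels restrict or zero these values for unprivileged readers.

```python
# Read the kernel's view of a process's stack pointer and instruction
# pointer (kstkesp/kstkeip, fields 29 and 30 of /proc/<pid>/stat) and
# the function it is blocked in (/proc/<pid>/wchan).
import sys

def leak(pid):
    with open("/proc/%s/stat" % pid) as f:
        stat = f.read()
    # Skip past the "(comm)" field, which may contain spaces, then
    # count fields starting from field 3 (state).
    fields = stat[stat.rindex(")") + 2:].split()
    kstkesp = int(fields[29 - 3])
    kstkeip = int(fields[30 - 3])
    with open("/proc/%s/wchan" % pid) as f:
        wchan = f.read().strip()
    return kstkesp, kstkeip, wchan

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
esp, eip, wchan = leak(pid)
print("esp=%#x eip=%#x wchan=%s" % (esp, eip, wchan))
```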
The second flaw in ASLR was presented at Black Hat Europe by Hagen Fritsch. A whitepaper [PDF] describing the flaw is instructive. Essentially, the random number generator (RNG) used to create the addresses for ASLR is flawed, allowing those values to be correctly calculated up to two minutes after a target process has been run.
There is clearly a disconnect between the comment in the get_random_int() function (which uses the IP RNG secure_ip_id()) and the implementation of re-keying the RNG in drivers/char/random.c. The former claims that it gets re-keyed every second, but the REKEY_INTERVAL in the random driver is five minutes. If ASLR requires the RNG to re-key every second, a different function should be used. But, there is an additional problem.
The secure_ip_id() function takes one argument which it mixes with the key in order to generate the random number. get_random_int() passes the sum of the pid and the internal kernel counter jiffies as that parameter. For a period of five minutes, if the attacker can arrange for the same sum to be passed in, they will get the same value as the target process did. That can happen in one of two ways: either by calling execve() on the desired target within one jiffy of when the attack process started—a rather difficult thing to arrange for a number of reasons—or by calling execve() when pid + jiffies is the same as it was for the target process.
An attacker process can spawn children until it gets a desired pid, then wait for jiffies to reach a value where the sum is the same. Even though the absolute value of jiffies is not known outside of the kernel, various calculations on the difference in jiffie values can be used to narrow down the search. Once again, the /proc/pid/stat file can come into play here, by providing a start time for the target process with a granularity typically 2.5 times that of jiffies (10ms vs. 4ms).
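A grossly simplified model of the scheme (not the kernel's actual code; an ordinary keyed hash stands in for the RNG) shows why any attacker who reproduces the same pid + jiffies sum within one re-key interval gets the same "random" value as the target:

```python
# Toy model of the flaw: the key only changes every REKEY_INTERVAL, so
# while it is stable, equal (pid + jiffies) sums yield equal outputs.
import hashlib

secret_key = b"rekeyed-only-every-five-minutes"

def get_random_int(pid, jiffies):
    data = ("%d" % (pid + jiffies)).encode()
    return int(hashlib.md5(secret_key + data).hexdigest()[:8], 16)

# Target process started with pid 1200 at jiffies 500000:
target = get_random_int(1200, 500000)
# Attacker arranges pid 1100 and waits 100 more jiffies:
attacker = get_random_int(1100, 500100)
print(target == attacker)   # True: same sum, same key, same value
```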
In addition, Fritsch notes that IP sequence numbers may be leaking information that could be used to assist in this attack because it uses the same RNG with the five minute re-key time. He has not looked at whether that is the case.
These two vulnerabilities are fairly substantial and should certainly be fixed. It would seem fairly straightforward to limit access to the /proc files based on the same ptrace() test used for maps. The RNG flaw is more subtle and probably requires a fair amount of thought, but it is clear that the randomness provided is insufficient, at least for ASLR.
Another report that came out of the comment thread demonstrates a misclassification of security flaws that tends to be very annoying to the security community. Misclassifying remotely exploitable flaws as a "denial of service" (due to a kernel crash) is a fairly common thing for distributions and others (knowingly or not) to do. As the blog posting indicates, it irritates some researchers:
That particular vulnerability is long fixed in the kernel, but the whole posting is worth a read for those interested in how a kernel buffer overflow can become a remote root exploit (even bypassing SELinux). It is also indicative of the frustration that some in the security community feel about Linux security. For good or ill, Linux security is not well regarded in that community, to the point where it appears that some, possibly large, amount of Linux kernel security research is not being communicated to the kernel community. Perhaps that communication is occurring but is just "flying under the radar"—something that frequently happens with security discussions—as it would be a tragedy to think that known vulnerabilities are just falling through the cracks.
New vulnerabilities
acpid: denial of service
Package(s): acpid
CVE #(s): CVE-2009-0798
Created: April 28, 2009
Updated: December 7, 2009
Description: From the Ubuntu advisory: It was discovered that acpid did not properly handle a large number of connections. A local user could exploit this and monopolize CPU resources, leading to a denial of service.
apt: incorrect signature checking
Package(s): apt
CVE #(s): CVE-2009-1358
Created: April 27, 2009
Updated: April 29, 2009
Description: From the Debian advisory: CVE-2009-1358: A repository that has been signed with an expired or revoked OpenPGP key would still be considered valid by APT.
clamav: multiple vulnerabilities
Package(s): clamav
CVE #(s): CVE-2009-1241 CVE-2009-1371 CVE-2009-1372
Created: April 24, 2009
Updated: December 8, 2009
Description: From the Mandriva advisory:
Unspecified vulnerability in ClamAV before 0.95 allows remote attackers to bypass detection of malware via a modified RAR archive. (CVE-2009-1241)
The CLI_ISCONTAINED macro in libclamav/others.h in ClamAV before 0.95.1 allows remote attackers to cause a denial of service (application crash) via a malformed file with UPack encoding. (CVE-2009-1371)
Stack-based buffer overflow in the cli_url_canon function in libclamav/phishcheck.c in ClamAV before 0.95.1 allows remote attackers to cause a denial of service (application crash) and possibly execute arbitrary code via a crafted URL. (CVE-2009-1372)
firefox: arbitrary code execution
Package(s): firefox
CVE #(s): CVE-2009-1313
Created: April 28, 2009
Updated: May 13, 2009
Description: From the Red Hat advisory: A flaw was found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code as the user running Firefox.
freetype: arbitrary code execution
Package(s): freetype
CVE #(s): CVE-2009-0946
Created: April 28, 2009
Updated: December 7, 2009
Description: From the Ubuntu advisory: Tavis Ormandy discovered that FreeType did not correctly handle certain large values in font files. If a user were tricked into using a specially crafted font file, a remote attacker could execute arbitrary code with user privileges.
libdbd-pg-perl: multiple vulnerabilities
Package(s): libdbd-pg-perl
CVE #(s): CVE-2009-0663 CVE-2009-1341
Created: April 29, 2009
Updated: December 28, 2009
Description: The libdbd-pg-perl package suffers from a buffer overflow vulnerability (CVE-2009-0663) and a memory leak (CVE-2009-1341) which could enable denial-of-service attacks.
libmodplug: integer overflow
Package(s): libmodplug
CVE #(s): CVE-2009-1438
Created: April 28, 2009
Updated: December 4, 2009
Description: From the CVE entry: Integer overflow in the CSoundFile::ReadMed function (src/load_med.cpp) in libmodplug before 0.8.6, as used in gstreamer-plugins and other products, allows context-dependent attackers to execute arbitrary code via a MED file with a crafted (1) song comment or (2) song name, which triggers a heap-based buffer overflow.
mahara: insufficient input sanitization
Package(s): mahara
CVE #(s): CVE-2009-0664
Created: April 23, 2009
Updated: April 29, 2009
Description: Mahara has an insufficient input sanitization vulnerability. From the Debian alert: It was discovered that mahara, an electronic portfolio, weblog, and resume builder, is prone to cross-site scripting (XSS) attacks because of missing input sanitization of the introduction text field in user profiles and any text field in a user view.
mod_jk: information disclosure
Package(s): mod_jk
CVE #(s): CVE-2008-5519
Created: April 24, 2009
Updated: January 12, 2010
Description: From the Red Hat advisory: An information disclosure flaw was found in mod_jk. In certain situations, if a faulty client set the "Content-Length" header without providing data, or if a user sent repeated requests very quickly, one user may view a response intended for another user.
mysql: cross-site scripting
Package(s): mysql
CVE #(s): CVE-2008-4456
Created: April 29, 2009
Updated: March 8, 2010
Description: From the Debian advisory: Thomas Henlich reported that the MySQL commandline client application did not encode HTML special characters when run in HTML output mode (that is, "mysql --html ..."). This could potentially lead to cross-site scripting or unintended script privilege escalation if the resulting output is viewed in a browser or incorporated into a web site.
prewikka: world readable password
Package(s): prewikka
CVE #(s): none assigned
Created: April 28, 2009
Updated: April 29, 2009
Description: From the Fedora advisory: The permissions on the prewikka.conf file are world readable and contain the sql database password used by prewikka. This update makes it readable just by the apache group.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current 2.6 development kernel remains 2.6.30-rc3; no new prepatches have been released over the last week. Changes continue to flow into the mainline repository; they are almost all fixes, but there was also the restoration of Tux as the kernel mascot.
The current stable 2.6 kernel is 2.6.29.2, released on April 27 with about 100 patches. "There are a lot of fixes in this release touching all over the tree. At least a few have possible security impact (e.g. af_rose, agp, capability fs_mask, splice/ocfs2). As usual, you're encouraged to upgrade."
Kernel development news
Quotes of the week
But then netfilter was blessed with RCU and the performance was divine, but alas there were those that suffered for trying to replace their many rules one at a time.
So now RCU must be vanquished from the scene, and better chastity belts be placed upon this valuable asset most dear. The locks that were but one are now replaced by one per suitor.
The repair was made after much discussion involving Eric the wise, and Linus the foul. With flowers springing up amid the thorns some peace has finally prevailed and all is soothed.
USB and fast booting
The changes that are being made for a faster-booting Linux have generally been welcomed, but when they lead to an apparent regression, complaints will be heard. That situation arose recently when Jeff Garzik reported a regression that caused one of his systems to no longer boot. Because of some changes made to the way USB initializes, the system no longer recognized his disks in time to mount the root filesystem from them. As it turns out, the problem is not limited to disks, nor is it new; it is a longstanding race condition that previously was being "won" by most hardware, but that same hardware is often losing the race now.
Garzik had bisected the problem to a particular commit made back in September of 2008. Instead of sleeping for 100ms as part of the initialization of each USB root hub, the new code uses the delayed work mechanism to schedule the next initialization step 100ms in the future. For kernels which had the USB code built-in, this would allow the boot thread to do other work, rather than block waiting for these delays. It had a rather positive impact on boot speed, with patch author Alan Stern reporting:
From Garzik's perspective, the problem is that this system booted successfully with every kernel version until 2.6.28. The immediate suggestion was to use the rootdelay kernel boot option which will delay the boot process for the given number of seconds before trying to mount the root filesystem. That did not sit very well with Garzik, and he asked: "When did regressions become an acceptable tradeoff for speed?"
As it turns out, Garzik had just been "lucky" before; he could have run into this problem on earlier kernels with different hardware, as Greg Kroah-Hartman points out: "What happens when you buy a new box with more USB host controllers and a faster processor? Same problem." The underlying issue is specific to USB, as the old initialization waited 100ms per USB bus (i.e. root hub) synchronously, so a system with five hubs would effectively wait 500ms for the first to initialize and enumerate the devices attached. The new code does those same initializations in parallel.
While it is relatively uncommon to have USB root filesystems, it is far from unheard of. Embedded systems are a fairly likely candidate, due to cost and form factor issues, as Alan Cox explained. Multiple distributions also have support for running completely from a USB device, typically a USB flash drive.
But, as Garzik and others point out, users that upgrade their kernels (or distributions who do so), but don't add in a rootdelay option, risk having systems that cannot boot. USB is fundamentally different than other buses, however, because there is no way to know when the enumeration of devices on a particular hub has been completed. Mark Lord questioned the explanation, noting: "SATA drives also take variable amounts of time to 'show up' at boot." But as Arjan van de Ven explained, there is a significant difference:
It turns out that the same problem in a slightly different guise shows up for embedded devices that use USB consoles. David VomLehn has been working on a patch to wait for USB consoles to become available. Because embedded devices often have USB consoles, but only for development and debugging, a long delay waiting for a console that is unlikely to show up in the majority of cases is undesirable. But, because it is impossible to know that all USB devices have reported in, some kind of delay is inevitable. VomLehn's mechanism would delay up until a timeout specified in the kernel boot parameters, but, unlike rootdelay, would wake up early as soon as a console device was detected.
As VomLehn notes, the problem goes even further than that, affecting USB network devices needed at boot time as well. Discussion on various versions of his patch also pointed out that similar problems exist for other buses. As boot parallelization gets better—and more pervasive—more of these kinds of problems are going to be discovered. A more general solution for devices required at boot time needs to be found as van de Ven describes:
For root fs there's some options, and I have patches to basically retry on fail. (The patches have a bug and I don't have time to solve it this week, so I'm not submitting them) For other devices it is hard. Realistically we need hotplug to work well enough so that when a device shows up, we can just hook it up when it does.
So far, the problems have just been identified and discussed. Workarounds like rootdelay have been mentioned, but that only "solves" one facet of the problem. Distributions are, or will be, shipping 2.6.29 kernels in their upcoming releases; one hopes they have already dealt with the issue, or there may be a number of rather puzzled users with systems that don't boot. It would seem important to address the problems, at least for USB storage, as part of 2.6.31.
On the value of static tracepoints
As has been well publicized by now, the Linux kernel lacks the sort of tracing features which can be found in certain other Unix-like kernels. That gap is not the result of a want of trying. In the past, developers trying to put tracing infrastructure into the kernel have often run into a number of challenges, including opposition from their colleagues who do not see the value of that infrastructure and resent its perceived overhead. More recently, it would seem that the increased interest in tracing has helped developers to overcome some of those objections; an ongoing discussion shows, though, that concerns about tracing are still alive and have the potential to derail the addition of tracing facilities to the kernel.

Sun's DTrace is famously a dynamic tracing facility, meaning that it can be used to insert tracepoints at (almost) any location in the kernel. But the Solaris kernel also comes with an extensive and well-documented set of static tracepoints which can be activated by name. These tracepoints have been placed at carefully-considered locations which facilitate investigations into what the kernel is actually doing. Many real-world DTrace scripts need only the static tracepoints and do no dynamic tracepoint insertion at all.
There is clear value in these static tracepoints. They represent the wisdom of the developers who (presumably) are the most familiar with each kernel subsystem. System administrators can use them to extract a great deal of useful information without having to know the code in question. Properly-placed static tracepoints bring a significant amount of transparency to the kernel. As tracing capabilities in Linux improve, developers naturally want to provide a similar set of static tracepoints. The fact that static tracing is reasonably well supported (via FTrace) in mainline kernels - with more extensive support available via SystemTap and LTTng - also encourages the creation of static tracepoints. As a result, there have been recent patches adding tracepoints to workqueues and some core memory management functions, among others.
Digression: static tracepoints
As an aside, it's worth looking at the form these tracepoints take; the design of Linux tracepoints gives a perspective on the problems they were intended to solve. As an example, consider the following tracepoint for the memory management code, which reports on page allocations. The declaration of the tracepoint looks like this:
```c
#include <linux/tracepoint.h>

TRACE_EVENT(mm_page_allocation,

	TP_PROTO(unsigned long pfn, unsigned long free),

	TP_ARGS(pfn, free),

	TP_STRUCT__entry(
		__field(unsigned long, pfn)
		__field(unsigned long, free)
	),

	TP_fast_assign(
		__entry->pfn = pfn;
		__entry->free = free;
	),

	TP_printk("pfn=%lx zone_free=%ld",
		__entry->pfn,
		__entry->free)
);
```
That seems like a lot of boilerplate for what is, in a sense, a switchable printk() call. But, naturally, there is a reason for each piece. The TRACE_EVENT() macro declares a tracepoint - this one is called mm_page_allocation - but does not yet instantiate it in the code. The tracepoint has arguments which are passed to it at its actual instantiation (which we'll get to below); they are declared fully in the TP_PROTO() macro and named in the TP_ARGS() macro. Essentially, TP_PROTO() provides a function prototype for the tracepoint, while TP_ARGS() looks like a call to that tracepoint.
These values are enough to let the programmer place a tracepoint in the code with a line like:
```c
trace_mm_page_allocation(page_to_pfn(page), zone_page_state(zone, NR_FREE_PAGES));
```
This tracepoint is really just a known point in the code which can have, at run time, one or more function pointers stored into it by in-kernel tracing utilities like SystemTap or Ftrace. When the tracepoint is enabled, any functions stored there will be called with the given arguments. In this case, enabling the tracepoint will result in calls whenever a page is allocated; those calls will receive the page frame number of the allocated page and the number of free pages remaining as parameters.
As can be seen in the declaration above, there's more to the tracepoint than those arguments; the rest of the information in the tracepoint declaration is used by the Ftrace subsystem. Ftrace has a couple of seemingly conflicting goals; it wants to be able to quickly enable human-readable output from a tracepoint with no external tools, but the Ftrace developers also want to be able to export trace data from the kernel quickly, without the overhead of encoding it first. And that's where the remaining arguments to TRACE_EVENT() come in.
When properly defined (the magic exists in a bunch of header files under kernel/trace), TP_STRUCT__entry() adds extra fields to the structure which represents the tracepoint; those fields should be capable of holding the binary parameters associated with the tracepoint. The TP_fast_assign() macro provides the code needed to copy the relevant data into that structure. That data can, with some changes merged for 2.6.30, be exported directly to user space in binary format. But, if the user just wants to see formatted information, the TP_printk() macro gives the format string and arguments needed to make that happen.
The end result is that defining a tracepoint takes a small amount of work, but using it thereafter is relatively easy. With Ftrace, it's a simple matter of accessing a couple of debugfs files. But other tools, including LTTng and SystemTap, are also able to make use of these tracepoints.
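As a concrete (and hypothetical) illustration of those debugfs files, the snippet below enables the article's mm_page_allocation tracepoint and reads back the formatted trace. The /sys/kernel/debug mount point and the kmem subsystem directory are assumptions, not details from the article, and the snippet must run as root.

```python
# Sketch of "accessing a couple of debugfs files" to use a tracepoint
# through Ftrace; paths and the kmem subsystem name are assumptions.
TRACING = "/sys/kernel/debug/tracing"
EVENT = "kmem/mm_page_allocation"

with open("%s/events/%s/enable" % (TRACING, EVENT), "w") as f:
    f.write("1\n")                       # switch the tracepoint on

with open("%s/trace" % TRACING) as f:    # lines formatted via TP_printk()
    for line in f.readlines()[:20]:
        print(line.rstrip())

with open("%s/events/%s/enable" % (TRACING, EVENT), "w") as f:
    f.write("0\n")                       # and back off again
```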
The disagreement
Given all the talk about tracing in recent years, there is clearly demand for this sort of facility in the kernel. So one might think that adding tracepoints would be uncontroversial. But, naturally enough, it's not that simple.
The first objection that usually arises has to do with the performance
impact of tracepoints, which are often placed in the most
performance-critical code paths in the kernel. That is, after all, where
the real action happens. So adding an unconditional function call to
implement a tracepoint is out of the question; even putting an if
test around it is problematic. After literally years of work, the
developers came up with a scheme involving run-time code patching that
reduces the performance cost of an inactive tracepoint to, for all
practical purposes, zero. Even the most performance-conscious developers
have stopped fretting about this particular issue. But, of course, there
are others.
A tracepoint exists to make specific kernel information available to user space. So, in some real sense, it becomes part of the kernel ABI. As an ABI feature, a tracepoint becomes set in stone once it's shipped in a stable kernel. There is not a universal agreement on the immutability of kernel tracepoints, but the simple fact is that, once these tracepoints become established and prove their usefulness, changing them will cause user-space tracing tools to break. That means that, even if tracepoints are not seen as a stable ABI the way system calls are, there will still be considerable resistance to changing them.
Keeping tracepoints stable when the code around them changes will be a challenge. A substantial subset of the developer community will probably never use those tracepoints, so they will tend to be unaware of them and will not notice when they break. But even a developer who is trying to keep tracepoints stable is going to run into trouble when the code evolves to the point that the original tracepoint no longer makes sense. One can imagine all kinds of cruft being added so that a set of tracepoints gives the illusion of a very different set of decisions than is being made in a future kernel; one can also imagine the hostile reception any such code will find.
The maintenance burden associated with tracepoints is the reason behind Andrew Morton's opposition to their addition. With regard to the workqueue tracepoints, Andrew said:
We keep on adding all these fancy debug gizmos to the core kernel which look like they will be used by one person, once. If that!
Needless to say, the tracing developers see the code as being more widely useful than that. Frederic Weisbecker gave a detailed description of the sort of debugging which can be done with the workqueue tracepoints. Ingo Molnar's response appears to be an attempt to hold up the addition of other types of kernel instrumentation until the tracepoint issue is resolved. Andrew remains unconvinced, though; it seems he would rather see much of this work done with dynamic tracing tools instead.
As of this writing, that's where things stand. If these tracepoints do not get into the mainline, it is hard to see developers going out and creating others in the future. So Linux could end up without a set of well-defined static tracepoints for a long time yet - though it would not be surprising to see the enterprise Linux vendors adding some to their own kernels. Perhaps that is the outcome that the development community as a whole wants, but it's not clear that this feeling is universal at this time. If, instead, Linux is going to end up with a reasonable set of tracepoints, the development community will need to come to some sort of consensus on which kinds of tracing instrumentation are acceptable.
KSM tries again
Back in November, LWN examined the KSM (kernel shared memory) patch. KSM was written to address a problem encountered on systems running virtualized guests: such systems can have large numbers of pages holding identical data, but the kernel has no way to let guests share those pages. The KSM code scans through memory to find pairs of pages holding identical data; when such pairs are found, they are merged into a single page mapped into both locations. The pages are marked copy-on-write, so the kernel will automatically separate them again should one process modify its data.

There were some concerns about the intended purpose of this patch, but it was soon made clear that KSM can help the system to save significant amounts of memory. But KSM was not merged as a result of two other problems. One of them, discussed mostly behind closed doors, appears to be concerns about the use of SHA1 hashes to compare pages. If an attacker could create hash collisions, he might just be able to inject his own data (or code) into processes belonging to other users and/or virtual machines. The other problem had to do with a different kind of attacker: VMWare holds a patent to an algorithm which looks quite similar to the method used by the early KSM patches. There is evidence that this patent could be overturned on prior art, but that is still a battle that nobody wants to fight.
KSM disappeared from view for a while after those issues came to light, but, more recently, new versions of the KSM patches have been posted for review. A quick look at the code makes it clear that both of these concerns have been addressed - and, in fact, that the KSM developers were able to kill both birds with the same stone. It's all a matter of doing away with the hashing of pages.
Patent 6,789,156 is not exactly light reading; it has a full 74 claims. Most of the independent claims have one thing in common, though: they include the calculation of a hash value to find identical pages in the system. If the KSM code were to avoid hashing pages, those claims of the patent would clearly not read against it. And, as described above, the use of hashing also created some security worries. So it must have seemed clear to the KSM developers (and any lawyers they may have talked to) that the hash needed to go.
The current KSM patches have replaced the hash table with two separate red-black trees. Pages tracked by KSM are initially stored in the "unstable tree"; the term "unstable" means that KSM considers their contents to be volatile. Placement in the tree is determined by a simple memcmp() of the page's contents; essentially, the page is treated as containing a huge number and sorted accordingly. The unstable tree is suitable for finding pages with duplicate contents; a relatively quick traversal of the tree will turn up any merge candidates that exist.
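For readers who want to see that comparison in concrete form, here is a minimal userspace sketch of the idea. It is not the actual KSM code (which uses the kernel's red-black tree infrastructure and its own page-handling machinery); it simply treats each page as one huge number, uses memcmp() as the ordering function for a content-keyed binary tree, and reports when a page with identical contents is already present:

```c
/*
 * Illustrative sketch only: KSM keys its unstable tree on raw page
 * contents, ordered by memcmp().  This userspace toy shows the same
 * idea with a plain binary search tree instead of the kernel's
 * red-black tree; none of these names come from the real KSM code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct node {
	unsigned char *page;		/* points at PAGE_SIZE bytes */
	struct node *left, *right;
};

/* Treat each page as a single huge PAGE_SIZE-byte number. */
static int page_cmp(const unsigned char *a, const unsigned char *b)
{
	return memcmp(a, b, PAGE_SIZE);
}

/*
 * Look for a page with identical contents; if none is found, insert
 * this page as a new candidate.  Returns the existing match (a merge
 * candidate) or NULL if the page was newly inserted.
 */
static struct node *tree_search_insert(struct node **root, unsigned char *page)
{
	while (*root) {
		int cmp = page_cmp(page, (*root)->page);

		if (cmp == 0)
			return *root;	/* identical contents found */
		root = (cmp < 0) ? &(*root)->left : &(*root)->right;
	}
	*root = calloc(1, sizeof(struct node));
	(*root)->page = page;
	return NULL;
}

int main(void)
{
	struct node *root = NULL;
	unsigned char *a = calloc(1, PAGE_SIZE);
	unsigned char *b = calloc(1, PAGE_SIZE);	/* same contents as a */
	unsigned char *c = calloc(1, PAGE_SIZE);

	c[100] = 0xff;					/* differs from a and b */
	printf("a: %s\n", tree_search_insert(&root, a) ? "duplicate" : "new");
	printf("b: %s\n", tree_search_insert(&root, b) ? "duplicate" : "new");
	printf("c: %s\n", tree_search_insert(&root, c) ? "duplicate" : "new");
	return 0;
}
```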
It's worth noting that KSM does not place every page it scans in the unstable tree. If the contents of a page change over the course of one memory scanning cycle, the page will not really be a good candidate for sharing anyway. So pages which are seen to change are not represented in the unstable tree. The unstable tree is also dumped and rebuilt from the beginning after each scan cycle. That deals with the problem of pages which, as a result of modifications, find themselves in the wrong location in the tree. The nature of red-black trees means that search and insertion operations are almost the same thing, so there is little real cost to rebuilding the unstable tree from the beginning every time.
The other pages which are not found in the unstable tree are those which have actually been merged with duplicates. Since shared pages are marked read-only, KSM knows that their contents cannot change. Those pages are put into a separate "stable tree." The stable tree is also a red-black tree, but, since pages cannot become misplaced there, it need not be rebuilt regularly. Once a page goes into the stable tree, it stays there until all users have either modified or unmapped it.
The resulting system clearly works. Dropping the hash may impose a cost in the form of slightly higher CPU and memory use; there have been no benchmarks posted which would show the difference. But there is no cost on systems which do not use KSM at all, and, in any case, avoiding the potential problems associated with using hash tables to identify pages with identical contents will be worth the cost. At this point, comments on the KSM code are mostly concerned with relatively small details. It could well be that this code will be ready for inclusion into the 2.6.31 kernel.
(Postscript: above, your editor noted that "most" of the independent claims in the VMWare patent required the use of a hashing mechanism. There are, in fact, a few claims which lack that requirement, but they replace it with one or two others. Some claims cover the use of copy-on-write pages, but they all explicitly say that this technique is used on pages with a "relatively high probability of impending modification." But there is little point in sharing such pages at all; KSM, by leaving them out of the unstable tree, ignores those pages entirely. The remaining claims describe partitioning memory into "classes," which is not done in KSM.)
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Networking
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Can you hear me now?
The Fedora 11 preview release announcement went out on April 28. Around the world, Fedora users responded by downloading, testing, pondering the ext4 filesystem, and generally feeling a little "jaunty" themselves. One Fedora developer, though, had a moderately strange response which might be a little hard to understand out of its full context:
What do you mean, it won't go any louder? The _last_ release used to go louder.
Anybody who has been sufficiently distant from the disturbance on the fedora-devel mailing list can be forgiven for wondering what is going on here. In short: changes to the PulseAudio volume control widget shipped in Fedora 11 have made it hard for some users to get sound out of their systems in the manner to which they have become accustomed, and they're not happy about it.
The longer version goes something like this. The low-level ALSA sound system provides a great deal of control over the underlying hardware, exposing all of the knobs supported there. Volume-control applications have typically made all of those knobs available to the user. That sounds like the proper way to give users full control over their hardware, but, as anybody who has pulled up the mixer on moderately-complicated hardware knows, the result can be an unbelievable mess of confusing sliders. See this image for an example. There is a clear usability problem here.
The solution, as found in Fedora 11 (and, ultimately, GNOME 2.26), is to reduce the number of sliders slightly. OK, more than slightly: there is now a single "output volume" slider and a single "input volume" slider. The user has a single knob to play with, and PulseAudio somehow makes everything else work right in some magic, behind-the-scenes manner that need not be worried about. And, in fact, on a reasonably normal system, the "just works" factor is pretty high. If one is trying to get normal audio output from a number of applications, the single volume control does the right thing. Many users will, your editor suspects, never miss all those other sliders.
But the Fedora user base goes beyond "many users." And some of Fedora's testing users are finding that they can no longer make things work. Sometimes the behind-the-scenes magic doesn't get things right for specific hardware, and sometimes these users are just doing something strange that PulseAudio developer Lennart Poettering didn't envision. These users have, at times, filed bugs noting a regression in Fedora 11; they have been dismayed to see those bugs closed with a "not a bug" or "won't fix" status. To these users, the behavior of Fedora 11 is, indeed, a regression, and they are not happy about it.
It must be said that Lennart has, by virtue of a strong "not my problem" attitude, made the problem worse. His responses tend to look like this:
What he generally tells users who are unable to get the behavior they need is that they should drop down to alsamixer and fix things there. But users, strangely, dislike the idea of moving to a curses-based tool to gain access to functionality that was once part of their desktop. And, of course, just running "alsamixer" yields a beautiful, 24x80 rendering of, yes, the single PulseAudio output control; one must first figure out the proper command line options to get alsamixer to talk to the system at the right level. It just doesn't seem like much of a solution.
In the middle of this, the Fedora engineering steering committee (FESCo) held one of its regular meetings. The terse meeting summary includes the following:
This is a solution which has pleased nobody. Lennart thinks it's a big mistake, of course. Others don't like last-minute changes to the Fedora 11 feature set. And the people who are unhappy with the current state of affairs really would rather not have to go digging through the menus to find an emergency backup volume control which does what they really need. Many Fedora users, it is feared, will just see that functionality has disappeared and won't know where to go to find it again.
So what is the right solution? It seems pretty clear that the "one slider fits all" approach will never work for everybody. David Woodhouse expresses it well:
On the other hand, a general return to the "ALSA mixer of doom" (David's term) is clearly not in the cards. Presenting users with hundreds of sliders is, in most cases, not going to leave them feeling more empowered. The simplification work which has been done in the volume control application is clearly needed.
One suggestion which has come out of this is that the volume control should have an "expert mode" which makes more sliders available. That would allow those sliders to remain hidden for the (presumed) majority which will never want to adjust them, but it also makes them available in the obvious place for users who do need to go deeper. This solution, too, fails to please everybody, but it may please enough of the people involved to, eventually, cause the noise of this debate to subside a bit. Because, alas, there is no slider to turn that particular noise down, even in expert mode.
New Releases
Ubuntu 9.04 released
The Ubuntu 9.04 ("Jaunty Jackalope") release is out. There is, of course, no end of new stuff in this release; see the graphical overview for an introduction.
Announcing Ubuntu 9.04 for ARM
The Ubuntu team has announced Ubuntu 9.04 Desktop edition for ARM processors. "This first, community-supported ARM release of Ubuntu supports the imx51, ixp4xx, and versatile subarchitectures, allowing use on a wide variety of hardware and virtual environments. Desktop installation images are available for the i.MX 51 Babbage development board, and netboot installation images for other subarchitectures."
Ubuntu Studio 9.04
The Ubuntu Studio team has announced its fifth release: Ubuntu Studio 9.04. "With this release, which you can download in a 1.2GB DVD, Ubuntu Studio offers a pre-made selection of packages, targeted at audio, video and graphics users. Ubuntu Studio greatly simplifies the creation of Linux-based multimedia workstations."
openSUSE 11.2 Milestone 1 Released
The first testing release of openSUSE 11.2, Milestone 1, is out. It contains a number of very recent software releases such as: Linux kernel 2.6.29, KDE 4.2.2, GNOME 2.26, OpenOffice.org 3.1 beta 4, Mono 2.4, and more. "This is a milestone release. It's for openSUSE contributors who want to use the release for testing and development (or want a sneak preview of the 11.2 release), but it is not for production use." Click below for the full announcement.
The Fedora 11 preview release is available
The Fedora 11 preview release is out. "This is the Fedora 11 Preview release, we're just a short time from releasing the full shebang. Therefore we need the most testing we can possibly get on this one." There's a reasonably advanced set of release notes available for Fedora 11; the final release is due on May 26.
Distribution News
Debian GNU/Linux
Linux-libre for Debian Lenny
The Debian project now has Linux-libre packages available for Lenny (Debian GNU/Linux 5.0). Linux-libre packages remove the binary-only firmware which is non-free according to the Debian Free Software Guidelines (DFSG).
Fedora
Reminder: Fedora Board IRC public meeting
The Fedora Advisory Board is holding its monthly public meeting on Tuesday, May 5, 2009, at 1800 UTC on Freenode IRC. Join #fedora-board-questions to discuss topics and post questions. Join #fedora-board-meeting to see the Board's conversation. "The moderator will voice people from the queue, one at a time, in the #fedora-board-meeting channel. We'll limit time per voice as needed to give everyone in the queue a chance to be heard."
Fedora Board Recap 2009-04-22
Click below for a brief recap of the April 22, 2009 meeting of the Fedora Advisory Board. Topics include Future Security Response Plans, What is Fedora?, and Status of Trademark Followup.
SUSE Linux and openSUSE
openSUSE Board Meeting Minutes, April 8, 2009
Click below for the minutes of the April 8, 2009 meeting of the openSUSE board. Topics include Trademark guidelines, openSUSE Foundation, openSUSE conference, Membership Advantages, and more.
Ubuntu family
Karmic Koala open for development
Karmic Koala (Ubuntu 9.10) is open for general development. "Please remember to wear your seat-belt, and remember that bugs in the rear-view mirror may be closer than they appear. Automatic syncs from Debian will begin shortly."
New Distributions
wattOS
wattOS is designed to be a lightweight but fully featured distribution using less energy. The OS will run on low power computers and recycled systems. wattOS Beta 2 combines OpenBox with an Ubuntu mini install. "We are actively looking for developers and supporters in our efforts to create a low power operating system that can be used personally and commercially. contact us and watch for updates to get involved."
Distribution Newsletters
Debian Misc developer news (#15)
In this issue of Misc developer news you'll find Updates of major desktop environments, Alioth updated to lenny and FusionForge 4.7, wiki.debian.org migrated to MoinMoin 1.7, Tidbits from the wanna-build team, RFH: Removing spam from the listarchive and Groupware discussion list.
DistroWatch Weekly, Issue 300
The DistroWatch Weekly for April 27, 2009 is out. "Naturally, the biggest news event of the week was the release of Ubuntu's latest version - 9.04 Jaunty Jackalope. Reviews have started pouring in and users are busy upgrading. How well will the latest version be received? And does the success of Ubuntu mean, as some are beginning to wonder, that Debian GNU/Linux is no longer relevant? This week's feature article provides some answers in an interesting comparison between Xubuntu 9.04 and Debian 5.0.1 with Xfce to see how well each performs. We also post links to an interview with Ubuntu founder Mark Shuttleworth, while Tux Radar takes a look at the last ten releases of the world's most popular desktop Linux distro. Of course that's not the only thing that happened this past week - Debian has announced the availability of Lenny kernels with no closed-source firmware, the Fedora community has received up-to-date images of version 10, and the openSUSE online build service looks set to receive support for a Git version control backend, thanks to a Google Summer of Code project. Happy reading!"
Fedora Weekly News #173
The Fedora Weekly News for the week ending April 26, 2009 is out. "This week's issue starts with a welcome double dose of FedoraPlanet coverage, providing news and views from around the Fedora community. Our Ambassadors beat shares the LinuxFest Northwest experience. Developments covers the controversy over "PulseAudio: A Hearty and Robust Exchange of Ideas" and in Translation word comes of Fedora 11 Release Notes proofreading readiness. Configuration conflagration of Wacom graphics tablets is revealed in the Art beat. The Fedora Weekly Webcomic divines an unbreakable future. We're brought up to date with SecurityAdvisories for Fedora 9 and 10, and the Virtualization beat completes the issue with updates on virtualization status in Fedora, with specifics on a new libvirt 0.6.3 release, a new libguestfs 1.0.10 release, and KVM migration support in Fedora 11, to name but a few!"
The Mint Newsletter - issue 82
The Mint Newsletter for April 29, 2009 covers the Mint 7 roadmap and more.
Ubuntu Weekly Newsletter #139
The Ubuntu Weekly Newsletter for the week ending April 25, 2009 is out. "In this issue we cover: Ubuntu 9.04 Released, Announcing Ubuntu 9.04 for ARM, Ubuntu Open Week Schedule, MOTU Council News, German LoCo team launches new portal, Ubuntu Live in Aalborg, Chicago Style Release Party, Rocked in Finger Lakes, Ubuntu-CL: FLiSoL, New Ubuntu US Teams Website, Limited edition Jaunty Jackalope T-shirts, Announcing Ubuntu Gaming Team, Spread Ubuntu to go live soon, Shuttleworth: Oracle's Sun buy validates open source, Ubuntu Podcast #25: Dustin Kirkland Interview, Full Circle Magazine #24, and much, much more!"
Newsletters and articles of interest
Tux Radar looks at Ubuntu's Jaunty release
Tux Radar has published three articles on Ubuntu and the recent release of 9.04 (Jaunty Jackalope). First, some history with The road to Jaunty: a look back at Ubuntu's history. Next, Shuttleworth on Jaunty, netbooks and more, an interview with Mark Shuttleworth. Finally, there is the Ubuntu 9.04 frankenreview for a test drive of Ubuntu 9.04.
The Perfect Desktop - Ubuntu 9.04 (Jaunty Jackalope) (HowtoForge)
HowtoForge sets up a desktop system with the latest Ubuntu release. "This tutorial shows how you can set up an Ubuntu 9.04 (Jaunty Jackalope) desktop that is a full-fledged replacement for a Windows desktop, i.e. that has all the software that people need to do the things they do on their Windows desktops. The advantages are clear: you get a secure system without DRM restrictions that works even on old hardware, and the best thing is: all software comes free of charge."
Distribution reviews
Novell's SLES 11 is packed to the gills and keeps moving at a decent clip (Network World)
Network World has a fairly extensive review of SUSE Linux Enterprise Server 11, which was released last month. The review looks at new features added, as well as doing a bit of benchmarking to compare SLES 11 with 10.2. "And, finally, Novell has produced a YaST Security module, which consolidates a raft of formerly separate settings (file permissions, and login restrictions parameters, as a few examples) into a single and comprehensive (and finally usable) user interface. For instance, during testing we were able to make policy settings changes, and form user folder permissions without having to leap back and forth between formerly disparate user interfaces."
Ubuntu Users Looking a Bit Jaunty Today (Linux Journal)
Justin Ryan takes a look at the newly released Ubuntu 9.04 Jaunty Jackalope. "Prime among the features being touted by the Ubuntu camp are improvements in speed, perhaps rather fitting for a release named for the jackalope. Boot speed is reportedly greatly improved, as low as twenty-five seconds in some cases. Hibernation and suspend/resume have been enhanced, including immediate availability post-hibernation. Those we spoke to noted an impressive improvement in boot speed, significant even for virtual machines, as well as dramatic speed improvements in finding and connecting to wireless networks."
Page editor: Rebecca Sobol
Development
Reopening iFolder
Novell created the file sharing and synchronization tool iFolder years before it began the acquisitions that transformed the company into a major open source vendor. After it entered the Linux market, Novell placed the iFolder code under the GPL, but by 2007 the project was receiving little attention. Commercial packages continued to be available as part of Novell's Kablink groupware product, but source code releases languished. That made it a surprise when Novell resurrected iFolder in April of this year, posting new client and server packages for Linux, and clients for Windows and Mac OS X.
iFolder allows you to connect a local folder on your computer to a remote synchronization server. Every few seconds, the iFolder program on your computer detects whether any of the files have changed, and, if they have, uploads the updated files in the background. You can continue to work offline (such as while traveling), and iFolder will re-sync with the server the next time you reconnect. Plus, businesses and workgroups can use iFolder to share the same folder with multiple users, creating an automatically synchronized shared work space that is also backed up with remote storage.
The Comeback Kid
Although the continued availability of the source code is an oft-cited advantage to free software, it is still rare for a dormant project to suddenly return to life — even more so when it is a commercially-sponsored project and the economy is down. Novell's iFolder project leader Brent McConnell said that a lot of voices inside the company pushed for the project to be revived, both because they believe it is valuable to the open source community and because it is a viable product for enterprise customers. "We think that it's a superior tool," he said, adding, "We also want to move it forward as an open project and see where the community takes it."
One of the voices inside Novell championing iFolder's cause belonged to OpenSUSE community manager Joe "Zonker" Brockmeier. In addition to believing in the value of the technology itself, Brockmeier said that he and others in the open source community at Novell felt it was very important that the company devote resources to an open source iFolder effort because not doing so would mean going back on the commitment made when the project was opened up.
McConnell echoed that sentiment, noting that Novell took criticism for not releasing iFolder source code as quickly as many would have liked, and admitting that such criticism was probably justified. However, he added, Novell has committed resources to managing iFolder just as it has to other community open source projects like OpenSUSE. "I hope that the community sees this as a strong signal that we're committed to the project and building community around iFolder."
The basics
iFolder uses a client-server model to replicate a shared folder over the network. The replication can be between a single client and the server, thus serving as a remote backup, or it can share the same folder between multiple clients, enabling multi-user collaboration. In both cases, the system automatically tracks changes to the folder's contents and transparently synchronizes them, resolving conflicting change sets from different clients on the server side. In addition, unlike network-mounted storage, iFolder's client-side contents are locally stored in the client's filesystem, so editing, new file creation, and file deletion continue to function in disconnected mode.
The iFolder server runs on top of Apache, and can optionally use SSL encryption for client-server transfers. User accounts are managed through the server, with support for access control lists on each iFolder's contents, storage quotas per account, and integration of user accounts with an LDAP server.
The client side code is a small Mono application that handles authentication to the server, computes hashes of shared files to detect changes, and transfers changed files between the client and the server. The actual client-side files are local, and both their location in the filesystem and the underlying disk filesystem used are immaterial. Users can also connect to the server via a web interface and access the server's copy of their files without using the iFolder client.
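To illustrate the general approach (and only that; iFolder's real client is a Mono application with its own hashing scheme and synchronization protocol, and none of the names below come from its code), a change detector of this kind can be sketched as a loop that hashes each file's contents, compares the result with the hash recorded after the last synchronization, and queues the file for upload when the two differ:

```c
/*
 * Illustrative sketch only, not iFolder code: hash-and-compare change
 * detection.  FNV-1a is used here purely as a cheap stand-in for
 * whatever digest a real synchronization client would compute.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t hash_file(const char *path)
{
	uint64_t h = 14695981039346656037ULL;	/* FNV-1a offset basis */
	FILE *f = fopen(path, "rb");
	int c;

	if (!f)
		return 0;			/* signal "unreadable" to the caller */
	while ((c = fgetc(f)) != EOF) {
		h ^= (uint64_t)(unsigned char)c;
		h *= 1099511628211ULL;		/* FNV-1a prime */
	}
	fclose(f);
	return h;
}

struct entry {					/* one tracked file */
	const char *path;
	uint64_t last_synced_hash;		/* recorded after the previous sync */
};

int main(void)
{
	/* Hypothetical tracked files; a real client would scan the iFolder. */
	struct entry files[] = {
		{ "notes.txt", 0 },
		{ "report.odt", 0 },
	};

	for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
		uint64_t now = hash_file(files[i].path);

		if (now == 0)
			continue;		/* unreadable this cycle: skip */
		if (now != files[i].last_synced_hash)
			printf("%s changed, queue for upload\n", files[i].path);
		files[i].last_synced_hash = now;	/* remember for the next cycle */
	}
	return 0;
}
```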
April's release of iFolder is designated version 3.7.2. The server package is available for OpenSUSE 10.3 and SUSE Linux Enterprise Server (SLES) 10 in 32-bit and 64-bit editions for both. It requires Apache 2, OpenSSL, and Mono 1.2.6. The client package is also available for OpenSUSE 10.3 and SLES 10 in 32-bit and 64-bit editions, as well as for Windows XP, 32-bit and 64-bit Windows Vista, Mac OS X 10.4.11 and Mac OS X 10.5. The Windows builds work with Microsoft's .NET, while the Linux and Mac builds require Novell's Mono version 1.2.6.
iFolder compared to other systems
iFolder is not the only distributed or collaborative storage solution available for Linux, of course, but several features distinguish it from alternative lower-level systems and commercial products.
Basic network file systems like Samba and NFS are designed to work over a LAN. WebDAV, on the other hand, is based on HTTP, can be secured with SSL encryption, and allows for multiple users to connect to the same set of files. But unlike iFolder, WebDAV maintains only one copy of the shared folder and files — the original on the server. That prevents clients from continuing to work while disconnected from the server or the network as a whole.
There are distributed filesystems designed to operate in disconnected mode and with free software Linux implementations, notably the Coda project from Carnegie Mellon University. Coda is a complete filesystem rather than an add-on utility, however, and requires kernel-level support on the client machine. The Linux kernel has supported Coda for years, and it is supported by FreeBSD and NetBSD, but not by Windows or Mac OS X. Furthermore, Coda's disconnected mode operates by maintaining a temporary local cache on the client; when connected to the server, the server's copy of the file is used, just as in NFS, WebDAV, or other networked file systems.
Brockmeier said that he regards Dropbox as the only real comparable product on the market. Dropbox is a commercial service that provides shared online storage (with tiered free and paid accounts), but although its client-side program for GNOME's Nautilus file manager is open source, the server is proprietary.
The future
The current 3.7.2 release of iFolder is a welcome sight after more than a year without an update. But the Linux binaries are only available for SUSE, and the supported version — 10.3 — is no longer current. The Mono dependency is also old; iFolder 3.7.2 requires Mono 1.2.6, which was released in 2007.
Novell has set up project hosting at SourceForge.net, including user forums and wiki documentation, but so far the source code is only available through a Subversion checkout. McConnell said that Novell is committed to the project, however, and that reviving the project was slowed down by the need to do a full code review. He also posted to the project's user forum that work was underway to package the code for other Linux distributions, starting with OpenSUSE 11 and Debian.
The team is also working to update the software for compatibility with more recent releases of Mono, but improving support for other distributions will move much faster if those distributions join in the effort. There was widespread excitement in the Linux community when Novell announced the return of iFolder as an open source project. Hopefully that enthusiasm will be matched by contributions: with its combination of transparent replication, disconnected operation, and fine-grained user account management, iFolder holds significant promise.
System Applications
Database Software
PostgreSQL Weekly News
The April 26, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
Embedded Systems
Embedded Xen: Release xen-pxa270 (v1.0) (SourceForge)
Version 1.0 of Embedded Xen has been announced. "A port of the xen hypervisor on the ARM platform would permit to study the application of virtualisation in the embedded world. This release contains the Linux 2.6.18 tree (dom0/domU), hypervisor, miniOS. Only the hypervisor is booting with miniOS at the very early stage of bootstrapping process."
Interoperability
Samba 3.3.4 available for download
Version 3.3.4 of Samba has been announced. "This is the latest stable release of the Samba 3.3 series". See the release notes for more information.
Web Site Development
Midgard 8.09.5 released
Version 8.09.5 of the Midgard web content management system has been announced. "The Midgard Project has released the fifth maintenance release of Midgard 8.09 Ragnaroek LTS. Ragnaroek LTS is a Long Term Support version of the free software content management framework. The 8.09.5 "First Decade" release focuses on API and architecture cleanups in order to ease transition from Midgard 1.x series API to Midgard 2.x APIs."
Midgard2 9.03.0 released
Version 9.03.0 of the Midgard2 web framework has been announced. "In this release we provide Content Repository API bindings for the following programming languages: C, Python, PHP and Objective-C. D-Bus signals are used to inform different Midgard2 applications about things happening in the repository, enabling for example a PHP website and a Python background process to communicate with each other."
OpenLink Virtuoso (Open-Source Edition): New Release: v5.0.11 (SourceForge)
Version 5.0.11 of OpenLink Virtuoso has been announced. "Virtuoso is a scalable cross-platform server that combines SQL/RDF/XML Data Management with Web Application Server and Web Services Platform functionality. OpenLink Software is pleased to announce a new release of Virtuoso".
Desktop Applications
Audio Applications
XMMS2 DrMattDestruction released
The release entitled DrMattDestruction of the XMMS2 media player has been announced. "XMMS2 Team is proud to present a new release, as late as always. This time there has been huge changes "under the hook" with the new "xmmsv"." See the release notes for the full list of changes.
Desktop Environments
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Glade 3.6.3 (bug fixes)
- Global Menu 0.7.5 (bug fixes, performance improvements and translation work)
- iris 0.1.0 concurrency kit (initial release)
- libgdata 0.2.0 (new features, bug fixes and translation work)
- Passepartout 0.7.1 (bug fixes and code improvements)
- Rhythmbox 0.12.1 (bug fixes, code cleanup and translation work)
KDE Software Announcements
The following new KDE software has been announced this week:
- 2ManDVD 0.8.0 (new features, bug fixes and translation work)
- 2ManDVD 0.8.1 (bug fixes and translation work)
- 2ManDVD 0.8.2 (new features, bug fixes and translation work)
- 2ManDVD 0.8.3 (bug fix)
- eXaro 1.80.0 (new features, bug fixes and translation work)
- Image Commander 1.3 (new features and bug fixes)
- KRadio4 4.0.0-rc4 (bug fixes and translation work)
- Kwave 0.8.2 (new features and bug fixes)
- MyRT 0.1.3_alfa (new feature)
- QtCosmos 0.5 (unspecified)
- uspc 0.6 (new features, bug fixes and translation work)
- VariCAD 2009 1.04 (new features)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- intel-gpu-tools 1.0 (new features and bug fixes)
- libpciaccess 0.10.6 (new features, bug fixes and code cleanup)
- xf86-input-citron 2.2.2 (code cleanup and documentation work)
- xf86-video-mga 1.4.10 (bug fixes and code cleanup)
- xf86-video-siliconmotion 1.7.1 (new features, code cleanup and documentation work)
Packard on the state of Linux graphics
Keith Packard has posted an extensive summary of the state of the art in Linux graphics support and where things are going in the future. "In moving towards our eventual goal of a KMS/GEM/DRI2 world, we've felt obligated to avoid removing options until that goal worked best for as many people as possible. So, instead of forcing people to switch to brand new code that hasn't been entirely stable or fast, we've tried to make sure that each release of the driver has at least continued to work with the older options. However, some of the changes we've made have caused performance regressions in these older options, which doesn't exactly make people happy: the old code runs slow, and the new code isn't quite ready for prime time in all situations. One option here would be to stop shipping code and sit around working on the perfect driver, to be released soon after the heat-death of the universe."
Desktop Publishing
Scribus 1.3.3.13 stable released
Stable version 1.3.3.13 of the Scribus desktop publishing application has been announced. "This stable release adds the following: * Several fixes and improvements to Pdf exporter. * More translation and documentation updates. * Several fixes to protect against possible crashes. * Improvements to the Scripter to enable more features."
Educational Software
Sugar Labs announces Sugar on a Stick Beta 1
Sugar Labs has announced the Beta 1 release of Sugar on a Stick. "This version of the free open-source Sugar Learning Platform, available at www.sugarlabs.org for loading on any 1 Gb or greater USB stick, is designed to facilitate exploration of the award-winning Sugar interface beyond its original platform, the One Laptop per Child XO‑1, to such varied hardware as aging PCs and recent Macs to the latest netbooks."
Games
Ember 0.5.6 released (WorldForge)
The WorldForge game project has announced the release of Ember 0.5.6. "Ember is a 3d client for the WorldForge project. It uses the Ogre 3d graphics library for presentation and CEGUI for its GUI system. This release introduces a completely revamped lightning and shader model with real time shadows. It also includes major additions to the authoring interface, along with ingame area editing. The lightning model is completely new, and we would therefore appreciate any reports on artifacts and problems using different graphics cards."
Graphics
Inkscape: the road to 0.47
Version 0.47 of the Inkscape vector graphics editor is under development, with a release expected around June 15. "Following announcement about our participation at Google Summer of Code 2009 we declare that we are beginning to wrap up to release long anticipated 0.47 version of Inkscape."
GUI Packages
uCFLTK: FLTK For Microcontrollers
Michael Pearce has announced an effort to port the FLTK GUI toolkit to a microcontroller platform. "This is an idea at the moment, but I am seriously looking at Porting and simplifying parts of FLTK to be able to run on a micro-controller with a QVGA type TFT display. There will be a single hardware layer that gets changed to suit different target Micro-controllers and displays. I am looking at starting with the Microchip PIC32."
Imaging Applications
eLynx SDK: v3.0.2 released (SourceForge)
Version 3.0.2 of the eLynx SDK has been announced; it includes some new capabilities. "Windows & Linux image processing tools. Supports multi-core, 8 to 64-bit resolutions for grey,RGB,HLS,CIE Lab and Bayer images. Handles dng,tiff,fits,jpg,png file formats. eLynx lab is a GUI application based on wxWidgets & eLynx SDK."
Interoperability
Wine 1.1.20 announced
Version 1.1.20 of Wine has been announced. Changes include: "Show a dialog on application crashes. Much improved OLE copy/paste support. Various listview improvements. More Direct3D code cleanups. Various bug fixes."
Medical Applications
OpenQReg: 3.0.2 (SourceForge)
Version 3.0.2 of OpenQReg has been announced. "OpenQReg is a platform for building web based quality register solutions to be used primary for medical and health care applications. The provided platform makes use of and builds upon well known technologies such as Java, Tomcat and MySQL."
Multimedia
Elisa Media Center 0.5.37 released
Version 0.5.37 of Elisa Media Center has been announced. "Bugs fixed since 0.5.36: - 357097: Music scan partially fails on Ubuntu Jaunty - 361558: amp master doesn't KILL dead slaves - 330431: Previously played Audio track/video name appears on audio/video plugin - 366152: [win32] Integrate a browser control in the installer - 347174: Local videos thumbnail shown on Internet->video plugins"
Music Applications
guitarix 0.04.0-1 released
Version 0.04.0-1 of guitarix has been announced. "guitarix is a simple Linux Rock Guitar amplifier for jack(Jack Audio Connektion Kit) with one input and two outputs. Designed to get nice thrash/metal/rock/blues guitar sounds. . . . This release fix the probs with jackdmp and the register/unregister from the ports guitarix use for run jconv. Also included is a new Oscilloscope mode witch will show the wave by frame. The GUI is a little improved and some clean up's in the code have done."
Video Applications
PyAMF 0.4.2 released
Version 0.4.2 of PyAMF has been announced. "PyAMF is a lightweight library that allows Flash and Python applications to communicate via Adobe's ActionScript Message Format. This is a bugfix release".
Web Browsers
Firefox 3.0.10 released
There's another Firefox security release out there; this one fixes a new problem introduced with the 3.0.9 release. Expect the usual pile of distributor updates in the near future.
Firefox 3.5 Beta 4 now available
Version 3.5 Beta 4 of the Firefox web browser is out. "Firefox 3.5 (formerly known as Firefox 3.1) Beta 4 is now available for download. This milestone is focused on testing the core functionality provided by many new features and changes to the platform scheduled for Firefox 3.5."
Miscellaneous
HylaFAX 6.0.0 released
Version 6.0.0 of HylaFAX, an interface to FAX modems, has been announced. "HylaFAX 6.0 introduces several new features for HylaFAX: * PCL Support * hfaxd can sort list output (like sendq) arbitrarily for clients * I18n: HylaFAX client strings are now translated and available in different languages * IPv6 support * New permissions in hfaxd to allow for more admin control on who can see/modify jobs/faxes (see PublicJobQ/JobProtection/PublicRecvQ) * Powerful page range handling capabilities * Better faxq/notify integration as well as many improvements to the T.30 protocol code"
PyEnchant 1.5.2 released
Version 1.5.2 of PyEnchant, the spell checking package behind the AbiWord word processor, has been announced. "This release fixes compatibility with py2exe, which had been broken during the move from SWIG to ctypes. I've also included a unittest to ensure that such breakage doesn't happen again."
Languages and Tools
Caml
Caml Weekly News
The April 28, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
HTML
encutils 0.9 released
Version 0.9 of encutils has been announced. "Encoding detection collection for Python developed mainly for use in cssutils but may be useful standalone too. 0.9 is a bugfix release."
Perl
Parrot 1.1.0 released
Version 1.1.0 of Parrot has been announced. "Parrot is a virtual machine aimed at running all dynamic languages."
Rakudo Perl 6 development release #16 ("Bratislava")
Development release #16 of Rakudo Perl 6, an implementation of Perl 6 on the Parrot Virtual Machine, has been announced. "Due to the continued rapid pace of Rakudo development and the frequent addition of new Perl 6 features and bugfixes, we continue to recommend that people wanting to use or work with Rakudo obtain the latest source directly from the main repository at github." Click below for the release details.
Python
itools 0.60.1 released
Version 0.60.1 of itools has been announced. "itools is a Python library, it groups a number of packages into a single meta-package for easier development and deployment. New feature in itools.git, there are now facilities to start a process that will specialize in calling Git commands, and send the data back to the parent process."
Jython 2.5 Beta 4 released
Version 2.5 Beta 4 of Jython has been announced. "While no new features have been added since Beta 3, we have fixed a number of bugs. One of the bugs prompted an almost total re-write of our Tuple and List implementations, which is the reason that this is another beta and not a release candidate. Expect a release candidate real soon now."
pyOpenSSL 0.9 released
Version 0.9 of pyOpenSSL has been announced. "This release includes several new features and a very important bug fix".
PyPy 1.1 final released
Version 1.1 final of PyPy, a Python interpreter implementation and an advanced compiler, has been announced. "Welcome to the PyPy 1.1 release - the first release after the end of EU funding. This release focuses on making PyPy's Python interpreter more compatible with CPython (currently CPython 2.5) and on making the interpreter more stable and bug-free."
Shed Skin 0.1.1 released
Version 0.1.1 of Shed Skin has been announced. "I have recently released version 0.1.1 of Shed Skin, an experimental (restricted-)Python-to-C++ compiler. This version comes with 5 new example programs (for a total of 35 examples, at over 10,000 lines in total). The most interesting new example is Minilight, a global illumination renderer (or raytracer), that uses triangle primitives and an octree spatial index."
Python-URL! - weekly Python news and links
The April 24, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
Tcl-URL! - weekly Tcl news and links
The April 22, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Editors
Sources of E Text Editor released
The release of the source code for the wxWidgets-based E Text Editor has been announced. The code has been released under the Open Company License. "There has been many questions about whether the release of the source would make it possible to build a Linux version. The answer is yes. The source does build under Linux, it just needs a Linux version of the ecore library which will be released shortly. The editor could not have been build without the support of a lot of open source projects (most notably wxWidgets). So to give back, the Linux version will be totally free (as in beer)."
Test Suites
JUnit: 4.6 released (SourceForge)
Version 4.6 of JUnit has been announced. "JUnit is a simple framework for writing and running automated tests. As a political gesture, it celebrates programmers testing their own software. There are a few bug fixes included, and improvements to the core architecture that allow test reordering and parallelization for basic JUnit 3 and basic JUnit 4 tests and suites."
Version Control
bzr 1.13.2 released
Version 1.13.2 of the bzr version control system has been announced. "A regression was found in the 1.13.1 release. When bzr 1.13.1 and earlier push a stacked branch they do not take care to push all the parent inventories for the transferred revisions. This means that a smart server serving that branch often cannot calculate inventory deltas for the branch (because smart server does not/cannot open fallback repositories). Prior to 1.13 the server did not have a verb to stream revisions out of a repository, so that's why this bug has appeared now."
bzr 1.14 released
Version 1.14 of the bzr version control system has been announced. "New formats 1.14 and 1.14-rich-root supporting End-Of-Line (EOL) conversions, keyword templating (via the bzr-keywords plugin) and generic content filtering. End-of-line conversion is now supported for formats supporting content filtering."
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Microsoft's TomTom patents posted for patent review (Linux-Watch)
Linux-Watch covers an Open Invention Network (OIN) announcement that three of the eight patents cited in Microsoft's lawsuit against TomTom have been posted for prior art review by the Linux community. "The three patents cited by Microsoft that cover the FAT filesystem and related technology -- U.S. patents 5579517, 5758352, and 6256642 -- have been posted on the Post-Issue Peer-to-Patent website associated with Linux Defenders, says OIN. The patents also cover flash-erasable programmable ROM and a GUI related patent, said OIN CEO Keith Bergelt in a phone interview."
Interviews
How Apple Co-Founder Steve Wozniak Gets Things Done (Lifehacker)
Lifehacker interviews Steve Wozniak. "Lifehacker: A lot of our readers want to know if you use Linux at all, and what you think about where it is today. Steve Wozniak: I never got into Linux. I swear to God, it's only lack of time. I'm past the years of my life where I can really dig into something like running a Linux system. I'm very sympathetic to the whole idea; Linux people always think the way I want to think."
Resources
Music Notation Software for Linux: a Progress Report, Part 2 (Linux Journal)
Dave Phillips concludes his review of Music Notation Software for Linux series with part two, where he looks at MuseScore, NtEd, Noteflight and Outro. "In this article, I conclude my status report on the development of some of the most active notation software projects for Linux."
Miscellaneous
Google Android 'five weeks' from MIPS port (The Register)
The Register reports that Embedded Alley is porting Google's Android stack to the MIPS microprocessor. "Embedded Alley has yet to complete its MIPS port, but chief executive Paul Staudacher told us that an end date is less than five weeks away. Before the end of May, the company will release its Android kit for building devices using the low-cost Alchemy chips fashioned by the Cupertino superconductor operation, RMI Corp."
Europe Funds Secure Operating System Research (PCWorld)
PCWorld is reporting that funding for Minix research has been extended for five more years through a grant from the European Research Council. "The €2.5 million (US$3.3 million) grant will fund three researchers and two programmers, said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit in the Netherlands. [...] Tanenbaum developed Minix, an operating system based somewhat on Unix that has a small code base and implements strong security controls."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
EFF: Wiki Operator Sues Apple Over Bogus Legal Threats
The Electronic Frontier Foundation reports that BluWiki is suing Apple. "Late last year, after BluWiki users began a discussion about making some Apple iPods and iPhones interoperate with software other than Apple's own iTunes, Apple lawyers demanded removal of the content. In a letter to OdioWorks, the attorneys alleged that the discussions constituted copyright infringement and a violation of the Digital Millennium Copyright Act's (DMCA's) prohibition on circumventing copy protection measures. Fearing legal action by Apple, OdioWorks took down the discussions from the BluWiki site. OdioWorks filed the lawsuit today in order to vindicate its right to restore those discussions."
Commercial announcements
Ulteo releases Open Virtual Desktop 1.0
Ulteo has announced the release of version 1.0 of Open Virtual Desktop (OVD). "Now organizations can deliver to their PC users either Windows or Linux applications or a mix of the two on the same user desktop via the Ulteo Open Virtual Desktop."
New Books
Learning SQL, Second Edition - New from O'Reilly
O'Reilly has published the book Learning SQL, Second Edition by Alan Beaulieu.
Programming Ruby 1.9 -- New from Pragmatic Bookshelf
Pragmatic Bookshelf has published the book Programming Ruby 1.9 by Dave Thomas.
The Twitter Book -- New from O'Reilly
O'Reilly has published The Twitter Book by Tim O'Reilly and Sarah Milstein.
Resources
Android 1.5 firmware available
HTC (the manufacturer of the Android Dev Phone) has posted a set of Android 1.5 images for the ADP1. There are also reasonably straightforward instructions on flashing those images into a phone. So ADP1 owners can move to the new code now, rather more quickly than usual; for those who prefer the "JF" builds, a 1.5-based build is said to be forthcoming in the near future.
Calls for Presentations
RTLWS 2009 call for papers
A call for papers has gone out for the Eleventh Real-Time Linux Workshop (RTLWS 2009). The event takes place September 28-30, 2009 in Dresden, Germany; submissions are due by May 18. "Authors are invited to submit original work dealing with general topics related to real-time Linux research, experiments and case studies, as well as issues of integration of real-time and embedded Linux. A special focus will be on industrial applications and safety related systems."
WOOT'09 call for papers
A call for papers has gone out for WOOT'09, the 3rd USENIX Workshop on Offensive Technologies. "WOOT '09 will be co-located with the 18th USENIX Security Symposium (USENIX Security '09), which will take place August 10-14, 2009. Important Dates * Submissions due: May 26, 2009, 11:59 p.m. PDT".
Upcoming Events
Events: May 7, 2009 to July 6, 2009
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| May 4 - May 8 | JavaScript/Ajax Bootcamp at the Big Nerd Ranch | Atlanta, Georgia, USA |
| May 4 - May 7 | RailsConf 2009 | Las Vegas, NV, USA |
| May 6 - May 9 | Libre Graphics Meeting 2009 | Montreal, Quebec, Canada |
| May 6 - May 8 | Embedded Linux training | Maynard, USA |
| May 7 | NLUUG spring conference | Ede, The Netherlands |
| May 8 - May 10 | PyCon Italy 2009 | Florence, Italy |
| May 8 - May 9 | Linuxwochen Austria - Eisenstadt | Eisenstadt, Austria |
| May 8 - May 9 | Erlanger Firebird Conference 2009 | Erlangen-Nürnberg, Germany |
| May 11 | The Free! Summit | San Mateo, CA, USA |
| May 13 - May 15 | FOSSLC Summercamp 2009 | Ottawa, Ontario, Canada |
| May 15 - May 16 | CONFidence 2009 | Krakow, Poland |
| May 15 | Firebird Developers Day - Brazil | Piracicaba, Brazil |
| May 16 - May 17 | YAPC::Russia 2009 | Moscow, Russia |
| May 18 - May 19 | Cloud Summit 2009 | Las Vegas, NV, USA |
| May 19 - May 22 | PGCon PostgreSQL Conference | Ottawa, Canada |
| May 19 | Workshop on Software Engineering for Secure Systems | Vancouver, Canada |
| May 19 - May 22 | php\|tek 2009 | Chicago, IL, USA |
| May 19 - May 21 | Where 2.0 Conference | San Jose, CA, USA |
| May 19 - May 22 | SEaCURE.it | Villasimius, Italy |
| May 21 | 7th WhyFLOSS Conference Madrid 09 | Madrid, Spain |
| May 22 - May 23 | eLiberatica - The Benefits of Open Source and Free Technologies | Bucharest, Romania |
| May 23 - May 24 | LayerOne Security Conference | Anaheim, CA, USA |
| May 25 - May 29 | Ubuntu Developers Summit - Karmic Koala | Barcelona, Spain |
| May 27 - May 28 | EUSecWest 2009 | London, UK |
| May 28 | Canberra LUG Monthly meeting - May 2009 | Canberra, Australia |
| May 29 - May 31 | Mozilla Maemo Mer Danish Weekend | Copenhagen, Denmark |
| May 31 - June 3 | Techno Security 2009 | Myrtle Beach, SC, USA |
| June 1 - June 5 | Python Bootcamp with Dave Beazley | Atlanta, GA, USA |
| June 2 - June 4 | SOA in Healthcare Conference | Chicago, IL, USA |
| June 3 - June 5 | LinuxDays 2009 | Geneva, Switzerland |
| June 3 - June 4 | Nordic Meet on Nagios 2009 | Stockholm, Sweden |
| June 6 | PgDay Junín 2009 | Buenos Aires, Argentina |
| June 8 - June 12 | Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, GA, USA |
| June 10 - June 11 | FreedomHEC Taipei | Taipei, Taiwan |
| June 11 - June 12 | ShakaCon Security Conference | Honolulu, HI, USA |
| June 12 - June 13 | III Conferenza Italiana sul Software Libero | Bologna, Italy |
| June 12 - June 14 | Writing Open Source: The Conference | Owen Sound, Canada |
| June 13 | SouthEast LinuxFest | Clemson, SC, USA |
| June 14 - June 19 | 2009 USENIX Annual Technical Conference | San Diego, USA |
| June 17 - June 19 | Open Source Bridge | Portland, OR, USA |
| June 17 - June 19 | Conference on Cyber Warfare | Tallinn, Estonia |
| June 20 - June 26 | Beginning iPhone for Commuters | New York, USA |
| June 22 - June 24 | Velocity 2009 | San Jose, CA, USA |
| June 22 - June 24 | YAPC\|10 | Pittsburgh, PA, USA |
| June 24 - June 27 | LinuxTag 2009 | Berlin, Germany |
| June 24 - June 27 | 10th International Free Software Forum | Porto Alegre, Brazil |
| June 26 - June 28 | Fedora Users and Developers Conference - Berlin | Berlin, Germany |
| June 26 - June 30 | Hacker Space Festival 2009 | Seine, France |
| June 28 - July 4 | EuroPython 2009 | Birmingham, UK |
| June 29 - June 30 | Open Source China World 2009 | Beijing, China |
| July 1 - July 3 | OSPERT 2009 | Dublin, Ireland |
| July 1 - July 3 | ICOODB 2009 | Zurich, Switzerland |
| July 2 - July 5 | ToorCamp 2009 | Moses Lake, WA, USA |
| July 3 - July 11 | Gran Canaria Desktop Summit (GUADEC/Akademy) | Gran Canaria, Spain |
| July 3 | PHP'n Rio 09 | Rio de Janeiro, Brazil |
| July 4 | Open Tech 2009 | London, UK |
If your event does not appear here, please tell us about it.
Audio and Video programs
Linux Audio Conference 2009 videos available
Videos from the 2009 Linux Audio Conference have been posted (in Theora format). The video site is somewhat sparse in its commentary (OK, there's none at all), and the videos themselves show signs of minimal editing. But the essential information is there for audio enthusiasts who were unable to be at the conference.
Page editor: Forrest Cook