
LWN.net Weekly Edition for September 4, 2008

Feature removal sparks Git flamewar

By Jake Edge
September 3, 2008

Removing features from a tool is never easy. Once there is enough of a user base to complain about annoyances, there is also a vocal group that uses and likes those same annoyances. The recent removal of the git-foo style commands from Git is just such a case, but many users of those commands did not find out about the removal until after the change was made, which only served to increase their outrage.

Until version 1.6.0, Git always had two ways to invoke the same functionality: git foo and git-foo. This was done by installing many—usually more than 100—different entries into /usr/bin for all of the different git subcommands. Some were concerned that Git was polluting that directory, but the bigger issue was the effect on new users. Partially because of shell autocompletion, a new user might be overwhelmed by the number of different Git commands available; even regular users might find it difficult to find the command they are looking for if they have to sort through 100 or more.

Many of the Git subcommands are not regularly used. There are quite a number of "plumbing" commands that should rarely, if ever, be invoked directly by users. Those are best hidden from view, which is accomplished by moving them out of /usr/bin. This has been done for the 1.6.0 release, but Junio Hamano opened up a can of worms when he posted a request for discussion about taking the next step to the Git mailing list.

In the 1.6.0 release, the only things exposed in /usr/bin are the git binary itself and a few other utilities; the rest have been moved to /usr/libexec/git-core. The hard links for each of the git-foo commands have been maintained in the new location, which allows folks who still want the old behavior to get it by adding:

    PATH="$PATH:$(git --exec-path)"
to .bashrc (or some other startup file, depending on the shell). This would allow users—and, especially, scripts—to continue using the dashed versions of the commands.

Unfortunately, for many users, the first they heard about this change was when things stopped working after they installed 1.6.0. The Git team admittedly did not get the word out very well; by trying to be nice, they missed an opportunity to make users notice the change. As Hamano puts it:

But that niceness backfired. Many people seem to argue now that we should have annoyed people by throwing loud deprecation notices to stderr when they typed "git-foo", and we should have risked breaking their scripts iff they relied on not seeing anything extra on the stderr.

Hamano got caught in the middle to some extent, as he wasn't particularly in favor of the original change; but at the time it was decided, there were few advocates for keeping 100+ commands in /usr/bin. There were several complaints about having that many commands; chief among them was the confusion it causes for new users. By removing them from /usr/bin and providing an autocompletion script for bash that completes only a subset of the git subcommands, users will have fewer options to scan through—and to be scared of.

The original plan called for moving the dash-style commands out, which has been done, but also eventually removing the links for any of the git-foo commands that are implemented in the core git binary. Over time, much of the functionality that was handled by external commands has migrated into the main git program. It is the eventual removal of the links that Hamano is asking about in his message, but much of the response was flames about the step already taken; some could not see any advantage to moving the git-foo commands out of /usr/bin.

David Woodhouse is one of those who wants things to remain the same:

Just don't do it. Leave the git-foo commands as they were. They weren't actually hurting anyone, and you don't actually _gain_ anything by removing them. For those occasional nutters who _really_ care about the size of /usr/bin, give them the _option_ of a 'make install' without installing the aliases.

Several others agreed, but that particular horse had already left the barn. Throughout the thread, Linus Torvalds grew increasingly strident in pointing to the $PATH-based workaround, which effectively ended the discussion that Hamano was trying to have. For that workaround to continue working, the links must be installed in /usr/libexec/git-core. Though it strays from the original intent, it is a reasonable compromise, one that will serve git-traditionalists as well as new users and others who no longer want the git-foo syntax.

Two things have helped keep the controversy alive: some documentation, test, and example scripts still refer to dash-style commands and, worse, one must type man git-foo to get the man page for a subcommand. It is a convention within the Git community to use the dash style when referring to commands in text, which explains some of the usage. Because man requires a single argument, the dash style is used there as well, though git help foo is a reasonable alternative. For users who started relatively early with Git, and are aware of the dash-style commands, these examples further muddy the water.

It is a difficult problem. Projects must have room to change, but once users become used to a particular way of doing things, they will resist change—sometimes quite loudly. As Petr "Pasky" Baudis points out, though, Git is still evolving:

You can't ask us to stop making any incompatible changes - Git is still too young for that and it's UI got evolved, not designed. But we do document the changes we do, even though we might do a better job *spreading* the word.

The Git developers still see it as a young tool that may yet undergo some fairly substantial modifications, while the hardcore users see it as a fixed tool that they use daily—or more frequently—to get work done. The tension between those two views is what leads to flamewars like the one we have seen here. Certainly the Git folks could have done a much better job of getting the word out—Hamano was looking for suggestions on how to do that better in his original post—but users are going to have to be flexible as well.

Comments (22 posted)

DRI, BSD, and Linux

By Jonathan Corbet
September 2, 2008
The Direct Rendering Infrastructure project has long been working toward improved 3D graphics support in free operating systems. It is a crucial part of the desktop Linux experience, but, thus far, DRI development has been done in a relatively isolated manner. Development process changes which have the potential to make life better for Linux users are in the works, but, sometimes, that's not the only thing that matters.

The DRI project makes its home at freedesktop.org. Among other things, the project maintains a set of git repositories representing various views of the current state of DRI development (and the direct rendering manager (DRM) work in particular). This much is not unusual; most Linux kernel subsystems have their own repository at this point. The DRM repository is different, though, in that it is not based on any Linux kernel tree; it is, instead, an entirely separate line of development.

That separation is important; it means that DRM development is almost entirely disconnected from mainline kernel development. DRM patches going into the kernel must be pulled out of the DRM tree and put into a form suitable for merging, and any changes made within the kernel tree must be carefully carried back to the DRM tree by hand. So this work is not just an out-of-tree project; it's an entirely separate project producing code which is occasionally turned into a patch for the Linux kernel. It is not surprising that DRM and the mainline tend not to track each other well. As Jesse Barnes put it recently:

Things are actually worse than I thought. There are some fairly large differences between linux-core and upstream, some of which have been in linux-core for a long time. It's one thing to have an out-of-tree development process but another entirely to let stuff rot for months & years there.

The result of all this has been a lot of developer frustration, trouble getting code merged, concerns that the project is hard for new developers to join, and more. As the DRM developers look to merge more significant chunks of code (GEM, for example), the pressure for changes to the development process has been growing. So Dave Airlie's recent announcement of a proposed new DRM development process did not entirely come as a surprise. There are a number of changes being contemplated, but the core ones are these:

  • The DRM tree will be based on the mainline kernel, allowing for the easy flow of patches in both directions. The old tree will be no more.

  • A more standard process for getting patches to the upstream kernel will be adopted; these will include standard techniques like topic branches and review of patches on the relevant mailing lists.

  • Users of the DRM interface will not ship any releases depending on DRM features which are not yet present in the mainline kernel.

The result of all this, it is hoped, will be a development process which is more efficient, more tightly coupled to the upstream kernel, and more accessible for developers outside of the current "DRM cabal." These are all worthy objectives, but there may also be a cost associated with these changes resulting from the unique role the DRI/DRM project has in the free software community.

There is clearly a great deal of code shared between Linux and other free operating systems, and with the BSD variants in particular. But that sharing tends not to happen at the kernel level. The Linux kernel is vastly different from anything BSD-derived, so moving code between them is never a straightforward task. GPL-licensed code is not welcome in BSD-licensed kernels, naturally, making it hard for code to move from Linux to BSD even when it makes sense from a technical point of view. When code moves from BSD to Linux, it often brings a certain amount of acrimony with it. So, while ideas can and do move freely, there is little sharing of code between free kernels.

One significant exception is the DRM project, which is also used in most versions of BSD. One of the reasons behind the DRM project's current repository organization is the facilitation of that cooperation; there are separate directories for Linux code, BSD code, and code which is common to both. Developers from all systems contribute to the code (though the BSD developers are far outnumbered by their Linux counterparts), and they are all able to use the code in their kernels. When working in the common code directory, developers know to be careful about not breaking other systems. All told, it is a bit of welcome collaboration in an area where development resources have tended to be in short supply - even if it benefits the BSD side more than Linux.

Changing the organization of the DRM tree to be more directly based on Linux seems unlikely to make life easier for the BSD developers. Space for BSD-specific code will remain available in the DRM repository, but turning the "shared-code" directory into code in the Linux driver tree will make its shared status less clear and, thus, easier for Linux developers to break on the BSD side. Additionally, it seems clear that this code may become more Linux-specific; Dave Airlie says:

However I am sure that we will see more of a push towards using Linux constructs in dri drivers, things like idr, list.h, locking constructs are too much of a pain to reinvent for every driver.

Much of this functionality can be reproduced through compatibility layers on the BSD side, but it must carry a bit of a second-class citizen feel. Dave has, in fact, made that state of affairs clear:

The thing is you can't expect equality, its just not possible, there are about 10-15 Linux developers, and 1 Free and 1 Open BSD developer working on DRM stuff at any one time, so you cannot expect the Linux developers to know what the BSD requirements are.

The fact that fewer people will be able to commit to the new repository - in fact, it may be limited to Dave Airlie - also does not help. So FreeBSD developer Robert Noland, while calling this proposal "the most fair" of any he has heard, is far from sure that he will be able to work with it:

I am having a really difficult time seeing what benefit I get from continuing to work in drm.git with this proposed model. While all commits to master going through the mailing list, I don't anticipate that I have any veto power or even delay powers until I can at least prevent imports from breaking BSD. Then once I do get it squared away, I'm still left having to send those to the ML and wait for approval to push the fixes. I can just save myself that part of the hassle and work privately. If I'm going to have to hand edit and merge every change, I don't see how it is really any harder to do that in my own repo, where I'm only subject to FreeBSD rules.

On the other hand, it's worth noting that OpenBSD developer Owain Ainsworth already works in his own repository and seems generally supportive of these changes.

Given the difference between the numbers of Linux-based and BSD-based developers, it seems almost certain that a more Linux-friendly process will win out. There is one rumored change which will not be happening, though: nobody is proposing to relicense the DRM code to the GPL. The DRM developers are only willing to support BSD to a certain point, but they certainly are not looking to make life harder for the BSD community. So they will try to accommodate the BSD developers while moving to a more Linux-centric development model; that is how things are likely to go until such a time as the BSD community is able to bring more developers to the party.

Comments (28 posted)

The Kernel Hacker's Bookshelf: UNIX Internals

September 3, 2008

This article was contributed by Valerie Henson

Back in 2001, I landed my (then) dream job as a full-time Linux kernel developer and distribution maintainer for a small embedded systems company. I was thrilled - and horrified. I'd only been working as a programmer for a couple of years and I was sure it was only a matter of time before my new employer figured out they'd hired an idiot. The only solution was to learn more about operating systems, and quickly. So I pulled out my favorite operating systems textbook and read and re-read it obsessively over the course of the next year. It worked well enough that my company tried very hard to convince me not to quit when I got bored with my "dream job" and left to work at Sun.

That operating systems textbook was UNIX Internals by Uresh Vahalia. UNIX Internals is a careful, detailed examination of multiple UNIX implementations as they evolved over time, from the perspective of both the academic theorist and the practical kernel developer. What makes this book particularly valuable to the practicing operating systems developer is that the review of each operating systems concept - say, processes and threads - is accompanied by descriptions of specific implementations and their histories - say, threading in Solaris, Mach, and Digital UNIX. Each implementation is then compared on a number of practical levels, including performance, effect on programming interfaces, portability, and long-term maintenance burden - factors that Linux developers care passionately about, but are seldom considered in the academic operating systems literature.

UNIX Internals was published in 1996. A valid question is whether a book on the implementation details of UNIX operating systems published so long ago is still useful today. For example, Linux is only mentioned briefly in the introduction, and many of the UNIX variants described are now defunct. It is true that UNIX Internals holds relatively little value for the developer actively staying up to date with the latest research and development in a particular area. However, my personal experience has been that many of the problems facing today's Linux developers are described in this book - and so are many of the proposed solutions, complete with the unsolved implementation problems. More importantly, the analysis is often detailed enough that it describes exactly the changes needed to improve the technique, if only someone would take the time to implement them.

In the rest of this review, we'll cover two chapters of UNIX Internals in detail, "Kernel Memory Allocation" and "File System Implementations." The chapter on kernel memory allocation is an example of the historical, cross-platform review and analysis that sets this book apart, covering eight popular allocators from several different flavors of UNIX. The chapter on file system implementations shows how lessons learned from the oldest and most basic file system implementations can be useful when solving the latest and hottest file system design problems.

Kernel Memory Allocation

The kernel memory allocator (KMA) is one of the most performance-critical kernel subsystems. A poor KMA implementation will hurt performance in every code path that needs to allocate or free memory. Worse, it will fragment and waste precious kernel memory - memory that can't be easily freed or paged out - and pollute hardware caches with instructions and data used for allocation management. Historically, a KMA was considered pretty good if it only wasted 50% of the total memory allocated by the kernel.

Vahalia begins with a short conceptual description of kernel memory allocation and then immediately dives into practical implementation, starting with page-level allocation in BSD. Next, he describes memory allocation in the very earliest UNIX systems: a collection of fixed-size tables for structures like inodes and process table entries, occasional "borrowing" of blocks from the buffer cache, and a few subsystem-specific ad hoc allocators. This primitive approach required a great deal of tuning, wasted a lot of memory, and made the system fragile.

What constitutes a good KMA? After a quick review of the functional requirements, Vahalia lays out the criteria he'll use to judge the allocators: low waste (fragmentation), good performance, a simple interface appropriate for many different users, good alignment, efficiency under changing workloads, the ability to reassign memory from one buffer size to another, and integration with the paging system. He also takes into consideration more subtle points, such as the cache and TLB footprint of the KMA's code, along with cache and lock contention in multi-processor systems.

The first KMA reviewed is the resource map allocator, an extremely simple allocator using a list of <base, size> pairs describing each free segment of memory, sorted by base address. The charms of the resource map allocator include simplicity and allocation of exactly the size requested; the vices include high fragmentation and poor performance under nearly every workload. Even this allocation algorithm is useful under the right circumstances; Vahalia describes several subsystems that still use it (System V semaphore allocation and management of free space in directory blocks on some systems) and some minor tweaks that improve the algorithm. One tweak to the resource map allocator keeps the description of each free region in the first few bytes of the region, a technique later used in the state-of-the-art SLUB allocator in the Linux kernel. This is an example of how even the oldest and clunkiest algorithms can influence the design of the latest and greatest.
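
The core of such an allocator fits in a few lines of C. Here is a minimal sketch, doing first-fit allocation over a sorted array of <base, size> pairs; the names and sizes are illustrative, not taken from any particular kernel:

    struct map_entry {
        unsigned long base;    /* start of a free segment */
        unsigned long size;    /* its length */
    };

    static struct map_entry map[128];    /* free segments, sorted by base */
    static int nmap;                     /* entries currently in use */

    static unsigned long rmalloc(unsigned long size)
    {
        int i;

        for (i = 0; i < nmap; i++) {
            if (map[i].size >= size) {    /* first fit */
                unsigned long base = map[i].base;

                map[i].base += size;
                map[i].size -= size;
                /* a full version would drop now-empty entries here;
                   the matching rmfree() would coalesce neighbors */
                return base;
            }
        }
        return 0;    /* no free segment is large enough */
    }

The linear search is exactly where the poor performance comes from, and returning odd-sized leftovers to the map is where the fragmentation comes from.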

Each following KMA is discussed in terms of the problems it solves from previous allocators, along with the problems it introduces. The resource map's sorted list of base/size pairs is followed by power-of-two free lists with a one-word in-buffer header (better performance and low external fragmentation, but high internal fragmentation, especially for exact power-of-two allocations), the McKusick-Karels allocator (power-of-two free lists optimized for power-of-two allocation; extremely fast, but prone to external fragmentation), the buddy allocator (buffer splitting on power-of-two boundaries plus coalescing of adjacent free buffers; poor performance due to unnecessary splitting and coalescing), and the lazy buddy allocator (buddy plus delayed buffer coalescing; good steady-state performance but unpredictable under changing workloads). The accompanying diagrams of the data structures and buffers used to implement each allocator are particularly helpful in understanding the structure of the allocators.
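
The buddy scheme, in particular, reduces to a single address computation: a block's buddy differs from its offset only in the bit corresponding to the block's size. A sketch, assuming power-of-two sizes and size-aligned offsets within the arena:

    /* For a free block of 'size' bytes (a power of two) located at
     * arena offset 'off' (aligned to 'size'), the buddy is found by
     * flipping one bit. If a block and its buddy are both free, they
     * coalesce into a block of twice the size; splitting works the
     * other way, yielding halves at 'off' and 'off + size'. */
    static inline unsigned long buddy_of(unsigned long off, unsigned long size)
    {
        return off ^ size;
    }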

After covering the simpler KMAs, we get into more interesting territory: the zone allocator from Mach, the hierarchical allocator from Dynix, and the SLAB allocator, originally implemented on Solaris and later adopted by several UNIXes, including Linux and the BSDs. Mach's zone allocator is the only fully garbage-collected KMA studied, with the concomitant unpredictable system-wide performance slowdowns during garbage collection, which would strike it from most developers' lists of useful KMAs. But as with the resource map allocator, we still have lessons to learn from the zone allocator. Many of the features of the zone allocator also appear in the SLAB allocator, commonly considered the current best-of-breed KMA.

The zone allocator creates a "zone" of memory reserved for each class of object allocated (e.g., inodes), similar to kmem caches in the later SLAB allocator. Pages are allocated to a zone as needed, up to a limit set at zone allocation time. Objects are packed tightly within each zone, even across pages, for very low internal fragmentation. Anonymous power-of-two zones are also available. Each zone has its own free list and once a zone is set up, allocation and freeing simply add and remove items from the per-zone free list (free list structures are also allocated from a zone). Memory is reclaimed on a per-page basis by the garbage collector, which runs as part of the swapper task. It uses a two-pass algorithm: the first pass counts up the number of free objects in each page, and the second pass frees empty pages. Overall, the zone allocator was a major improvement on previous KMAs: fast, space efficient, and easy to use, marred only by the inefficient and unpredictable garbage collection algorithm.

The next KMA on the list is the hierarchical memory allocator for Dynix, which ran on the highly parallel Sequent S2000. One of the major designers and implementers is our own Paul McKenney, familiar to many LWN readers as the progenitor of the read-copy-update (RCU) system used in many places in the Linux kernel. The goal of the Dynix allocator was efficient parallel memory allocation, in particular avoiding lock contention between processors. The solution was to create several layers in the memory allocation system, with per-cpu caches at the bottom and collections of large free segments at the top. As memory is freed or allocated, regions move up and down one level of the hierarchy in batches. For example, each per-cpu cache has two free lists, one in active use and the other in reserve. When the active list runs out of free buffers, the free buffers from the reserve list are moved onto it, and the reserve list replenishes itself with buffers from the global list. All the work requiring synchronization between multiple CPUs happens in one big transaction, rather than incurring synchronization overhead on each buffer allocation.
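
The per-CPU layer can be sketched as two free lists with a batched refill; this is an illustrative reconstruction of the scheme described above, with invented names (global_get_batch() stands in for the locked transaction against the upper layer):

    struct buf { struct buf *next; };

    struct percpu_cache {
        struct buf *active;     /* allocations come from here, lock-free */
        struct buf *reserve;    /* a full batch held in reserve */
    };

    extern struct buf *global_get_batch(void);    /* the only locked step */

    static struct buf *pc_alloc(struct percpu_cache *pc)
    {
        struct buf *b;

        if (!pc->active) {                        /* active list empty: */
            pc->active = pc->reserve;             /* promote the reserve */
            pc->reserve = global_get_batch();     /* refill in one batch */
        }
        if (!pc->active)
            return NULL;
        b = pc->active;
        pc->active = b->next;
        return b;
    }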

The Dynix allocator was a major advance: 3-5 times faster than the BSD allocator even on a single CPU. Its memory reclamation system was far more efficient than the zone allocator's, performed on an on-going basis with a bounded worst-case cost for each operation. Performance on SMP systems was unparalleled.

The final KMA in this chapter is the SLAB allocator, initially implemented on Solaris and later re-implemented on Linux and BSD. The SLAB allocator refined some existing techniques (simple allocation/free computations for small cache footprint, per-object caches) and introduced several new ones (cache coloring, efficient object reuse). The result is an allocator that was both the best performing and the most efficient by a wide margin - only 14% fragmentation versus 27% for the SunOS 4.1.3 sequential-fit allocator, 45% for the 4.4BSD McKusick-Karels allocator, and 46% for the SunOS 5.x buddy allocator.

Like the zone allocator, SLAB allocates per-object caches (along with anonymous caches in useful sizes) called kmem caches. Each cache has optional, associated constructor and destructor functions, run on the objects in a newly allocated page and in a page about to be freed, respectively (though the destructor has since been removed from the Linux allocator). Each cache is a doubly-linked list of slabs - large contiguous chunks of memory. Each slab keeps its slab data structure at the end of the slab, and divides the rest of the space into objects. Any leftover free space in the slab is divided between the beginning and end of the objects in order to vary the offset of objects with respect to the CPU cache, improving cache utilization (in other words, cache coloring). Each object has an associated 4-byte free list pointer.

The slabs within each kmem cache are in a doubly linked list, sorted so that free slabs are located at one end, fully allocated slabs at the other, and partially allocated slabs in the middle. Allocations always come from partially allocated slabs before touching free slabs. Freeing an object is simple: since slabs are always the same size and alignment, the base address of the slab can be calculated from the address of the object being freed. This address is used to find the slab on the doubly linked list. Free counts are maintained on an on-going basis. When memory pressure occurs, the slab allocator walks the kmem caches freeing the free slabs at the end of the cache's slab list. Slabs for larger objects are organized differently, with the slab management structure allocated separately and additional buffer management data included.
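
That free-path computation is worth spelling out; here is a sketch assuming one-page slabs with the slab structure kept in the last bytes of the page (the sizes and names are illustrative):

    #define SLAB_SIZE 4096UL    /* one page per slab, for small objects */

    struct slab {
        struct slab *next, *prev;    /* position on the kmem cache's list */
        int free_count;              /* free objects remaining in this slab */
    };

    /* Slabs are SLAB_SIZE bytes long and SLAB_SIZE-aligned, so masking
     * any object's address yields the base of its slab; the slab
     * structure sits at the very end of the slab. */
    static inline struct slab *slab_of(void *obj)
    {
        unsigned long base = (unsigned long)obj & ~(SLAB_SIZE - 1);

        return (struct slab *)(base + SLAB_SIZE - sizeof(struct slab));
    }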

This section of UNIX Internals has aged particularly well, partly because the SLAB allocator continues to work well on modern systems. As Vahalia notes, the SLAB allocator initially lacked optimizations for multi-processor systems, but these were added shortly afterward, using many of the same techniques as the Dynix hierarchical allocator. Since then, most production kernel memory allocators have been SLAB-based. Recently, Christoph Lameter rewrote SLAB to get the SLUB allocator for Linux; both are available as kernel configuration options. (The third option, the SLOB allocator, is not related to SLAB - it is a simple allocator optimized for small embedded systems.) When viewed in isolation, the SLAB allocator may appear arbitrary or over-complex; when viewed in the context of previous memory allocators and their problems, the motivation behind each design decision is intuitive and clear.

File System Implementations

UNIX Internals includes four chapters on file systems, covering the user and kernel file system interface (VFS/vnode), implementations of on-disk and in-memory file systems, distributed/network file systems, and "advanced" file system topics - journaling, log-structured file systems, etc. Despite the intervening years, these four chapters are the most comprehensive and practical description of file systems design and implementation I have yet seen. I definitely recommend it over UNIX File System Design and Implementation - a massive sprawling book which lacks the focus and advanced implementation details of UNIX Internals.

The chapter on file system implementations is too packed with useful detail to review fully in this article, so I'll focus on the points that are relevant to current hot file system design problems. The chapter describes the System V File System (s5fs) and Berkeley Fast File System (FFS) implementations in great detail, followed by a survey of useful in-memory file systems, including tmpfs, procfs (a.k.a. the /proc file system), an early variant of a device file system called specfs, and a sysfs-style interface for managing processors. This chapter also covers the implementation of buffer caches, inode caches, directory entry caches, etc. One of the features of this chapter (as elsewhere in the book) is the carefully chosen bibliography. Bibliographies in research papers do double duty as demonstrations of the authors' breadth of knowledge in the area, and so tend to be cluttered with marginal references; the per-chapter bibliographies in UNIX Internals list only the most relevant publications and make excellent supplementary reading guides.

The System V File System (s5fs) evolved from the first UNIX file system. The on-disk layout consists of a boot block, followed by a superblock, followed by a single monolithic inode table; the remainder of the disk is used for data and indirect blocks. File data blocks are located via a standard single/double/triple indirect block scheme. s5fs has no block or inode allocation bitmaps; instead it maintains on-disk free lists. The inode free list is partial; when no more free inodes are on the list, it is replenished by scanning the inode table. Free blocks are tracked in a singly linked list rooted in the superblock - a truly terrifying design from the point of view of file system repair, especially given the lack of backup superblocks.
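
The indirect-block scheme translates a logical block number into a disk block with a fixed cascade of lookups. A sketch, with read_indirect() standing in as an assumed helper that reads one pointer out of an indirect block, and illustrative constants:

    #define NDIR 10     /* direct block pointers in the inode */
    #define NPTR 256    /* pointers per indirect block (e.g. 1KB / 4 bytes) */

    extern long read_indirect(long blk, long idx);    /* assumed helper */

    /* addr[] holds the inode's 13 block pointers: NDIR direct entries,
     * then single, double, and triple indirect pointers. */
    long bmap(const long *addr, long lbn)
    {
        if (lbn < NDIR)
            return addr[lbn];                         /* direct */
        lbn -= NDIR;
        if (lbn < NPTR)
            return read_indirect(addr[NDIR], lbn);    /* single indirect */
        lbn -= NPTR;
        if (lbn < (long)NPTR * NPTR)                  /* double indirect */
            return read_indirect(read_indirect(addr[NDIR + 1], lbn / NPTR),
                                 lbn % NPTR);
        return -1;    /* triple indirect: the obvious third level */
    }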

In many respects, s5fs is simultaneously the simplest and the worst UNIX file system possible: its throughput was commonly as little as 5% of the raw disk bandwidth, it was easily corrupted, it had a 14 character limit on file names, and so on. On the other hand, elements of the s5fs design have come back into vogue, often without addressing the inherent drawbacks still unsolved in the intervening decades.

The most striking example of a new/old design principle illustrated by s5fs is the placement of most of the metadata in one spot. This turned out to be a key performance problem for s5fs, as every uncached file read virtually guaranteed a disk seek of non-trivial magnitude between the location of the metadata at the beginning of the disk and the file data, located anywhere except the beginning of the disk. One of the major advances of FFS was to distribute inodes and bitmaps evenly across the disk and allocate associated file data and indirect blocks nearby. Recently, collecting metadata in one place has returned as a way to optimize file system check and repair time as well as other metadata-intensive operations. It also appears in designs that keep metadata on a separate high-performance device (usually solid state storage).

The problems with these schemes are the same as the first time around. For the fsck optimization case, most normal workloads will suffer from the required seek for reads of file data from uncached inodes (in particular, system boot time would suffer greatly). In the separate metadata device case, the problem of keeping a single, easily-corrupted copy of important metadata returns. Currently, most solid-state storage is less reliable than disk, yet most proposals to move file system metadata to solid state storage make no provision for backup copies on disk.

Another cutting-edge file system design issue first encountered in s5fs is backup, restore, and general manipulation of sparse files. System administrators quickly discovered that it was possible to create a user-level backup that could not be restored, because the tools would attempt to actually write (and allocate) the zero-filled unallocated portions of sparse files. Even smarter tools that did not explicitly write zero-filled portions of files still had to pointlessly copy pages of zeroes out of the kernel when reading sparse files. In general, the file and socket I/O interface requires a lot of ultimately unnecessary copying of file data into and out of the kernel for common operations. It has only been in the last few years that more sophisticated file system interfaces have been proposed and implemented, including SEEK_HOLE/SEEK_DATA and splice() and friends.
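
As a small illustration of the newer interface, this example walks a file's allocated extents with SEEK_DATA/SEEK_HOLE, never reading (or copying) the holes at all; it is guarded at compile time because not every system defines those constants:

    #include <stdio.h>
    #include <unistd.h>

    #ifdef SEEK_HOLE
    /* Print each allocated extent of an open file descriptor. */
    static void list_extents(int fd)
    {
        off_t data = 0, hole;

        while ((data = lseek(fd, data, SEEK_DATA)) >= 0) {
            hole = lseek(fd, data, SEEK_HOLE);    /* end of this data run */
            printf("data: %lld..%lld\n", (long long)data, (long long)hole);
            data = hole;
        }
    }
    #endif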

The chapters on file systems are frustratingly out of date, especially with regard to advances in on-disk file system design. You'll find little or no discussion of copy-on-write file systems, extents, btrees, or file system repair outside of the context of non-journaled file systems. Unfortunately, I can't offer much in the way of a follow-up reading list; most of the papers in my file systems reading list are covered in this book (exceptions include the papers on soft updates, WAFL, and XFS). File systems developers seem to publish less often than they used to; often the options for learning about the cutting edge are reading the code, browsing the project wiki, and attending presentations from the developers. Your next opportunity for the latter is the Linux Plumbers Conference, which has a number of file system-related talks.

Another major flaw in the book, and one of the few places where Vahalia was charmed by an on-going OS design fad, is the near-complete lack of coverage of TCP/IP and other networking topics (the index entry for TCP/IP lists only two pages!). Instead, we get an entire chapter devoted to streams, at the time considered the obvious next step in UNIX I/O. If you want to learn more about UNIX networking design and implementation, this is the wrong book; buy some of the Stevens and Comer networking books instead.

Summary

UNIX Internals was the original inspiration for the Kernel Hacker's Bookshelf series, simply because you could always find it on the bookshelf of every serious kernel hacker I knew. As the age of the book is its most serious weakness, I originally intended to wait until the planned second edition was released before reviewing it. To my intense regret, the planned release date came and went and the second edition now appears to have been canceled.

UNIX Internals is not the right operating systems book for everyone; in particular, it is not a good textbook for an introductory operating systems course (although I don't think I suffered too much from the experience). However, UNIX Internals remains a valuable reference book for the practicing kernel developer and a good starting point for the aspiring kernel developer.

Comments (38 posted)

Page editor: Jonathan Corbet

Security

Find SQL injection vulnerabilities with sqlmap

By Jake Edge
September 3, 2008

SQL injections are a particularly nasty type of web application vulnerability that can lead to loss or disclosure of the contents of a database. Testing a web application to find SQL injection holes can be a tedious process, which is where the sqlmap tool may come in handy. sqlmap automates the process of testing a particular web page for various kinds of SQL injection flaws.

Sqlmap is a command-line-driven Python application that can help in both finding and exploiting SQL injections. Given a URL and the parameter names of interest (from HTML forms or GET parameters), it tries to determine which of those parameters cause different output based on their value, indicating that they control the dynamic behavior of the application. Those parameters are then tested by repeatedly making HTTP requests with slightly different values. Each of the values passed corresponds to a SQL injection technique, such as appending a single quote. Based on whether the HTML response differs from the original response, the potential for a SQL injection can be inferred.

The tool also tests an often overlooked input source: cookies. The user can specify a cookie value which the tool will then manipulate to attempt a SQL injection via the cookie. Since many applications store their session information in a database using the cookie value as a key, this is a relatively common route to SQL injection—one that penetration tests sometimes miss.

While it does help remove some of the tedium involved in testing for SQL injections, sqlmap is by no means a fully automated solution. A fair amount of work is required to find a vulnerable parameter. Once a vulnerability has been found, though, a great deal of information, including database contents, can be retrieved with a single command.

Like many security tools, sqlmap can be used by those of malicious intent rather easily. The automated retrieval of database passwords and contents from a vulnerable application is particularly powerful—and thus dangerous. For some database installations, there is even a mode that will get a shell prompt on the server as the user that runs the database application.

Because it is free software, sqlmap is very useful for understanding SQL injections and, perhaps more importantly, what kinds of things an attacker can do by abusing a vulnerable application. There is excellent documentation, both for developers and users. The sqlmap project recently released version 0.6; the tool is certainly worth a look for anyone interested in testing a web application or curious about SQL injection in general.

Comments (1 posted)

New vulnerabilities

ruby: multiple vulnerabilities

Package(s): ruby    CVE #(s): CVE-2008-3655 CVE-2008-3656 CVE-2008-3657
Created: September 1, 2008    Updated: December 17, 2008
Description:

From the CVE entries:

CVE-2008-3655: Ruby 1.8.5 and earlier, 1.8.6 through 1.8.6-p286, 1.8.7 through 1.8.7-p71, and 1.9 through r18423 does not properly restrict access to critical variables and methods at various safe levels, which allows context-dependent attackers to bypass intended access restrictions via (1) untrace_var, (2) $PROGRAM_NAME, and (3) syslog at safe level 4, and (4) insecure methods at safe levels 1 through 3.

CVE-2008-3656: Algorithmic complexity vulnerability in WEBrick::HTTP::DefaultFileHandler in WEBrick in Ruby 1.8.5 and earlier, 1.8.6 through 1.8.6-p286, 1.8.7 through 1.8.7-p71, and 1.9 through r18423 allows context-dependent attackers to cause a denial of service (CPU consumption) via a crafted HTTP request that is processed by a backtracking regular expression.

CVE-2008-3657: The dl module in Ruby 1.8.5 and earlier, 1.8.6 through 1.8.6-p286, 1.8.7 through 1.8.7-p71, and 1.9 through r18423 does not check "taintness" of inputs, which allows context-dependent attackers to bypass safe levels and execute dangerous functions by accessing a library using DL.dlopen.

Alerts:
Gentoo 200812-17 ruby 2008-12-16
Red Hat RHSA-2008:0981-02 ruby 2008-12-04
Mandriva MDVSA-2008:226 ruby 2008-11-06
CentOS CESA-2008:0897 ruby 2008-10-24
CentOS CESA-2008:0896 ruby 2008-10-21
Red Hat RHSA-2008:0897-01 ruby 2008-10-21
Red Hat RHSA-2008:0896-01 ruby 2008-10-21
Red Hat RHSA-2008:0895-02 ruby 2008-10-21
Debian DSA-1652-1 ruby1.9 2008-10-12
Debian DSA-1651-1 ruby1.8 2008-10-12
Ubuntu USN-651-1 ruby1.8 2008-10-10
Fedora FEDORA-2008-8736 ruby 2008-10-09
Fedora FEDORA-2008-8738 ruby 2008-10-09
rPath rPSA-2008-0264-1 ruby 2008-08-31

Comments (none posted)

slash: SQL injection, cross-site scripting

Package(s): slash    CVE #(s): CVE-2008-2231 CVE-2008-2553
Created: September 2, 2008    Updated: September 3, 2008
Description: From the Debian alert: It has been discovered that Slash, the Slashdot Like Automated Storytelling Homepage suffers from two vulnerabilities related to insufficient input sanitation, leading to execution of SQL commands (CVE-2008-2231) and cross-site scripting (CVE-2008-2553).
Alerts:
Debian DSA-1633-1 slash 2008-09-01

Comments (none posted)

wordnet: stack and heap overflows

Package(s): wordnet    CVE #(s): CVE-2008-2149
Created: September 2, 2008    Updated: October 7, 2008
Description: From the Debian alert: Rob Holland discovered several programming errors in WordNet, an electronic lexical database of the English language. These flaws could allow arbitrary code execution when used with untrusted input, for example when WordNet is in use as a back end for a web application.
Alerts:
Gentoo 200810-01 wordnet 2008-10-07
Debian DSA-1634-2 wordnet 2008-09-20
Mandriva MDVSA-2008:182-1 wordnet 2008-09-15
Mandriva MDVSA-2008:182 wordnet 2008-09-02
Debian DSA-1634-1 wordnet 2008-09-01

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current 2.6 development kernel is 2.6.27-rc5, released on August 28. "The most exciting (well, for me personally - my life is apparently too boring for words) was how we had some stack overflows that totally corrupted some basic thread data structures. That's exciting because we haven't had those in a long time. The cause turned out to be a somewhat overly optimistic increase in the maximum NR_CPUS value, but it also caused some introspection about our stack usage in general." More excitement can be found in the full changelog.

Fixes continue to flow into the mainline repository; the 2.6.27-rc6 prepatch can be expected sometime soon.

No stable kernel releases have been made over the last week. The 2.6.25.17 and 2.6.26.4 stable updates are in the review process as of this writing; they can be expected on or after September 6.

Comments (4 posted)

Kernel development news

Quotes of the week

Quite frankly, most programmers aren't "supposedly bad". And if you think that the hard-RT "real man" programmers aren't bad, I really have nothing to say.
-- Linus Torvalds

"real man" programmers stare at the code in Zen contemplation and debug by powercycling - thats one thing even hard RT processes can't beat.
-- Alan Cox

The last burst of checkins has brought Tux3 to the point where it undeniably acts like a filesystem: one can write files, go away, come back later and read those files by name. We can see some of the hoped for attractiveness starting to emerge: Tux3 clearly does scale from the very small to the very big at the same time. We have our Exabyte file with 4K blocksize and we can also create 64 Petabyte files using 256 byte blocks. How cool is that? Not much chance for internal fragmentation with 256 byte blocks.
-- Daniel Phillips

Comments (2 posted)

Linux 3.0?

By Jake Edge
September 3, 2008

The Linux kernel summit is happening this month, so various discussion topics are being tossed around on the Ksummit-2008-discuss mailing list. Alan Cox suggested a Linux release that would "throw out" some accumulated, unmaintained cruft as a topic to be discussed. Cox would like to see that release be well publicized, with a new release number, so that the intention of the release would be clear. While there will be disagreements about which drivers and subsystems can be removed, participants in the thread seem favorably disposed to the idea—at least enough that it should be discussed.

There is already a process in place for deprecating and eventually removing parts of the kernel that need it, but it is somewhat haphazardly used. Cox proposes:

At some point soon we add all the old legacy ISA drivers (barring the odd ones that turn up in embedded chipsets on LPC bus) into the feature-removal list and declare an 'ISA death' flag day which we brand 2.8 or 3.0 or something so everyone knows that we are having a single clean 'throw out' of old junk.

It would also be a chance to throw out a whole pile of other "legacy" things like ipt_tos, bzImage symlinks, ancient SCTP options, ancient lmsensor support, V4L1 only driver stuff etc.

Cox's list sparked immediate protest about some of the items on it, but the general idea was well received. There are certainly sizable portions of the kernel, especially for older hardware, that are unmaintained and probably completely broken. No one seems to have any interest in carrying that stuff forward, but, without a concerted effort to identify and remove crufty code, it is likely to remain. Cox has suggested one way to make that happen; discussion at the kernel summit might refine his idea or come up with something entirely different.

Part of the reason that unmaintained code tends to hang around is that the kernel hackers have gotten much better at fixing all affected code when they make an API change. While that is definitely a change for the better, it does have the effect of sometimes hiding code that might be ready to be removed. In earlier times, dead code would have become unbuildable after an API change or two, leading either to a maintainer stepping up or to the code being removed.

The need to make a "major" kernel release, with a corresponding change to the major or minor release number, is the biggest question that the kernel hackers seem to have. Greg Kroah-Hartman asks:

Can't we do all of the above today in our current model? Or is it just a marketing thing to bump to 3.0? If so, should we just pick a release and say, "here, 2.6.31 is the last 2.6 kernel and for the next 3 months we are just going to rip things out and create 3.0"?

There is an element of "marketing" to Cox's proposal. Publicizing a major release, along with the intention to get rid of "legacy" code, will allow interested parties to step up to maintain pieces that they do not want to see removed. As Cox puts it:

I thought it might be useful to actually draw some definite lines so we can actually get around to throwing stuff out rather than letting it rot forever and also if its well telegraphed both give people a chance to fix where the line goes and - yes - as a marketing thing as much as anything else to define the line in a way that non-techies, press etc get.

Plus it appeals to my sense of the open source way of doing things differently - a major release about getting rid of old junk not about adding more new wackiness people don't need 8)

Arjan van de Ven thinks that gathering the list of things to be removed is a good exercise:

I like the idea of at least discussing this, and for a bunch of people making a long list of what would go. Based on that whole list it becomes a value discussion/decision; is there enough of this to make it worth doing.

Once the list has been gathered and discussed, van de Ven notes, it may well be that the removal can be done under the current development model, without a major release. "But let's at least do the exercise. It's worth validating the model we have once in a while ;)"

This may not be the only discussion of kernel version numbers that takes place at the summit. Back in July, Linus Torvalds mentioned a bikeshed painting project that he planned to bring up. It seems that Torvalds is less than completely happy with how large the minor release number of the kernel is; he would like to see numbers that have more meaning, possibly date-based:

The only thing I do know is that I agree that "big meaningless numbers" are bad. "26" is already pretty big. As you point out, the 2.4.x series has much bigger numbers yet.

And yes, something like "2008" is obviously numerically bigger, but has a direct meaning and as such is possibly better than something arbitrary and non-descriptive like "26".

Version numbers are not important, per se, but having a consistent, well-understood numbering scheme certainly is. The current system has been in place for four years or so without much need to modify it. That may remain the case, but with ideas about altering it coming from multiple directions, changes could be afoot.

For the kernel hackers themselves, there is little benefit—except, perhaps, preventing the annoyance of ever-increasing numbers—but version numbering does provide a mechanism to communicate with the "outside world". Users have come to expect the occasional major release, with some sizable and visible chunk of changes, but the current incremental kernel releases do not provide that numerically; instead, big changes come with nearly every kernel release. There may be value in raising the visibility of one particular release, either as a means to clean up the kernel or to move to a different versioning scheme—perhaps both at once.

Comments (31 posted)

High- (but not too high-) resolution timeouts

By Jonathan Corbet
September 2, 2008
Linux provides a number of system calls that allow an application to wait for file descriptors to become ready for I/O; they include select(), pselect(), poll(), ppoll(), and epoll_wait(). Each of these interfaces allows the specification of a timeout that puts an upper bound on how long the application will be blocked. In typical fashion, the form of that timeout varies greatly: poll() and epoll_wait() take an integer number of milliseconds; select() takes a struct timeval with microsecond resolution; and ppoll() and pselect() take a struct timespec with nanosecond resolution.

They are all the same, though, in that they convert this timeout value to jiffies, giving an actual resolution of somewhere between one and ten milliseconds, depending on how the kernel was configured. A programmer might give a pselect() call a 10-nanosecond timeout, but the call may not return until 10 milliseconds later, even in the absence of contention for the CPU. An error of six orders of magnitude seems like a bit much, especially given that contemporary hardware can easily support much more accurate timing.
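
The effect is easy to demonstrate from user space; this small test asks pselect() for a 10-nanosecond timeout and reports how long the call actually took (expect an answer in the millisecond range on current kernels):

    #include <stdio.h>
    #include <sys/select.h>
    #include <time.h>

    int main(void)
    {
        struct timespec timeout = { 0, 10 };    /* ask for 10ns */
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        pselect(0, NULL, NULL, NULL, &timeout, NULL);  /* no fds: pure sleep */
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("slept for %ld ns\n",
               (end.tv_sec - start.tv_sec) * 1000000000L +
               (end.tv_nsec - start.tv_nsec));
        return 0;
    }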

Arjan van de Ven recently surfaced with a patch set aimed at addressing this problem. The core idea is simple: have the code implementing poll() and select() use high-resolution timers instead of converting the timeout period to low-resolution jiffies. The implementation relied on a new function to provide the timeouts:

    long schedule_hrtimeout(struct timespec *time, int mode);

Here, time is the timeout period, as interpreted by mode (which is either HRTIMER_MODE_ABS or HRTIMER_MODE_REL).
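
In-kernel callers would use it much like schedule_timeout(); here is a sketch of a relative, interruptible 100-microsecond wait, following the prototype above:

    static int wait_100us(void)
    {
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 };

        set_current_state(TASK_INTERRUPTIBLE);
        /* returns when the timeout expires or the task is woken early */
        return schedule_hrtimeout(&ts, HRTIMER_MODE_REL);
    }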

High-resolution timeouts are a nice feature, but one can immediately imagine a problem: higher-resolution timeouts are less likely to coincide with other events which wake up the processor. The result will be more wakeups and greater power consumption. As it happens, there are few developers who are more aware of this fact than Arjan, who has done quite a bit of work aimed at keeping processors asleep as much as possible. His solution to this problem was to use high-resolution timeouts only if the timeout period is less than one second. For longer timeout periods, the old, jiffy-based mechanism was used as before.

Linus didn't like that solution, calling it "ugly." His preference, instead, was to have schedule_hrtimeout() apply an appropriate amount of fuzz to all timeout values; the longer the timeout, the less resolution would be supplied. Alan Cox suggested that a better mechanism would be for the caller to supply the required accuracy with the timeout value. The problem with that idea, as Linus pointed out, is that the current system call interfaces provide no way for an application to supply the accuracy value. One could create more poll()-like system calls - as if there weren't enough of them already - with an accuracy parameter, but that looks like a lot of trouble to create a non-standard interface which few programmers would bother to use.

A different solution came in the form of Arjan's range-capable timer patch set. This patch extends hrtimers to accept two timeout values, called the "soft" and "hard" timeouts. The soft value - the shorter of the two - is the first time at which the timeout can expire; the kernel will make its best effort to ensure that it does not expire after the hard period has elapsed. In between the two, the kernel is free to expire the timer at any convenient time.

It's a useful feature, but it comes at the cost of some significant API changes. To begin with, the expires field of struct hrtimer goes away. Rather than manipulate expires directly, kernel code must now use one of the new accessor functions:

    void hrtimer_set_expires(struct hrtimer *timer, ktime_t time);
    void hrtimer_set_expires_tv64(struct hrtimer *timer, s64 tv64);
    void hrtimer_add_expires(struct hrtimer *timer, ktime_t time);
    void hrtimer_add_expires_ns(struct hrtimer *timer, unsigned long ns);
    ktime_t hrtimer_get_expires(const struct hrtimer *timer);
    s64 hrtimer_get_expires_tv64(const struct hrtimer *timer);
    s64 hrtimer_get_expires_ns(const struct hrtimer *timer);
    ktime_t hrtimer_expires_remaining(const struct hrtimer *timer);

Once that's done, the range capability is added to hrtimers. By default, the soft and hard expiration times are the same; code which wishes to set them independently can use the new functions:

    void hrtimer_set_expires_range(struct hrtimer *timer, ktime_t time, 
                                   ktime_t delta);
    void hrtimer_set_expires_range_ns(struct hrtimer *timer, ktime_t time,
                                      unsigned long delta);
    ktime_t hrtimer_get_softexpires(const struct hrtimer *timer);
    s64 hrtimer_get_softexpires_tv64(const struct hrtimer *timer);

In the new "set" functions, the specified time is the soft timeout, while time+delta provides the hard timeout value. There is also another form of schedule_timeout():

    int schedule_hrtimeout_range(ktime_t *expires, unsigned long delta,
				 const enum hrtimer_mode mode);
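
Code that can tolerate some imprecision might then arm a timer like this; a sketch that lets the expiration land anywhere in a 500µs window starting 1ms from now (assuming a start helper along the lines of hrtimer_start_expires()):

    static enum hrtimer_restart my_timer_fn(struct hrtimer *t);
    static struct hrtimer my_timer;

    static void arm_fuzzy_timer(void)
    {
        hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        my_timer.function = my_timer_fn;
        /* soft expiry at 1ms, hard expiry at 1.5ms */
        hrtimer_set_expires_range_ns(&my_timer, ktime_set(0, 1000000), 500000);
        hrtimer_start_expires(&my_timer, HRTIMER_MODE_REL);
    }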

With this infrastructure in place, poll() and friends can be given approximate timeouts; the only remaining question is just how wide the range of times should be. In Arjan's patch, that range comes from two different sources. The first is a new field in the task structure called timer_slack_ns; as one might expect, it specifies the maximum amount of timer slack (expiration error), in nanoseconds, that the task is willing to accept. This value can be adjusted via the prctl() system call. The default value is set to 50 microseconds - approximate to a certain degree, but still far more accurate than the timeouts in current kernels.
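
From user space, adjusting the per-task slack would look something like this sketch, assuming the operations are exposed as PR_SET_TIMER_SLACK and PR_GET_TIMER_SLACK:

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        /* ask for 1ms of slack on this task's timers */
        if (prctl(PR_SET_TIMER_SLACK, 1000000UL, 0, 0, 0))
            perror("PR_SET_TIMER_SLACK");
        printf("timer slack: %d ns\n", prctl(PR_GET_TIMER_SLACK, 0, 0, 0, 0));
        return 0;
    }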

Beyond that, though, there is a heuristic function which provides an accuracy value depending on the requested timeout period. In the case of especially long timeouts - more than ten seconds - the allowed error is set to 100ms; as the timeouts get shorter, the amount of acceptable error drops, down to a minimum of 10ns for very brief timeouts. Normally, poll() and company will use the value returned by the heuristic, with the exception that the error will never be allowed to exceed the value found in timer_slack_ns.

The end result is the provision of more accurate timeouts on the polling functions while, simultaneously, preserving the ability to combine timeouts with other system events.

Comments (14 posted)

SCHED_FIFO and realtime throttling

By Jonathan Corbet
September 1, 2008
The SCHED_FIFO scheduling class is a longstanding, POSIX-specified realtime feature. Processes in this class are given the CPU for as long as they want it, subject only to the needs of higher-priority realtime processes. If there are two SCHED_FIFO processes with the same priority contending for the CPU, the process which is currently running will continue to do so until it decides to give the processor up. SCHED_FIFO is thus useful for realtime applications where one wants to know, with great assurance, that the highest-priority process on the system will have full access to the processor for as long as it needs it.
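
Putting a process into this class takes a single (privileged) call; a minimal example:

    #include <stdio.h>
    #include <sched.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };

        /* requires root (or CAP_SYS_NICE); 0 means the calling process */
        if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
            perror("sched_setscheduler");
            return 1;
        }
        /* from here on, this process runs until it blocks or yields,
           unless a higher-priority realtime task preempts it */
        return 0;
    }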

One of the many features merged back in the 2.6.25 cycle was realtime group scheduling. As a way of balancing CPU usage between competing groups of processes, each of which can be running realtime tasks, the group scheduler introduced the concept of "realtime bandwidth," or rt_bandwidth. This bandwidth consists of a pair of values: a CPU time accounting period, and the amount of CPU time that the group is allowed to use - at realtime priority - during that period. Once a SCHED_FIFO task causes a group to exceed its rt_bandwidth, it will be pushed out of the processor whether it wants to go or not.

This feature is required if one wants to allow multiple groups to split a system's realtime processing power. But it also turns out to have its uses in the default situation, where all processes on the system are contained within a single, default group. Kernels shipped since 2.6.25 have set the rt_bandwidth value for the default group to 0.95 seconds out of every 1.0-second period. In other words, the group scheduler is configured, by default, to reserve 5% of the CPU for non-SCHED_FIFO tasks.
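
That reservation is visible (and tunable) from user space; a quick check, assuming the group scheduler's knobs appear in the usual sysctl location:

    #include <stdio.h>

    static long read_knob(const char *path)
    {
        FILE *f = fopen(path, "r");
        long val = -1;

        if (f) {
            if (fscanf(f, "%ld", &val) != 1)
                val = -1;
            fclose(f);
        }
        return val;
    }

    int main(void)
    {
        /* expected defaults on 2.6.25+: period 1000000, runtime 950000 */
        printf("rt period:  %ld us\n",
               read_knob("/proc/sys/kernel/sched_rt_period_us"));
        printf("rt runtime: %ld us\n",
               read_knob("/proc/sys/kernel/sched_rt_runtime_us"));
        return 0;
    }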

It seems that nobody really noticed this feature until mid-August, when Peter Zijlstra posted a patch which set the default value to "unlimited." At that point it became clear that some developers have a different idea about how this kind of policy should be set than others do.

Ingo Molnar disagreed with the patch, saying:

The thing is, i got far more bugreports about locked up RT tasks where the lockup was unintentional, than real bugreports about anyone _intending_ for the whole box to come to a grinding halt because a high-prio RT tasks is monopolizing the CPU.

Ingo's suggestion was to raise the limit to ten seconds of CPU time. As he (and others) pointed out: any SCHED_FIFO application which needs to monopolize the CPU for that long has serious problems and needs to be fixed.

There are real problems associated with letting a SCHED_FIFO process run indefinitely. Should that process never get around to relinquishing the CPU, the system will simply hang forevermore; there is no possibility of the administrator slipping in with a kill command. This process will also block important things like kernel threads; even if it releases the processor after ten seconds, it will have seriously degraded the operation of the rest of the system. Even on a multiprocessor system, there will typically be processes bound to the CPU where the SCHED_FIFO process is running; there will be no way to recover those processes without breaking their CPU affinity, which is not a step anybody wants to take.

So, it is argued, the rt_bandwidth limit is an important safety breaker. With it in place, even a runaway SCHED_FIFO process cannot prevent the administrator from (eventually) regaining control of the system and figuring out what is going on. In exchange for this safety, the feature robs SCHED_FIFO tasks of only a small amount of CPU time - the equivalent of running the application on a slightly slower processor.

Those opposed to the default rt_bandwidth limit cite two main points: it is a user-space API change (which also breaks POSIX compliance), and it represents an imposition of policy by the kernel. On the first point, Nick Piggin worries that this change could lead to broken applications:

It's not common sense to change this. It would be perfectly valid to engineer a realtime process that uses a peak of say 90% of the CPU with a 10% margin for safety and other services. Now they only have 5%.

Or a realtime app could definitely use the CPU adaptively up to 100% but still unable to tolerate an unexpected preemption.

What could make the problem worse is that the throttle might not cut in during testing; it could, instead, wait until something unexpected comes up in a production system. Needless to say, that is a prospect which can prove scary for people who create and deploy this kind of system.

The "policy in the kernel" argument was mostly shot down by Linus, who pointed out that there's lots of policy in the kernel, especially when it comes to the default settings of tunable parameters. He says:

And the default policy should generally be the one that makes sense for most people. Quite frankly, if it's an issue where all normal distros would basically be expected to set a value, then that value should _be_ the default policy, and none of the normal distros should ever need to worry.

Linus carefully avoided taking a position on which setting makes sense for the most people here. One could certainly argue that making systems resistant to being taken over by runaway realtime processes is the more sensible setting, especially considering that there is a certain amount of interest in running scary applications like PulseAudio with realtime priority. On the other hand, one can also make the case that conforming to the standard (and expected) SCHED_FIFO semantics is the only option which makes sense at all.

There has been some talk of creating a new realtime scheduling class with throttling being explicitly part of its semantics; this class could, with a suitably low limit, even be made available to unprivileged processes. Meanwhile, as of this writing, the 0.95-second limit - the one option that nobody seems to like - remains unchanged. It will almost certainly be raised; how much is something we'll have to wait to see.

Comments (25 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

News and Editorials

Spinning Fedora

By Rebecca Sobol
September 3, 2008
There was a discussion recently on the fedora-advisory-board list about when a derivative is an official spin and when it is merely Fedora-based. It started out innocently enough with a request for trademark approval for an Appliance Operating Spin.

Right away Bill Nottingham noted that SELinux is disabled in this spin and wondered why. The answer was simple enough: there are some current issues with SELinux and the tool used to build the spin.

It was a simple enough start to what turned into a somewhat lengthy discussion of what makes Fedora Fedora. This is not the first time that the Fedora Advisory Board has tackled this issue, but it seems that not all board members are in complete agreement on the difference between an official Fedora spin and something which is merely Fedora-based.

Jesse Keating recalled a conversation that took place during the merge of core and extras about whether or not there should be a "Fedora Standard Base".

That is, a basic set of things you must have in your "spin" in order to call it Fedora. These include things like rpm, yum, and SELinux (at least in my opinion), but we never really coded this up nor hashed out what should be in the FSB, or if FSB was even a good name for the concept.

A draft version of trademark guidelines is available and is awaiting comments and approval by the Fedora Board. The guidelines in this document do not make any packages mandatory for trademark approval. They do state that official spins will include only those packages that are available in the official Fedora repository. Pretty much all spins, with the notable exception of the Everything Spin, will contain a subset of all the packages in the repository and are left to choose which packages they need or don't need.

Axel Thimm posted that official spins should have high standards and should improve the brand name.

Currently I cannot imagine Fedora w/o rpm or yum, but I can imagine it w/o selinux if I think about very small footprints, nano-Fedoras and all the recent suggestion. I wouldn't mind my phone to advertise that it runs on Fedora, even if selinux was turned off (but the high standard of security is ensured in another way).

Since we can't envision what nice spins/derivatives people will come up with (I first heard of the appliance spin), we should not statically enforce any requirements, but instead have the board be the checking instance like it is now.

Of course, it's not just about the trademarks. The discussion also brought up the kickstart pool and whether unofficial spins should be included in the pool, or even whether all official spins should be included. So there could be trademarked Fedora spins that aren't allowed in the kickstart pool, perhaps because of their choice of packages. Or there could be "Xora", a Fedora-based distribution, that would be in the kickstart pool and available in the Fedora Hosted service.

Jeff Spaleta looked at how the kickstart pool might be structured.

Under the current workflow, there are essentially 3 different technical levels.
1) Spin SIG best practices to get into kickstart pool
2) Technical issues which are associated with trademark approval
3) Technical requirements for RelEng for 'release' of a spin.

These can be layered technical hurdles, which the kickstart pool could be structured to mimic.

The bottom line, in this instance, seems to be that AOS (Appliance Operating Spin) will likely get trademark approval, since it only contains official Fedora packages. However, unless they get SELinux running on it, either with permissive mode or with a custom policy, it won't get into the kickstart pool. Or perhaps it will be relegated to a second-class pool.

It may seem odd that an appliance needs SELinux, but as Jeroen van Meeuwen says: "On the other hand, of course we do have an agenda to push and that agenda includes SELinux as being one of the core features of the entire Fedora line of products (including the few enterprise linux spin-offs). It's one of the main features and we would rather see appliances built upon an AOS that has SELinux enforcing by default while it can still be disabled."

Comments (none posted)

New Releases

PLD Live 2.0 beta3

PLD Live 2.0 beta3 is out, along with a new Anaconda installer on the Live CD.

Full Story (comments: none)

Webconverger update

Live CD images of Webconverger 3.3 beta are available for testing. "Announcing Webconverger 3.3 beta Live CD with a new feature to install to the hard drive. This is a much anticipated feature where users can effectively setup a PC as a public Web kiosk in a matter of minutes."

Full Story (comments: 3)

Distribution News

Debian GNU/Linux

Debian successor to Lenny has been named

The release of Debian that follows Lenny has been named. In keeping with the Toy Story theme, the codename will be Squeeze (a "three-eyed space alien"). The name was announced as part of a release update email. "We are happy to publish yet another issue of our highly successful motivational status updates. This month's issue contains, as reward for your continued interest, the name for lenny's successor."

Comments (29 posted)

German and French reach 100% for po-debconf in unstable

The Debian project has announced 100% completeness for po-debconf translations in unstable (not counting Debian Installer packages, handled in a specific way). "The i18n work force would like to thank all translators who made this happen as well as all package maintainers who had a very collaborative attitude wrt localization efforts during the entire etch-lenny release cycle."

Full Story (comments: none)

Fedora

The proposed Fedora key-migration plan

For those who wonder how the Fedora project plans to migrate its users to a new set of package signing keys, a proposed plan has been posted. It involves an update to the fedora-release package (signed with the old key) which swaps in a new key and repository location, and a slow movement of older packages to the new repository. It should work, as long as one is sure that the old key can be trusted for a little longer.

Comments (7 posted)

Fedora Board meeting minutes (2008-AUG-26)

Terse minutes from the August 26 Fedora board meeting have been posted; they offer some hints at how the "infrastructure issues" discussion went. One-line summaries include "Ongoing tension between Fedora being able to act independently and Red Hat being liable for Fedora's actions" and "Don't want to get into a situation where every Fedora decision or announcement has to be vetted through Red Hat executive levels."

Full Story (comments: 18)

Slackware Linux

KDE 3.5.10 in Slackware

We previously reported that KDE 4.1 is available in Slackware current. Now the KDE 3.5 branch has been upgraded to 3.5.10.

Full Story (comments: none)

SUSE Linux and openSUSE

SUSE HackWeek

Some articles and pictures from the SUSE HackWeek can be found on the openSUSE Lizards blog site.

Comments (none posted)

Ubuntu family

Call for testing of 2.6.27 kernel in Intrepid

A new 2.6.27 kernel became available for Ubuntu's Intrepid Ibex, along with a call for testing. "We'd like to ask everyone to really give it a good kicking around to ensure we aren't introducing major regressions from 2.6.26."

Full Story (comments: none)

Feature Freeze in place; Alpha 5 freeze ahead

The Feature Freeze is now in effect for Intrepid. From now until release, the focus is on polishing and bug fixing. "Our next testing milestone, Intrepid Alpha 5, is scheduled for next Thursday, September 4."

Full Story (comments: none)

New Distributions

New Tin Hat release 20080830

Tin Hat is derived from hardened Gentoo. The project aims to provide a very secure, stable and fast Desktop environment that lives purely in RAM. "This release includes bugfixes/updates to keep Tin Hat in sync with Gentoo, including updating the hardened kernel to the latest stable version: 2.6.25-hardened-r4."

Full Story (comments: none)

Distribution Newsletters

Arch Linux Newsletter

The Arch Linux Newsletter for September covers new versions of Eclipse and Pacman, Arch at FrOSCon 2008, Arch in the 10 Best-designed Linux Distribution Websites, a Review: FaunOS 0.5.4, a featured interview with Allan McRae, Roman Kyrylych & Grigorios Bouzakis, Talk About Arch Linux Bugs, and much more.

Comments (none posted)

DistroWatch Weekly, Issue 268

The DistroWatch Weekly for September 1, 2008 is out. "The world of Linux distribution has traditionally associated the arrival of September with the start of a grand testing period as all major projects are about to finalise their feature lists, freeze their development trees and begin fixing any remaining bugs. So what can we expect when the final products eventually hit the download mirrors? We'll take a look at the feature lists of all major distributions to see what's coming up in the next few months. In the news section, Debian announces the code name of its post-Lenny release, Novell launches SUSE Studio - a web-based tool for building custom distributions, and Linpus Technologies releases an installable Linpus Lite live CD for netbooks. Also among the interesting web links, a user reports how Xubuntu has managed to turn an OLPC into a perfect travelling companion, while the developers of FreeNAS tell us why their FreeBSD-based distribution is an excellent way of storing important files on a remote machine."

Comments (none posted)

Echo Monthly News, Issue 1

The development team for Fedora's echo-icon-theme has released its first Echo Monthly News. Inside you'll find New Icons, "Huge" icons - 256x256, One Canvas Work-Flow, Automating the secondary jobs, Echo for Fedora 10?, Future plans, and a Request for feedback.

Full Story (comments: none)

Fedora Weekly News 141

The Fedora Weekly News for August 30, 2008 covers the Fedora Unity release of Fedora 8 Re-Spin, Planet Fedora articles on the Education Spin, how to get an OLPC laptop, Tech Tidbits, Fedora at events, discussions on Resurrecting Multi-Key Signatures in RPM and Intrusion Recovery Slow and Steady, and much more.

Full Story (comments: none)

Gentoo Monthly Newsletter

The Gentoo Monthly Newsletter for August 2008 covers PHP4 removed from the Portage tree, Trustees Meeting, Interview: Google Summer of Code Student Nandeep Mali, Tigase: A Gentoo-based LiveCD, Tin Hat: A Hardened Gentoo-based LiveCD, and much more.

Comments (none posted)

OpenSUSE Weekly News/36

This edition of the OpenSUSE Weekly News looks at Hack Week III, openSUSE Election Committee Founded, openSUSE at Utah Open Source Conference, T&T: Accelerate your build speed with Icecream, linux.com: A video tour of openSUSE 11 (with KDE 4 desktop), and several other topics.

Comments (none posted)

Ubuntu Weekly Newsletter #106

The Ubuntu Weekly Newsletter for August 30, 2008 covers: Second Ubuntu Developers Week, Intrepid feature freeze - Alpha 5 freeze ahead, Call for testing of 2.6.27 kernel(Intrepid), Xfce 4.6-beta now available for Intrepid users, Asia Oceania board, Using identi.ca for Ubuntu information, Ubucon El Salvador, This week in Launchpad's web API, Full Circle Magazine #16, Ubuntu Christian 4.0, Post your Xfce news on reddit, Server team meeting summary, and much more.

Full Story (comments: none)

Newsletters and articles of interest

Build an embedded Linux distro from scratch (developerWorks)

IBM developerWorks has a tutorial (registration required) on building a custom embedded Linux distribution. "This tutorial shows you how to install Linux on a target system. Not a prebuilt Linux distribution, but your own, built from scratch. While the details of the procedure necessarily vary from one target to another, the same general principles apply. The result of this tutorial (if you have a suitable target) is a functional Linux system you can get a shell prompt on."

Comments (none posted)

Triggering Commands On File/Directory Changes With Incron (HowtoForge)

HowtoForge covers the use of incron on a Debian etch (stable) system. "This guide shows how you can install and use incron on a Debian Etch system. Incron is similar to cron, but instead of running commands based on time, it can trigger commands when file or directory events occur (e.g. a file modification, changes of permissions, etc.)."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Cinelerra 4 arrives

By Forrest Cook
September 3, 2008

Cinelerra is a compositing video and audio editor that is being developed by Heroine Virtual LTD's Adam Williams when he isn't playing with autonomous miniature helicopters. Cinelerra is derived from the now-discontinued Broadcast 2000 project. The project's self-description reads:

Unleash the 50,000 watt flamethrower of content creation in your UNIX box. Cinelerra does primarily 3 things: capturing, compositing, and editing audio and video with sample level accuracy. It's a movie studio in a box. If you want the same kind of editing suite that the big boys use, on an efficient UNIX operating system, it's time for Cinelerra. Cinelerra is not community approved and there is no support from the developer. Donations to community websites do not fund Cinelerra development.

The Wikipedia entry for Cinelerra summarizes the project's window set:

The user is presented with four screens: 1. The timeline, which gives the user a time-based view of all video and audio tracks in the project, as well as keyframe data for e.g. camera movement, effects, or opacity; 2. the viewer, which gives the user a method of "scrubbing" through footage; 3. the resource window, which presents the user with a view of all audio and video resources in the project, as well as available audio and video effects and transitions; and 4. the compositor, which presents the user with a view of the final project as it would look when rendered. The compositor is interactive in that it allows the user to adjust the positions of video objects; it also updates in response to user input.

The main Cinelerra page lists the software's many features. Version 4.0 of Cinelerra was released on August 8, 2008; the change log details the most recent feature additions. Older project history is available in the news document. One big change for this release is the availability of pre-compiled binaries for 32 and 64 bit versions of Ubuntu 8.04. This can be a real time saver due to the complexity of the build process, and will make the software accessible to a wider variety of users.

Cinelerra works best with specific hardware configurations. An NVidia graphics card is recommended: "Cinelerra supports OpenGL shaders on NVidia graphics cards. The video crunching power that was once exclusively the domain of SGI minicomputers is now yours. NVidia users can run many effects in realtime instead of rendering them. OpenGL also opens up new video resolutions, up to 4096x4096 on high end cards." And a 64 bit Linux platform is a good idea: "Since it's Linux, it's been 64 bit compliant for years. In fact, Cinelerra is only recommended for 64 bit mode. The reason is the large amount of virtual memory required for page flipping and floating point images often exceeds the limit of 32 bits."

Your author has used Cinelerra in the past for audio editing; see this article for details. Cinelerra has one capability that is hard to find in other Linux audio editing software: the ability to split (render) a huge .wav file into a group of smaller .wav files across multiple position labels, all in one operation. This feature is useful for processing long audio recordings such as digitized vinyl album sides and copies of digital audio (DAT) tapes. This was the first operation that Cinelerra 4 was tried on. After some initial crashes, your author heeded a startup warning message about an insufficient shmmax value. Changing shmmax is simply a matter of running echo 0x7fffffff > /proc/sys/kernel/shmmax as root before starting Cinelerra. After doing that, your author was unable to make the software crash while processing audio.

Lacking a high resolution video camera, your author was able to use his Nikon Coolpix S10 VR digital camera to produce low resolution .mov format movies with mono audio tracks. Cinelerra was able to display videos from this camera, specifically movies of thunderstorms. Individual frames containing lightning strikes were located by single-stepping through interesting sections of the movie; the still frames were grabbed from the screen using an external application (xv). The single-step capability allowed the life cycle of a lightning bolt to be observed. This is a much less expensive way to procure photographs of lightning than using lots of 35mm film and specialized hardware.

Attempts to do actual video editing were somewhat less successful than simple playback. Creating a fade-in at the beginning of a short video clip worked, but several attempts to add a second video track crashed Cinelerra, as did saving a modified track. This may be related to the camera's data, which has confused other video players (mplayer) in the past, or to the lack of a professional-quality video device. The computer was running a (not recommended) 32-bit version of Ubuntu and an older Radeon video card. As with high-end audio processing, it is probably best to put together a system with the specific hardware and operating system that is recommended for the application.

While Cinelerra is more of a professional video tool than a generic desktop application, it nonetheless has some very useful capabilities outside of its primary application space. It is the most full-featured video playback application that your author has experimented with, and it functions nicely as an audio processing tool.

Comments (6 posted)

System Applications

Audio Projects

Rivendell 1.0.1 announced

Version 1.0.1 of the Rivendell radio automation system has been announced. "On behalf of the entire Rivendell development team, I'm very pleased to announce the release of the first full production release of Rivendell. Rivendell is a full-featured radio automation system that is targeted for use in professional broadcast environments. It has all of the features one would expect in a modern radio automation system, including fully interactive voicetracking, podcast origination and support for a huge array of third-party broadcast hardware and software."

Full Story (comments: none)

Clusters and Grids

Release of rsplib 2.5.0

Stable version 2.5.0 of rsplib has been announced. "RSPLIB provides a light-weight environment for server pooling. If you are looking for a simple-to-use workload distribution system without the overhead and configuration effort of GRID computing, this package is what you are looking for! RSPLIB is the Open Source implementation (GPLv3) of the IETF's upcoming standard for Reliable Server Pooling (RSerPool)."

Full Story (comments: none)

Database Software

MySQL Administration Tools: 0.3.1 released (SourceForge)

Version 0.3.1 of MySQL Administration Tools has been announced. "The MyCAT project is an open-source tool-set for managing MySQL/Linux servers, currently composed of tools that: monitor replication, monitor and rotate binary logs, and allow remote shell access to arbitrary groups of servers. Release 0.3.1 fixes a few critical bugs in both rep_mon and binlog_mon. In particular, binlog_mon could fail to delete logs even when disk space reaches 100%. Anyone using 0.3.0 should update right away."

Comments (none posted)

PostgreSQL Weekly News

The August 31, 2008 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

SQLite release 3.6.2 announced

Version 3.6.2 of the SQLite DBMS has been announced. "SQLite version 3.6.2 contains rewrites of the page-cache subsystem and the procedures for matching identifiers to table columns in SQL statements. These changes are designed to better modularize the code and make it more maintainable and reliable moving forward. Nearly 5000 non-comment lines of core code (about 11.3%) have changed from the previous release. Nevertheless, there should be no application-visible changes, other than bug fixes."

Comments (none posted)

Device Drivers

VIA releases open source Xorg driver

Harald Welte reports in a blog post that VIA has released an open source Xorg driver for their integrated graphics chips. "I am very happy to see this! It's one more step that VIA has been working on to improve and show their support for Free Software and Linux. Please notice that this driver (as opposed to VIA's proprietary binary-only Xorg driver) has no support for 3D, hardware video codec or TV encoder support. Nevertheless, it is a big step ahead."

Comments (46 posted)

Security

announcing ClamAV 0.94

Version 0.94 of ClamAV has been announced; it includes a number of new capabilities and other improvements.

Full Story (comments: none)

Virtualization Software

oVirt 0.92-1 released

Version 0.92-1 of oVirt, a virtual machine management system, has been announced. A number of new capabilities have been introduced.

Full Story (comments: none)

Web Site Development

Apache Lenya 2.0.2 released

Version 2.0.2 of Apache Lenya has been announced; it includes new features and bug fixes. "Apache Lenya is an Open Source Java/XML Content Management System and comes with revision control, site management, scheduling, search, WYSIWYG editors, and workflow."

Full Story (comments: none)

Django 1.0 beta 2 released

Version 1.0 beta 2 of the Django web platform has been announced. "Please keep in mind, though, that this release is not meant for production use, and is intended primarily for developers who are interested in checking out the new features in 1.0 and helping to identify and resolve bugs prior to the final release. The 1.0 alpha and beta releases will not receive long-term support and will not be updated with security fixes, since their main purpose is to serve as a stepping-stone on the path to the final Django 1.0, due to be released on September 2, 2008."

Comments (1 posted)

Senayan 3 Stable 5 released (SourceForge)

Senayan 3 Stable 5 has been announced. "SENAYAN Library Automation is web based open source Library Automation System, focusing on simplicity, ease of usage and complete modules for automating library task such as cataloging, circulation, membership, stock take. Senayan 3 Stable 5 is release with many improvements such as improved template system, new template for OPAC module, AJAX drop down search suggestion in author and keyword search, bugs fixed and many more."

Comments (none posted)

Desktop Applications

Audio Applications

Jokosher 0.10 released

Version 0.10 of the Jokosher audio editor has been announced. "Jokosher is a simple yet powerful multi-track studio. With it you can create and record music, podcasts and more, all from an integrated simple environment. "

Comments (none posted)

Business Applications

Openbravo POS: 2.20 released (SourceForge)

Version 2.20 of Openbravo POS has been announced; it includes many new capabilities and bug fixes. "Openbravo POS is a point of sale application designed for touch screens, supports ESC/POS ticket printers, customer displays and barcode scanners. It is multiuser providing product entry forms, reports and charts."

Comments (none posted)

Desktop Environments

GNOME Software Announcements

New GNOME software releases for this week can be found at gnomefiles.org.

Comments (none posted)

KDE 4.1.1 released

KDE 4.1.1 has been released. This is primarily a bug-fix release; see the full changelog for all the details.

Comments (4 posted)

Akademy Redux: Release Team Members Propose New Development Process (KDE.News)

KDE.News covers some changes that are planned for the KDE development process. "At Akademy 2008, KDE Release Team members Sebastian Kügler and Dirk Müller discussed the future of KDE's development process. Describing the challenges KDE faces and proposing some solutions, they spawned a lot of discussion. Read on for a summary of what has been said and done around this topic at Akademy. Our current development model has served us for over 10 years now. We did a transition to Subversion some years ago, and we now use CMake, but basically we still work like we did a long time ago: only some tools have changed slightly. But times are changing."

Comments (6 posted)

KDE Commit-Digest (KDE.News)

The August 17, 2008 edition of the KDE Commit-Digest has been announced. The content summary says: "New "Browser History", "Konqueror Sessions", "Konsole Sessions", and "Kate Sessions" KRunners in Plasma. Proof-of-concept of simple uploading in Plasmagik. A MythTV data engine for retrieving data about a MythTV installation (upcoming recordings, etc), and the start of a RSIBreak engine. An applet for displaying new message information from KMail, Kopete, etc for use with the Plasmoids-on-Screensaver project. Support for panel form factors, and a configuration dialog in the Lancelot alternative menu. Various improvements in the "Desktop Grid" KWin-Composite effect. More bugfixes for Kicker in KDE 3.5. A backtrace browser plugin for Kate..."

Comments (none posted)

KDE Software Announcements

New KDE software releases for this week can be found at kde-apps.org.

Comments (none posted)

Xorg Software Announcements

Information about this week's new Xorg software releases can be found on the X.Org Foundation wiki.

Comments (none posted)

Desktop Publishing

LyX version 1.6.0 rc2 is released

Version 1.6.0 rc2 of LyX, a GUI front end to the TeX typesetter, has been announced. "LyX 1.6.0 will be the culmination of 12 months of hard work since the release of the LyX 1.5 series. We sincerely hope you will enjoy the result. As usual with a major release, a lot of work that is not directly visible has taken place. The core of LyX has seen more cleanups and some of the new features are the direct results of this work."

Full Story (comments: none)

Multimedia

Elisa Media Center 0.5.8 released

Version 0.5.8 of Elisa Media Center has been announced. "This week the focus was on the support of more remote controls on Windows and on performance improvements. As usual, numerous bug were also fixed."

Full Story (comments: none)

Music Applications

Virtual MIDI Piano Keyboard 0.1.0 announced

Version 0.1.0 of Virtual MIDI Piano Keyboard has been announced. "This is the first public release of Virtual MIDI Piano Keyboard. It is a MIDI event generator and receiver. It doesn't produce any sound by itself, but can be used to drive a MIDI synthesizer (either hardware or software, internal or external). You can use the computer's keyboard to play MIDI notes, and also the mouse. You can use the Virtual MIDI Piano Keyboard to display the played MIDI notes from another instrument or MIDI file player."

Full Story (comments: none)

News Readers

SABnzbdPlus: SABnzbd-0.4.3 is released (SourceForge)

Version 0.4.3 of SABnzbdPlus has been announced. The project description states: "Binary Newsgrabber written in Python, server-oriented using a web-interface. The active successor of the abandoned SABnzbd project."

Comments (none posted)

Office Suites

KOffice Releases 10th Alpha of KOffice 2.0 (KDE.News)

KDE.News reports on the release of KOffice 2.0 Alpha 10. "This Alpha release contains all the work done by the Google Summer of Code students. Remember, these are: a bristle-based brush engine for Krita, a calligraphy tool for Karbon (which is available in all applications), a quantum leap in KWord ODF support, especially for styles, lists, page styles, a .doc to .odt conversion filter, a .kpr to .odp conversion filter, the presentation view for KPresenter, and the Kexi web forms feature..."

Comments (none posted)

OpenOffice.org Newsletter

The August, 2008 edition of the OpenOffice.org Newsletter is out with the latest OO.o office suite articles and events.

Full Story (comments: none)

Web Browsers

The Google Chrome comic book

Google Chrome, the new WebKit-based web browser due to be released from the Googleplex on September 2, has been preceded by a lengthy comic book explaining the principles behind its design. "But, when you have to do interpretation, you have to look at the structure of your internal representation over and over again. So instead, V8 looks at the JavaScript source code and generates machine code that can run directly on the CPU that's running the browser."

Comments (66 posted)

Miscellaneous

JabRef: 2.4 released (SourceForge)

Version 2.4 of JabRef has been announced. "JabRef is a graphical application for managing bibliographical databases. JabRef is designed specifically for BibTeX bases, but can import and export many other bibliographic formats. JabRef runs on all platforms and requires Java 1.5 or newer. JabRef 2.4 brings many new features. The most notable features are plugin support, global search, better crossref handling and new web query options. Many bugs have also been fixed."

Comments (none posted)

Roundup Issue Tracker version 1.4.6 released

Version 1.4.6 of Roundup Issue Tracker has been announced; it includes bug fixes. "Roundup is a simple-to-use and -install issue-tracking system with command-line, web and e-mail interfaces. It is based on the winning design from Ka-Ping Yee in the Software Carpentry "Track" design competition."

Full Story (comments: none)

Languages and Tools

C

GCC 4.3.2 released

Version 4.3.2 of GCC has been announced. "GCC 4.3.2 is a bug-fix release, containing fixes for regressions in GCC 4.3.1 relative to previous GCC releases."

Full Story (comments: none)

GCC 4.4.0 Status Report

The September 2, 2008 edition of the GCC 4.4.0 Status Report has been published. "The trunk is now in stage3 phase, so only bugfixes, documentation changes and new ports are allowed at this point. As an exception the GRAPHITE branch, which has been AFAIK mostly approved already but missed the deadline, can be checked in within next two weeks."

Full Story (comments: none)

Caml

Caml Weekly News

The August 26 - September 2, 2008 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)

Java

PMD: 4.2.3 released (SourceForge)

Version 4.2.3 of PMD has been announced. "PMD is a Java source code analyzer. It finds unused variables, empty catch blocks, unnecessary object creation, and so forth. This release fixes a few bugs in the 4.2.2 version but does not introduce major changes."

Comments (none posted)

Python

What's new in Python 2.6

Andrew Kuchling has done his usual top-quality job in the recently-posted What's New in Python 2.6 document. "The major theme of Python 2.6 is preparing the migration path to Python 3.0, a major redesign of the language. Whenever possible, Python 2.6 incorporates new features and syntax from 3.0 while remaining compatible with existing code by not removing older features or syntax." Required reading for any Python programmer.

Comments (12 posted)

ftputil 2.2.4 released

Version 2.2.4 of ftputil has been announced; it includes a bug fix. "ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones."

Full Story (comments: none)

Python-URL! - weekly Python news and links

The September 2, 2008 edition of the Python-URL! is online with a new collection of Python article links.

Full Story (comments: none)

Tcl/Tk

Tcl-URL! - weekly Tcl news and links

The August 28, 2008 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.

Full Story (comments: none)

Editors

Leo 4.5 final released

Version 4.5 final of Leo has been announced; it adds new capabilities and bug fixes. "Leo is a text editor, data organizer, project manager and much more."

Full Story (comments: none)

Page editor: Forrest Cook

Linux in the news

Recommended Reading

Strip mining of open source (ITPro)

ITPro has posted a lengthy article looking at the differences in corporate behavior brought about by different free software licenses. "IBM has taken a three-year old version of OpenOffice, 1.1.4, which was the last release to be dual-licensed by Sun, and has heavily modified the code, which it has no obligation to release back to the community, and has clearly chosen this version precisely because this is the case. The perceived advantage for IBM is that the part-proprietary code can be marketed uniquely as an IBM product, and the extensions don't have to be released back to the community. As a result, IBM has effectively forked the code and cannot take advantage of later enhancements to OpenOffice."

Comments (4 posted)

CSI Stick grabs data from cell phones (CNet)

Here's a CNet article about the "CSI Stick," a new data-grabbing gadget evidently favored by law enforcement agencies. "This device connects to the data/charging port and will seamlessly grab e-mails, instant messages, dialed numbers, phone books and anything else that is stored in memory. It will even retrieve deleted files that have not been overwritten. And there is no trace whatsoever that the information has been compromised, nor any risk of corruption." Another good reason to want a phone with free (and replaceable) operating software - this sort of vulnerability can be fixed. (Via Schneier).

Comments (8 posted)

Companies

Intel acquires Linux mobile developers for Atom (ZDNet UK)

ZDNet UK covers the Intel acquisition of Opened Hand, a London-based company which specializes in mobile Linux development and services. "Opened Hand will focus on participating in the Moblin Software Platform community, which is developing a Linux software stack for Intel's Atom processors. The software will be optimised for low-power netbooks and 'mobile internet devices'."

Comments (8 posted)

Interviews

Interview with Krita developers (KDE.News)

KDE.News mentions the posting of a new interview with Krita developers Boudewijn Rempt and Cyrille Berger. "Alexandre Prokoudine has an interview with Krita developers on his blog. Taken at the Libre Graphics Meeting he talks to Boud and Cyrille about KDE's painting application. When asked what are Krita's primary goals the answer is "Krita is a very flexible foundation for all kinds of image processing. We’ve got an unparalleled architecture to build raster graphics on and a really flexible system of plug-ins", which covers pretty much everything."

Comments (none posted)

Microsoft's Man in Open Source: Sam Ramji on Redmond's Linux Strategy (Datamation)

A fairly short interview with Microsoft's Sam Ramji over at Datamation has one of the better non-answers seen lately: "Q: At a recent Microsoft Worldwide Partner conference, Microsoft CEO Steve Ballmer seemed to be saying that Microsoft will work with open source, but will never actually produce open source software. Is that a correct reading of the company's attitude? I’m glad you asked this, because it’s incredibly important that we accurately articulate Microsoft’s open source strategy. Microsoft believes that the next ten years of software will be a time of growth and change where both open source and Microsoft communities will grow together. We believe that in an increasingly interconnected world, more people have more opportunity; to use more technology; to do more things than ever before. We support those choices and are expanding interoperability between open source technologies and Microsoft technologies."

Comments (9 posted)

Reviews

One Tale of Two Scientific Distros (Linux Journal)

Linux Journal takes a look at Fermi Linux. "Fermilab supports its own users and directs others toward Scientific Linux, which was codeveloped by Fermilab, CERN and other laboratories and universities. Troy Dawson is the primary contact for both Fermi Linux and Scientific Linux. On his own site, he explains, "Fermilab uses what is called Fermi Linux. It is now based on Scientific Linux. It is actually a site modification, so technically it is Scientific Linux Fermi. But we call all of the releases we have made Fermi Linux.""

Comments (2 posted)

iRex iLiad e-Reader: Linux's Answer to the Kindle? (informIT)

David Chisnall takes a look at the Linux-based iRex iLiad, a type of E-book device. "As a development platform, the iLiad is quite interesting. It has a fairly standard Linux kernel and X11 display, with slight modifications to the X protocol to allow for efficient partial updates of the screen. The included software uses GTK. If you register as a developer (it's free), your iLiad is unlocked, allowing you to run shell scripts as root. From here you can install third-party software easily."

Comments (13 posted)

An Overview of Twitter Clients for Linux (Linux Journal)

Daniel Bartholomew reviews a number of Linux twitter clients on Linux Journal. "Micro-blogging sites are everywhere these days. There's Jaiku, FriendFeed, Pownce, Tumblr, and Identi.ca, to name a few. For many, though, the original micro-blogging site is the best: Twitter. It certainly has the biggest userbase, if nothing else. If you don't know what micro-blogging is and how it is different from regular blogging, check out one of the many online Twitter introductions. One thing that has helped Twitter become as popular as it has is the Twitter API. For users of Twitter, this ability for nearly any developer to create applications that work with the service means that in addition to posting via a browser or my cell phone, I can post from a score of different Desktop applications."

Comments (5 posted)

Miscellaneous

Bitten by the Red Hat Perl bug (InfoWorld)

InfoWorld's Neil McAllister investigates a bug with Perl's object instantiation on Red Hat Linux. "To make a long story short, he got rid of the Perl executable that came with his CentOS installation, compiled a new one from stock source code, and the bug disappeared. Clearly, the Perl hackers are blameless in this case. The fault lies squarely with Red Hat for distributing a buggy version of the interpreter. What's more disturbing, however, is that it turns out that this Red Hat Perl performance issue is a known bug. It was documented and verified long before Prakash ever raised a stink about it. How long? Try 2006, according to Red Hat's own Bugzilla database."

Comments (61 posted)

Page editor: Forrest Cook

Announcements

Non-Commercial announcements

FSF and Stephen Fry celebrate the GNU Project 25th anniversary

The GNU project is turning 25 this year, and the Free Software Foundation (FSF) has kicked off its month-long celebration of the anniversary by releasing "Happy Birthday to GNU," a short film featuring the English humorist, actor, novelist and filmmaker Stephen Fry.

Full Story (comments: 5)

Summer of Code Recap (use Perl)

use Perl recaps Google Summer of Code's Perl projects. "Google's Summer of Code 2008 is wrapping up now and I'm very pleased with how well The Perl Foundation's students and mentors have done. The five projects which survived the halfway point have all finished with great results. Many thanks to all of the mentors and students as well as everyone in the community who helped or supported the process. Also, thanks to Google for putting on the program and to Richard Dice and Jim Brandt at TPF."

Comments (none posted)

2008Q3 Grants Results (use Perl)

use Perl reports on the 2008 third quarter Perl Foundation grant results. "Unfortunately TPF is unable to fund all the proposed grants as they exceed the funds available for Grants. Thus, TPF GC ranked proposals accordingly with its relevance to the community, and the first few were funded. These are the funded proposals: * Perl cross-compilation for linux and wince * Barcode support in Act * Tcl/Tk access for Rakudo * Embedding perl into C++ applications * Extending BSDPAN"

Comments (none posted)

September 24 is World Day Against Software Patents

World Day Against Software Patents will take place on September 24. "Five years ago, on 24 September 2003, the European Parliament adopted amendments to limit the scope of patent law and thereby protect small software companies from the harmful effects of broad and trivial software patents. A global petition asking to effectively stop software patents worldwide will be launched on 24 September 2008, together with specific additional requests for certain regions such as Europe, the United States or India."

Full Story (comments: none)

Commercial announcements

Novell Reports Financial Results for Third Fiscal Quarter 2008

Novell, Inc. has announced its financial results for the third fiscal quarter of 2008. "For the quarter, Novell reported net revenue of $245 million. This compares to net revenue of $237 million for the third fiscal quarter 2007. Income from operations for the third fiscal quarter 2008 was $1 million, compared to a loss from operations of $10 million for the third fiscal quarter 2007. Loss from continuing operations in the third fiscal quarter 2008 was $15 million, or $0.04 loss per share, due to a $15 million impairment charge related to our auction-rate securities. This compares to a loss from continuing operations of $4 million, or $0.01 loss per share, for the third fiscal quarter 2007."

Comments (none posted)

SGI releases fourth quarter financial results

SGI has announced its financial results for the fourth quarter of fiscal 2008. "SGI today announced financial results for the fourth quarter and fiscal year 2008 ended June 27, 2008. The Company achieved its stated objectives for the fiscal year of strong growth in bookings, a strengthened leadership team, an array of new products and services, and penetration into new customer accounts."

Comments (none posted)

New Books

Head First Ajax - New From O'Reilly

O'Reilly has published the book Head First Ajax by Rebecca M. Riordan.

Full Story (comments: none)

Resources

Linux Gazette #154 is out

Issue #154 of the Linux Gazette has been announced. Topics include: Mailbag, Mailbag 2, Talkback, 2-Cent Tips, News Bytes, by Deividson Luiz Okopnik and Howard Dyckoff, Hacking a Canon A720IS digital camera with CHDK on GNU/Linux, by Sujith H, Book Review: Blown to Bits, by Kat Tanaka Okopnik, WPA Supplicant LEAP, by Nic Tjirkalli, Software Review: uvhd - file investigation utility, by Owen Townsend, HelpDex, by Shane Collinge, Ecol, by Javier Malonda, XKCD, by Randall Munroe and The Linux Launderette.

Full Story (comments: none)

Meeting Minutes

Perl 6 Design Minutes (use Perl)

use Perl has published the meeting minutes for the July 2, 2008, July 9, 2008 and July 16, 2008 Perl 6 design team meetings. "Larry, Allison, Jesse, Jerry, Patrick, and chromatic attended."

Comments (none posted)

Calls for Presentations

MySQL Conference and Expo Opens Call for Participation

A call for participation has been opened for the MySQL Conference & Expo. "O'Reilly Media has opened the Call for Participation for the 2009 MySQL Conference & Expo, scheduled for April 20-23, in Santa Clara, California. Conference program chair Colin Charles and the program committee invite proposals for conference sessions, panel discussions, and tutorials. More than 2,000 attendees are expected to participate in over 120 sessions at next year's event." Submissions are due by October 22.

Full Story (comments: none)

Upcoming Events

The Linux Foundation End User Collaboration Summit

The Linux Foundation End User Collaboration Summit will be held on October 13-14, 2008 in New York, NY. "The Linux Foundation will be hosting our first ever End User Collaboration Summit this October in New York. This forum is designed for sophisticated users of Linux who will be able to share best practices about how they are using Linux and speak directly with the core developers of the Linux platform."

Full Story (comments: none)

The MontaVista Vision 2008 Embedded Linux Developers Conference

The MontaVista Vision 2008 Embedded Linux Developers Conference has been announced, LWN's Jon Corbet will be speaking. "The Vision 2008 Embedded Linux Developers Conference will be held Oct. 1-3 at the Palace Hotel in San Francisco, California. Developers who attend Vision 2008 will learn how to work with new technologies such as multicore processors and mobile applications, will meet other developers and industry experts, and will see the breadth of platforms and solutions available for embedded Linux development."

Comments (none posted)

Events: September 11, 2008 to November 10, 2008

The following event listing is taken from the LWN.net Calendar.

September 7-14: Python Game Programming Challenge (Online)
September 9-11: EFMI STC 2008 (London, England)
September 12-14: The UK Python Conference (Birmingham, England)
September 15-18: ZendCon PHP 2008 (Santa Clara, CA, USA)
September 15-16: Linux Kernel Summit 2008 (Portland, OR, USA)
September 16-19: Web 2.0 Expo (New York, NY, USA)
September 17-19: The Linux Plumbers Conference (Portland, OR, USA)
September 18-19: Italian Perl Workshop (Pisa, Italy)
September 19-20: Maemo Summit 2008 (Berlin, Germany)
September 20: Celebrating Software Freedom Day in Riga, Latvia (Riga, Latvia)
September 22-25: Storage Developer Conference 2008 (Santa Clara, CA, USA)
September 23-25: 4th International Conference on IT Incident Management and IT Forensics (Manheim, Germany)
September 24-25: OpenExpo 2008 Zürich (Winterthur, Switzerland)
September 25-27: Firebird Conference 2008 (Bergamo, Italy)
September 26-27: PGCon Brazil 2008 (Sao Paulo, Brazil)
September 26: Far East Perl Workshop 2008 (Vladivostok, Russia)
September 26-28: ToorCon Information Security Conference (San Diego, CA, USA)
September 27-28: WineConf 2008 (Bloomington, MN, USA)
September 29-October 3: Netfilter Workshop 2008 (Paris, France)
September 29-30: Conference on Software Language Engineering (Toulouse, France)
September 30-October 1: BA-Con 2008 (Buenos Aires, Argentina)
October 1-3: Vision 2008 Embedded Linux Developers Conference (San Francisco, USA)
October 2-3: ekoparty Security Conference (Buenos Aires, Argentina)
October 3-4: Open Source Days 2008 (Copenhagen, Denmark)
October 4: PyArkansas 2008 (Central Arkansas, USA)
October 4-5: Texas Regional Python Unconference 2008 (Austin, TX, USA)
October 7-10: OWASP NYC AppSec 2008 Conference (New York, NY, USA)
October 7: Openmind 2008 (Tampere, Finland)
October 7-10: Linux-Kongress 2008 (Hamburg, Germany)
October 7: Red Hat Government Users and Developers Conference (Washington, DC, United States)
October 10-12: Ohio LinuxFest 2008 (Columbus, Ohio, USA)
October 10-12: PostgreSQL Conference West 08 (Portland, OR, USA)
October 10-12: Skolelinux Developer Gathering (Oslo, Norway)
October 11-12: Pittsburgh Perl Workshop (Pittsburgh, PA, USA)
October 11-12: MerbCamp (San Diego, CA, USA)
October 13-14: Linux Foundation End User Collaboration Summit (New York, USA)
October 13: Skolelinux User Conference (Oslo, Norway)
October 15-16: OpenSAF Developer Days (Munich, Germany)
October 17-18: European PGDay 2008 (Prato, Italy)
October 18-19: Maker Faire Austin (Austin, TX, USA)
October 19-24: Colorado Software Summit 2008 (Keystone, CO, USA)
October 20-24: 15th Annual Tcl/Tk Conference (Manassas, VA, USA)
October 21-23: Web 2.0 Expo Europe (Berlin, Germany)
October 21-24: Systems (Munich, Germany)
October 22-24: Hack.lu 2008 (Parc Hotel Alvisse, Luxembourg)
October 22-24: Encuentro Linux (Concepción, Chile)
October 24-26: Free Society Conference and Nordic Summit (Gothenburg, Sweden)
October 25-26: T-DOSE 2008 (Eindhoven, the Netherlands)
October 25: Ontario Linux Fest 2008 (Toronto, Canada)
October 26-31: IBM Information On Demand 2008 (Mandalay Bay - Las Vegas, Nevada, USA)
October 27-30: Embedded Systems Conference - Boston (Boston, USA)
October 29-November 1: 10th Real-Time Linux Workshop (Colotlán, Jalisco, Mexico)
November 3-7: ApacheCon US 2008 (New Orleans, LA, USA)
November 5-7: OpenOffice.org Conference 2008 (Beijing, China)
November 6: NLUUG autumn conference: Mobile Applications (Ede, Netherlands)
November 6-7: Embedded Linux Conference Europe 2008 (Ede, Netherlands)
November 7-8: TwinCity Perl Workshop 2008 (Vienna, Austria)
November 7-9: UKUUG linux conference (Manchester, UK)
November 8-9: Hackers to Hackers Conference 05' (Sao Paulo, Brazil)
November 8-9: FOSS.my (Kuala Lumpur, Malaysia)

If your event does not appear here, please tell us about it.

Page editor: Forrest Cook


Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds