LWN.net Weekly Edition for November 11, 2010
LPC: Michael Meeks on LibreOffice and code ownership
Back when the 2010 Linux Plumbers Conference was looking for presentations, the LibreOffice project had not yet announced its existence. So Michael Meeks put in a vague proposal for a talk having to do with OpenOffice.org and promised the organizers it would be worth their time. Fortunately, they believed him; in an energetic closing keynote, Michael talked at length about what is going on with LibreOffice - and with the free software development community as a whole. According to Michael, both good and bad things are afoot. (Michael's slides [PDF] are available for those who would like to follow along).
Naturally enough, LibreOffice is one of the good things; it's going to be "awesome." It seems that there are some widely diverging views on the awesomeness of OpenOffice.org; those located near Hamburg (where StarDivision was based) think it is a wonderful tool. People in the rest of the world tend to have a rather less enthusiastic view. The purpose of the new LibreOffice project is to produce a system that we can all be proud of.
Michael started by posing a couple of questions and answering them, the
first of which was "why not rewrite into C# or HTML5?" He noted with a
straight face that going to a web-based approach might not succeed in
improving the program's well-known performance problems. He also said that
he has yet to go to a conference where he did not get kicked off the
network at some point. For now, he just doesn't buy the concept of doing
everything on the web.
Why LibreOffice? Ten years ago, Sun promised the community that an independent foundation would be created for OpenOffice.org. That foundation still does not exist. So, quite simply, members of the community got frustrated and created one of their own. The result, he says, is a great opportunity for the improvement of the system; LibreOffice is now a vendor-neutral project with no copyright assignment requirements. The project, he says, has received great support. It is pleasing to have both the Open Source Initiative and the Free Software Foundation express their support, but it's even more fun to see Novell and BoycottNovell on the same page.
Since LibreOffice launched, the project has seen 50 new code contributors and 27 new translators, all of whom had never contributed to the project before. These folks are working, for now, on paying down the vast pile of "technical debt" accumulated by OpenOffice.org over the years. They are trying to clean up an ancient, gnarled code base which has grown organically over many years with no review and no refactoring. They are targeting problems like memory leaks which result, Michael said, from the "opt-in approach to lifecycle management" used in the past. After ten years, the code still has over 100,000 lines of German-language comments; those are now being targeted with the help of a script which repurposes the built-in language-guessing code which is part of the spelling checker.
OpenOffice.org has a somewhat checkered history when it comes to revision control. CVS was used for some years, resulting in a fair amount of pain; simply tagging a release would take about two hours. Still, they lived with CVS for some time until OpenOffice.org launched into a study to determine which alternative revision control system would be best to move to. The study came back recommending Git, but that wasn't what the managers wanted to hear, so they moved to Subversion instead - losing most of the project's history in the process. A later move to Mercurial lost history yet again. The result is a code base littered with commented-out code; nobody ever felt confident actually deleting anything because they never knew if they would be able to get it back. Many code changes are essentially changelogged within the code itself as well. Now LibreOffice is using Git and a determined effort is being made to clean that stuff up.
LibreOffice is also doing its best to make contribution easy. "Easy hacks" are documented online. The project is making a point of saying: "we want your changes." Unit tests are being developed. The crufty old virtual object system - deprecated for ten years - is being removed. The extensive pile of distributor patches is being merged. And they are starting to see the addition of interesting new features, such as inline interactive formula editing. There will be a new mechanism whereby adventurous users will be able to enable experimental features at run time.
What I really came to talk about was...
There is a point in "Alice's Restaurant" where Arlo Guthrie, at the conclusion of a long-winded tall tale, informs the audience that he was actually there to talk about something completely different. Michael did something similar after putting up a plot showing the increase in outside contributions over time. He wasn't really there to talk about a desktop productivity application; instead, he wanted to talk about a threat he sees looming over the free software development community.
That threat, of course, comes from the growing debate about the ownership structure of free software projects. As a community, Michael said, we are simply too nice. We have adopted licenses for our code which are entirely reasonable, and we expect others to be nice in the same way. But any project which requires copyright assignment (or an equivalent full-license grant) changes the equation; it is not being nice. There is some behind-the-scenes activity going on now which may well make things worse.
Copyright assignment does not normally deprive a contributor of the right to use the contributed software as he or she may wish. But it reserves to the corporation receiving the assignments the right to make decisions regarding the complete work. We as a community have traditionally cared a lot about licenses, but we have been less concerned about the conditions that others have to accept. Copyright assignment policies are a barrier to entry for anybody else who would work with the software in question. These policies also disrupt the balance between developers and "suit wearers," and they create FUD around free software licensing practices.
Many people draw a distinction between projects owned by for-profit corporations and those owned by foundations. But even assignment policies of the variety used by the Free Software Foundation have their problems. Consider, Michael said, the split between emacs and xemacs; why does xemacs continue to exist? One reason is that a good chunk of xemacs code is owned by Sun, and Sun (along with its successor) is unwilling to assign copyright to the FSF. But there is also a group of developers out there who think that it's a good thing to have a version of emacs for which copyright assignment is not required. Michael also said that the FSF policy sets a bad example, one which companies pushing assignment policies have been quick to take advantage of.
Michael mentioned a study entitled "The Best of Strangers" which focused on the willingness to give out personal information. All participants were given a questionnaire with a long list of increasingly invasive questions; the researchers cared little about the answers, but were quite interested in how far participants got before deciding they were not willing to answer anymore. Some participants received, at the outset, a strongly-worded policy full of privacy assurances; they provided very little information. Participants who did not receive that policy got rather further through the questionnaire, while those who were pointed to a questionnaire on a web site filled it in completely. Starting with the legalese ruined the participants' trust and made them unwilling to talk about themselves.
Michael said that a similar dynamic applies to contributors to a free software project; if they are confronted with a document full of legalese on the first day, their trust in the project will suffer and they may just walk away. He pointed out the recently-created systemd project's policy, paraphrased as "because we value your contributions, we require no copyright assignments," as the way to encourage contributors and earn their trust.
Assignment agreements are harmful to the hacker/suit balance. If you work for a company, Michael said, your pet project is already probably owned by the boss. This can be a problem; as managers work their way into the system, they tend to lose track of the impact of what they do. They also tend to deal with other companies in unpleasant ways which we do not normally see at the development level; the last thing we want to do is to let these managers import "corporate aggression" into our community. If suits start making collaboration decisions, the results are not always going to be a positive thing for our community; they can also introduce a great deal of delay into the process. Inter-corporation agreements tend to be confidential and can pop up in strange ways; the freedom to fork a specific project may well be compromised by an agreement involving the company which owns the code. When somebody starts pushing inter-corporation agreements regarding code contributions and ownership, we need to be concerned.
Michael cited the agreements around the open-sourcing of the openSPARC architecture as one example of how things can go wrong. Another is the flurry of lawsuits in the mobile area; those are likely to divide companies into competing camps and destroy the solidarity we have at the development level.
Given all this, he asked, why would anybody sign such an agreement? The
freedom to change the license is one often-cited reason; Michael suggested
using permissive licenses or "plus licenses" (those which allow "any later
version") as a better way of addressing that problem. The ability to offer
indemnification is another reason, but indemnification is entirely
orthogonal to ownership. One still hears the claim that full ownership is
required to be able to go after infringers, but that has been decisively
proved to be false at this point. There is also an occasional appeal to
weird local laws; Michael dismissed those as silly and self-serving. There
is, he says, something else going on.
What works best, he says, is when the license itself is the contributor agreement. "Inbound" and "outbound" licensing, where everybody has the same rights, is best.
But not everybody is convinced of that. Michael warned that there is "a sustained marketing drive coming" to push the copyright-assignment agenda. While we were sitting in the audience, he said, somebody was calling our bosses. They'll be saying that copyright assignment policies are required for companies to be willing to invest in non-sexy projects. But the fact of the matter is that almost all of the stack, many parts of which lack sexiness, is not owned by corporations. "All cleanly-written software," Michael says, "is sexy." Our bosses will hear that copyright assignment is required for companies to get outside investment; it's the only way they can pursue the famous MySQL model. But we should not let monopolistic companies claim that their business plans are good for free software; beyond that, Michael suggested that the MySQL model may not look as good as it did a year or two ago. Managers will be told that only assignment-based projects are successful. One need only look at the list of successful projects, starting with the Linux kernel, to see the falseness of that claim.
Instead, Michael says, having a single company doing all of the heavy lifting is the sign of a project without a real community. It is an indicator of risk. People are figuring this out; that is why we're seeing an increasing number of single-company projects being forked and rewritten. Examples include xpdf and poppler, libart_lgpl and cairo, MySQL and Maria. There are a number of companies, Novell and Red Hat included, which are dismantling the copyright-assignment policies they used to maintain.
At this point, Michael decided that we'd had enough and needed a brief technical break. So he talked about Git: the LibreOffice project likes to work with shallow clones because the full history is so huge. But it's not possible to push patches from a shallow clone, which is a pain. Michael also noted that git am is obnoxious to use. On the other hand, he says, the valgrind DHAT tool is a wonderful way of analyzing heap memory usage patterns and finding bugs. Valgrind, he says, does not get anywhere near enough attention. There was also some brief talk of "component-based everything" architecture and some work the project is doing to facilitate parallel contribution.
The conclusion, though, came back to copyright assignment. We need to prepare for the marketing push, which could cause well-meaning people to do dumb things. It's time for developers to talk to their bosses and make it clear that copyright assignment policies are not the way toward successful projects. Before we contribute to a project, he said, we need to check more than the license; we need to look at what others will be able to do with the code. We should be more ungrateful toward corporations which seek to dominate development projects and get involved with more open alternatives.
One of those alternatives, it went without saying, is the LibreOffice project. LibreOffice is trying to build a vibrant community which resembles the kernel community. But it will be more fun: the kernel, Michael said, "is done" while LibreOffice is far from done. There is a lot of low-hanging fruit and many opportunities for interesting projects. And, if that's not enough, developers should consider that every bit of memory saved will be multiplied across millions of LibreOffice users; what better way can there be to offset one's carbon footprint? So, he said, please come and help; it's an exciting time to be working with LibreOffice.
LPC: Life after X
Keith Packard has probably done more work to put the X Window System onto our desks than just about anybody else. With some 25 years of history, X has had a good run, but nothing is forever. Is that run coming to an end, and what might come after? In his Linux Plumbers Conference talk, Keith claimed to have no control over how things might go, but he did have some ideas. Those ideas add up to an interesting vision of our graphical future.
We have reached a point where we are running graphical applications on a wide variety of systems. There is the classic desktop environment that X was born into, but that is just the beginning. Mobile systems have become increasingly powerful and are displacing desktops in a number of situations. Media-specific devices have display requirements of their own. We are seeing graphical applications in vehicles, and in a number of other embedded situations.
Keith asked: how many of these applications care about network transparency, which was one of the original headline features of X? How many of them care about ICCCM compliance? How many of them care about X at all? The answer to all of those questions, of course, is "very few." Instead, developers designing these systems are more likely to resent X for its complexity, for its memory and CPU footprint, and for its contribution to lengthy boot times. They would happily get rid of it. Keith says that he means to accommodate them without wrecking things for the rest of us.
Toward a non-X future
For better or for worse, there is currently a wide variety of rendering APIs to choose from when writing graphical libraries. According to Keith, only two of them are interesting. For video rendering, there's the VDPAU/VAAPI pair; for everything else, there's OpenGL. Nothing else really matters going forward.
In the era of direct rendering, neither of those APIs really depends on X.
So what is X good for? There is still a lot which is done in the X server,
starting with video mode setting. Much of that work has been moved into
the kernel, at least for graphics chipsets from the "big three," but X
still does it for the rest. If you still want to do boring 2D graphics, X is there for
you - as Keith put it, we all love ugly lines and lumpy text. Input is
still very much handled in X; the kernel's evdev interface does some of it
but falls far short of doing the whole job. Key mapping is done in X;
again, what's provided by the kernel in this area is "primitive." X
handles clipping when application windows overlap each other; it also takes
care of 3D object management via the GLX extension.
These tasks have a lot to do with why the X server is still in charge of our screens. Traditionally mode setting has been a big and hairy task, with the requisite code being buried deep within the X server; that has put up a big barrier to entry to any competing window systems. The clipping job had to be done somewhere. The management of video memory was done in the X server, leading to a situation where only the server gets to take advantage of any sort of persistent video memory. X is also there to make external window managers (and, later, compositing managers) work.
But things have changed in the 25 years or so since work began on X. Back in 1985, Unix systems did not support shared libraries; if the user ran two applications linked to the same library, there would be two copies of that library in memory, which was a scarce resource in those days. So it made a lot of sense to put graphics code into a central server (X), where it could be shared among applications. We no longer need to do things that way; our systems have gotten much better at sharing code which appears in different address spaces.
We also have much more complex applications - back then xterm was just about all there was. These applications manipulate a lot more graphical data, and almost every operation involves images. Remote applications are implemented with protocols like HTTP; there is little need to use the X protocol for that purpose anymore. We have graphical toolkits which can implement dynamic themes, so it is no longer necessary to run a separate window manager to impose a theme on the system. It is a lot easier to make the system respond "quickly enough"; a lot of hackery in the X server (such as the "mouse ahead" feature) was designed for a time when systems were much less responsive. And we have color screens now; they were scarce and expensive in the early days of X.
Over time, the window system has been split apart into multiple pieces - the X server, the window manager, the compositing manager, etc. All of these pieces are linked by complex, asynchronous protocols. Performance suffers as a result; for example, every keystroke must pass through at least three processes: the application, the X server, and the compositing manager. But we don't need to do things that way any more; we can simplify the architecture and improve responsiveness. There are some unsolved problems associated with removing all these processes - it's not clear how all of the fancy 3D bling provided by window/compositing managers like compiz can be implemented - but maybe we don't need all of that.
What about remote applications in an X-free world? Keith suggests that there is little need for X-style network transparency anymore. One of the early uses for network transparency was applications oriented around forms and dialog boxes; those are all implemented with web browsers now. For other applications, tools like VNC and rdesktop work and perform better than native X. Technologies like WiDi (Intel's Wireless Display) can also handle remote display needs in some situations.
Work to do
So maybe we can get rid of X, but, as described above, there are still a number of important things done by the X server. If X goes, those functions need to be handled elsewhere. Mode setting is going into the kernel, but there are still a lot of devices without kernel mode setting (KMS) support. Somebody will have to implement KMS drivers for those devices, or they may eventually stop working. Input device support is partly handled by evdev. Graphical memory management is now handled in the kernel by GEM in a number of cases. In other words, things are moving into the kernel - Keith seemed pleased at the notion of making all of the functionality be somebody else's problem.
Some things are missing, though. Proper key mapping is one of them; that cannot (or should not) all be done in the kernel. Work is afoot to create a "libxkbcommon" library so that key mapping could be incorporated into applications directly. Accessibility work - mouse keys and sticky keys, for example - also needs to be handled in user space somewhere. The input driver problem is not completely solved; complicated devices (like touchpads) need user-space support. Some things need to be made cheaper, a task that can mostly be accomplished by replacing APIs with more efficient variants. So GLX can be replaced by EGL; in many cases, GLES can be used instead of OpenGL; and VDPAU is an improvement over Xv. There is also the little problem of mixing X and non-X applications while providing a unified user experience.
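As a rough illustration of the EGL side of that list, setting up an OpenGL ES 2.0 context without GLX looks something like the sketch below. The helper name is invented, no window surface is created, and it assumes an EGL implementation that accepts the default display:

    /* Minimal EGL + GLES2 context setup - the piece that replaces GLX.
     * Illustrative only: surface creation and teardown are omitted. */
    #include <EGL/egl.h>

    int init_gles_context(void)   /* hypothetical helper name */
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
            return -1;

        static const EGLint cfg_attrs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint ncfg;
        if (!eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &ncfg) || ncfg < 1)
            return -1;

        eglBindAPI(EGL_OPENGL_ES_API);
        static const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
        return ctx == EGL_NO_CONTEXT ? -1 : 0;
    }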
Keith reflected on some of the unintended benefits that have come from the development work done in recent years; many of these will prove helpful going forward. Compositing, for example, was added as a way of adding fancy effects to 2D applications. Once the X developers had compositing, though, they realized that it enabled the rendering of windows without clipping, simplifying things considerably. It also separated rendering from changing on-screen content - two tasks which had been tightly tied before - making rendering more broadly useful. The GEM code had a number of goals, including making video memory pageable, enabling zero-copy texture creation from pixmaps, and the management of persistent 3D objects. Along with GEM came lockless direct rendering, improving performance and making it possible to run multiple window systems with no performance hit. Kernel mode setting was designed to make graphical setup more reliable and to enable the display of kernel panic messages, but KMS also made it easy to implement alternative window systems - or to run applications with no window system at all. EGL was designed to enable porting of applications between platforms; it also enabled running those application on non-X window systems and the dumping of the expensive GLX buffer sharing scheme.
Keith put up two pictures showing the organization of graphics on Linux. In the "before" picture, a pile of rendering interfaces can be seen all talking to the X server, which is at the center of the universe. In the "after" scene, instead, the Linux kernel sits in the middle, and window systems like X and Wayland are off in the corner, little more than special applications. When we get to "after," we'll have a much-simplified graphics system offering more flexibility and better performance.
Getting there will require getting a few more things done, naturally. There is still work to be done to fully integrate GL and VDPAU into the system. The input driver problem needs to be solved, as does the question of KMS support for video adaptors from vendors other than the "big three." If we get rid of window managers, somebody else has to do that work; Windows and Mac OS push that task into applications, and maybe we should too. But, otherwise, this future is already mostly here. It is possible, for example, to run X as a client of Wayland - or vice versa. The post-X era is beginning.
Ghosts of Unix past, part 2: Conflated designs
In the first article in this series, we commenced our historical search for design patterns in Linux and Unix by illuminating the "Full exploitation" pattern which provides a significant contribution to the strength of Unix. In this second part we will look at the first of three patterns which characterize some design decisions that didn't work out so well.
The fact that these design decisions are still with us and worth talking about shows that their weaknesses were not immediately obvious and, additionally, that these designs lasted long enough to become sufficiently entrenched that simply replacing them would cause more harm than good. With these types of design issues, early warning is vitally important. The study of these patterns is only useful if it helps us to avoid similar mistakes early enough; if it merely allowed us to classify that which we cannot avoid, there would be little point in studying them at all.
These three patterns are ordered from the one which seems to give most predictive power to that which is least valuable as an early warning. But hopefully the ending note will not be one of complete despair - any guidance in preparing for the future is surely better than none.
Conflated Designs
This week's pattern is exposed using two design decisions which were present in early Unix and have been followed by a series of fixes which have addressed most of the resulting difficulties. By understanding the underlying reason that the fixes were needed, we can hope to avoid future designs which would need such fixing. The first of these design decisions is taken from the implementation of the single namespace discussed in part 1.
The mount command
The central tool for implementing a single namespace is the 'mount' command, which makes the contents of a disk drive available as a filesystem and attaches that filesystem to the existing namespace. The flaw in this design which exemplifies this pattern is the word 'and' in that description. The 'mount' command performs two separate actions in one command. Firstly it makes the contents of a storage device appear as a filesystem, and secondly it binds that filesystem into the namespace. These two steps must always be done together, and cannot be separated. Similarly the unmount command performs the two reverse actions of unbinding from the namespace and deactivating the filesystem. These are, or at least were, inextricably combined and if one failed for some reason, the other would not be attempted.
It may seem at first that it is perfectly natural to combine these two operations and there is no value in separating them. History, however, suggests otherwise. Considerable effort has gone into separating these operations from each other.
Since version 2.4.11 (released in 2001), Linux has had a 'lazy' version of unmount. This unbinds a filesystem from the namespace without insisting on deactivating it at the same time. This goes some way toward splitting out the two functional aspects of the original unmount. The 'lazy' unmount is particularly useful when a filesystem has started to fail for some reason, a common example being an NFS filesystem from a server which is no longer accessible. It may not be possible to deactivate the filesystem as there could well be processes with open files on the filesystem. But at least with a lazy unmount it can be removed from the namespace so new processes won't try to open files on it and get stuck.
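For reference, the lazy variant is reached from C through the umount2() system call with the MNT_DETACH flag; a minimal sketch, with an illustrative mount point:

    /* Detach an unresponsive NFS mount from the namespace immediately;
     * the filesystem itself lingers until its last open file is closed.
     * Requires CAP_SYS_ADMIN; the path is just an example. */
    #include <sys/mount.h>
    #include <stdio.h>

    int main(void)
    {
        if (umount2("/mnt/nfs", MNT_DETACH) != 0) {
            perror("umount2");
            return 1;
        }
        return 0;
    }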
As well as 'lazy' unmounts, Linux developers have found it useful to add 'bind' mounts and 'move' mounts. These allow one part of the namespace to be bound to another part of the namespace (so it appears twice) or a filesystem to be moved from one location to another - effectively a 'bind' mount followed by a 'lazy' unmount. Finally we have a pivot_root() system call which performs a slightly complicated dance between two filesystems, starting out with the first being the root filesystem and the second being a normal mounted filesystem, and ending with the second being the root and the first being mounted somewhere else in that root.
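In C these namespace-only operations are expressed as flags to the ordinary mount() call; a small sketch, with paths chosen purely for illustration:

    /* Bind one part of the namespace to a second location, then move an
     * existing mount point elsewhere.  Paths are illustrative only;
     * both operations need CAP_SYS_ADMIN. */
    #include <sys/mount.h>
    #include <stdio.h>

    int main(void)
    {
        /* /srv/data now also appears under /export/data */
        if (mount("/srv/data", "/export/data", NULL, MS_BIND, NULL) != 0)
            perror("bind mount");

        /* relocate the mount at /mnt/old to /mnt/new in one step */
        if (mount("/mnt/old", "/mnt/new", NULL, MS_MOVE, NULL) != 0)
            perror("move mount");

        return 0;
    }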
It might seem that all of the issues with combining the two functions into a single 'mount' operation have been adequately resolved in the natural course of development, but it is hard to be convinced of this. The collection of namespace manipulation functions that we now have is quite ad hoc and so, while it seems to meet current needs, there can be no certainty that it is in any sense complete. A hint of this incompleteness can be seen in the fact that, once you perform a lazy unmount, the filesystem may well still exist, but it is no longer possible to manipulate it as it does not have a name in the global namespace, and all current manipulation operations require such a name. This makes it difficult to perform a 'forced' unmount after a 'lazy' unmount.
To see what a complete interface would look like we would need to exploit the design concept discussed last week: "everything can have a file descriptor". Had that pattern been imposed on the design of the mount system call we would likely have:
- A mount call that simply returned a file descriptor for the file system.
- A bind call that connected a file descriptor into the namespace, and
- An unmount call that disconnected a filesystem and returned a file descriptor.
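To make the shape of that interface concrete, here is a purely hypothetical sketch; none of these calls exist in Linux, and the names and signatures are invented only to show how the two halves would compose:

    /* HYPOTHETICAL interface - not part of Linux.  Activation and
     * namespace binding become separate, fd-based operations. */
    int fsactivate(const char *device, const char *fstype, int flags); /* -> fd */
    int fsbind(int fsfd, const char *path);   /* attach the fd into the namespace */
    int fsunbind(const char *path);           /* detach, returning the fd again */

    /* A filesystem could then be activated, inspected, and only later
     * (or never) given a name in the namespace:
     *
     *     int fd = fsactivate("/dev/sdb1", "ext3", 0);
     *     ... integrity checks or quota setup could happen here ...
     *     fsbind(fd, "/mnt/data");
     */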
One of the many strengths of Unix - particularly seen in the set of tools that came with the kernel - is the principle of building and then combining tools. Each tool should do one thing and do it well. These tools can then be combined in various ways, often to achieve ends that the tool developer could not have foreseen. Unfortunately the same discipline was not maintained with the mount() system call.
So this pattern is to some extent the opposite of the 'tools approach'. It needs a better name than that, though; a good choice seems to be to call it a "conflated design". One dictionary (PJC) defines "conflate" as "to ignore distinctions between, by treating two or more distinguishable objects or ideas as one", which seems to sum up the pattern quite well.
The open() system call.
Our second example of a conflated design is found in the open() system call. This system call (in Linux) takes 13 distinct flags which modify its behavior, adding or removing elements of functionality - multiple concepts are thus combined in the one system call. Much of this combination does not imply a conflated design. Several of the flags can be set or cleared independently of the open() using the F_SETFL option to fcntl(). Thus while they are commonly combined, they are easily separated and so need not be considered to be conflated.
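As a small example of that separability, a flag like O_NONBLOCK can be flipped on an already-open descriptor without any involvement from open(); a minimal sketch:

    /* Toggle O_NONBLOCK on an existing descriptor with F_GETFL/F_SETFL,
     * independently of whatever flags were passed to open(). */
    #include <fcntl.h>

    int set_nonblocking(int fd, int on)
    {
        int flags = fcntl(fd, F_GETFL);
        if (flags < 0)
            return -1;
        flags = on ? (flags | O_NONBLOCK) : (flags & ~O_NONBLOCK);
        return fcntl(fd, F_SETFL, flags);
    }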
Three elements of the open() call are worthy of particular attention in the current context. They are O_TRUNC, O_CLOEXEC and O_NONBLOCK.
In early versions of Unix, up to and including Level 7, opening with O_TRUNC was the only way to truncate a file and, consequently, it could only be truncated to become empty. Partial truncation was not possible. Having truncation intrinsically tied to open() is exactly the sort of conflated design that should be avoided and, fortunately, it is easy to recognize. BSD Unix introduced the ftruncate() system call which allows a file to be truncated after it has been opened and, additionally, allows the new size to be any arbitrary value, including values greater than the current file size. Thus that conflation was easily resolved.
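The separated interface in action: open the file normally, then resize it to any length afterward. A short sketch; the filename and size are arbitrary:

    /* Truncation decoupled from open(): ftruncate() can shrink or grow
     * the file to an arbitrary size after it has been opened. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("data.log", O_WRONLY);    /* illustrative name */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (ftruncate(fd, 4096) != 0)           /* any size, even larger than before */
            perror("ftruncate");
        close(fd);
        return 0;
    }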
O_CLOEXEC has a more subtle story. The standard behavior of the exec() system call (which causes a process to stop running one program and to start running another) is that all file descriptors available before the exec() are equally available afterward. This behavior can be changed, quite separately from the open() call which created the file descriptor, with another fcntl() call. For a long time this appeared to be a perfectly satisfactory arrangement.
However the advent of threads, where multiple processes could share their file descriptors (so when one thread or process opens a file, all threads in the group can see the file descriptor immediately), made room for a potential race. If one process opens a file with the intent of setting the close-on-exec flag immediately, and another process performs an exec() (which causes the file table to not be shared any more), the new program in the second process will inherit a file descriptor which it should not. In response to this problem, the recently-added O_CLOEXEC flag causes open() to mark the file descriptor as close-on-exec atomically with the open so there can be no leakage.
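The difference between the racy two-step approach and the atomic flag looks roughly like this (the path is only an example):

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* Racy: another thread may fork() and exec() between these two calls,
     * leaking the descriptor into the newly executed program. */
    void open_then_mark(void)
    {
        int fd = open("/etc/secret.conf", O_RDONLY);   /* illustrative path */
        if (fd >= 0)
            fcntl(fd, F_SETFD, FD_CLOEXEC);            /* too late if exec() won the race */
    }

    /* Atomic: close-on-exec is set as part of the open() itself. */
    void open_cloexec(void)
    {
        int fd = open("/etc/secret.conf", O_RDONLY | O_CLOEXEC);
        (void)fd;
    }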
It could be argued that creating a file descriptor and allowing it to be preserved across an exec() should be two separate operations. That is, the default should have been to not keep a file descriptor open across exec(), and a special request would be needed to preserve it. However foreseeing the problems of threads when first designing open() would be beyond reasonable expectations, and even to have considered the effects on open() when adding the ability to share file tables would be a bit much to ask.
The main point of the O_CLOEXEC example then is to acknowledge that recognizing a conflated design early can be very hard, which hopefully will be an encouragement to put more effort in reviewing a design for these sorts of problems.
The third flag of interest is O_NONBLOCK. This flag is itself conflated, but also shows conflation within open(). In Linux, O_NONBLOCK has two quite separate, though superficially similar, meanings.
Firstly, O_NONBLOCK affects all read or write operations on the file descriptor, allowing them to return immediately after processing less data than requested, or even none at all. This functionality can separately be enabled or disabled with fcntl() and so is of little further interest.
The other function of O_NONBLOCK is to cause the open() itself not to block. This has a variety of different effects depending on the circumstances. When opening a named pipe for write, the open will fail rather than block if there are no readers. When opening a named pipe for read, the open will succeed rather than block, and reads will then return an error until some process writes something into the pipe. On CDROM devices an open for read with O_NONBLOCK will also succeed but no disk checks will be performed and so no reads will be possible. Rather the file descriptor can only be used for ioctl() commands such as to poll for the presence of media or to open or close the CDROM tray.
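The named-pipe case, sketched with an arbitrary FIFO path; the open() returns immediately rather than waiting for a writer:

    /* Open a FIFO for reading without blocking until a writer shows up.
     * read() then returns 0 while no writer is connected, or fails with
     * EAGAIN if a writer exists but has written nothing yet. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("/tmp/myfifo", O_RDONLY | O_NONBLOCK);  /* illustrative path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        char buf[128];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n == 0)
            printf("no writer connected yet\n");
        else if (n < 0 && errno == EAGAIN)
            printf("writer present, but nothing written yet\n");
        close(fd);
        return 0;
    }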
The CDROM case gives a hint concerning another aspect of open() which is conflated. Allocating a file descriptor to refer to a file and preparing that file for I/O are conceptually two separate operations. They certainly are often combined and including them both in the one system call can make sense. Requiring them to be combined is where the problem lies.
If it were possible to get a file descriptor on a given file (or device) without waiting for or triggering any action within that file, and, subsequently, to request the file be readied for I/O, then a number of subtle issues would be resolved. In particular there are various races possible between checking that a file is of a particular type and opening that file. If the file was renamed between these two operations, the program might suffer unexpected consequences of the open. The O_DIRECTORY flag was created precisely to avoid this sort of race, but it only serves when the program is expecting to open a directory. This race could be simply and universally avoided if these two stages of opening a file were easily separable.
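The race, and the O_DIRECTORY mitigation, in rough outline (the path is arbitrary):

    /* Check-then-open race: the path can be swapped (for a symlink to a
     * device, say) between the stat() and the open().  O_DIRECTORY moves
     * the "is this a directory?" check into the open() itself. */
    #define _GNU_SOURCE
    #include <sys/stat.h>
    #include <fcntl.h>

    int open_dir_racy(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0 || !S_ISDIR(st.st_mode))
            return -1;
        return open(path, O_RDONLY);   /* path may no longer be a directory here */
    }

    int open_dir_safe(const char *path)
    {
        return open(path, O_RDONLY | O_DIRECTORY);  /* fails with ENOTDIR otherwise */
    }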
A strong parallel can be seen between this issue and the 'socket' API for creating network connections. Sockets are created almost completely uninitialized; thereafter a number of aspects of the socket can be tuned (with e.g. bind() or setsockopt()) before the socket is finally connected.
In both the file and socket cases there is sometimes value in being able to set up or verify some aspects of a connection before the connection is effected. However with open() it is not really possible in general to separate the two.
It is worth noting here that opening a file with the 'flags' set to '3' (which is normally an invalid value) can sometimes have a similar meaning to O_NONBLOCK in that no particular read or write access is requested. Clearly developers see a need here but we still don't have a uniform way to be certain of getting a file descriptor without causing any access to the device, or a way to upgrade a file descriptor from having no read/write access to having that access.
As we saw, most of the difficulties caused by conflated design, at least in these two examples, have been addressed over time. It could therefore be argued that, as there is minimal ongoing pain, the pattern should not be a serious concern. That argument though would miss two important points. Firstly, these designs have already caused pain over many years; this could well have discouraged people from using the whole system and so reduced the overall involvement in, and growth of, the Unix ecosystem.
Secondly, though the worst offenses have largely been fixed, the result is not as neat and orthogonal as it could be. As we saw during the exploration, there are some elements of functionality that have not yet been separated out. This is largely because there is no clear need for them. However we often find that a use for a particular element of functionality only presents itself once the functionality is already available. So by not having all the elements cleanly separated we might be missing out on some particular useful tools without realizing it.
There are undoubtedly other areas of Unix or Linux design where multiple concepts have been conflated into a single operation, however the point here is not to enumerate all of the flaws in Unix. Rather it is to illustrate the ease with which separate concepts can be combined without even noticing it, and the difficulty (in some cases) of separating them after the fact. This hopefully will be an encouragement to future designers to be aware of the separate steps involved in a complex operation and to allow - where meaningful - those steps to be performed separately if desired.
Next week we will continue this exploration and describe a pattern of misdesign that is significantly harder to detect early, and appears to be significantly harder to fix late. Meanwhile, following are some exercises that may be used to explore conflated designs more deeply.
Exercises.
- Explain why open() with O_CREAT benefits from an O_EXCL flag, but other system calls which create filesystem entries (mkdir(), mknod(), link(), etc.) do not need such a flag. Determine if there is any conflation implied by this difference.
- Explore the possibilities of the hypothetical bind() call that attaches a file descriptor to a location in the namespace. What other file descriptor types might this make sense for, and what might the result mean in each case?
- Identify one or more design aspects in the IP protocol suite which show conflated design and explain the negative consequences of this conflation.
Next article
Ghosts of Unix past, part 3: Unfixable designs
Security
Bitcoin: Virtual money created by CPU cycles
The Bitcoin virtual currency system was launched in 2009, but has gained increased exposure in recent months as a few businesses and entities announced that they would support transactions in Bitcoins (abbreviated "BTC"). Bitcoin is not the first attempt to create an entirely virtual currency, but it supports some very interesting features, including anonymity and a decentralized, peer-to-peer network structure that verifies Bitcoin transactions cryptographically.
One of Bitcoin's advantages over other currency systems is that it does not rely on a central authority or bank. Instead, the entire network keeps track of — and validates — transactions. It also separates "accounts" from identities, so transactions are, for all practical purposes, anonymous. Two users can make a Bitcoin exchange without knowing each others' real identities or locations. Because Bitcoin does not rely on brick-and-mortar banks and because the Bitcoin currency is divisible down to eight decimal places, it is seen by proponents as a potential micropayment system that works better than the fee-based, currency-backed banking systems of today.
Bitcoin 101
The Bitcoin project was devised and created by Satoshi Nakamoto. There is an RFC-style draft specification available on the project's wiki, although it is not an IETF project. The current specification is numbered 0.0.1, and outlines a Bitcoin transaction message. Considerably more detail is required to explain how the system works in practice, however. The two key ideas are Bitcoin addresses and blocks.
Actual Bitcoins do not exist as independent objects anywhere in the Bitcoin network. Instead, the P2P network of Bitcoin clients keeps track of all Bitcoin transactions — including the transfer of Bitcoins from one Bitcoin address to another, and the creation of new Bitcoins, which is a tightly controlled process.
A Bitcoin address is a hash of the public key of an Elliptic Curve Digital Signature Algorithm (ECDSA) public/private key pair. Whenever a new user starts up the Bitcoin client, it generates a new Bitcoin address that is initially associated with zero Bitcoins. But the address is not tied to the identity of the user in any way; in fact clients can generate multiple Bitcoin addresses to easily isolate or categorize transactions. A user's keys are stored locally in a wallet.dat file; losing or erasing the file means that all Bitcoins associated with the addresses inside are effectively lost.
Sending Bitcoins from one address to another is done by publishing a transaction to the network, listing both the source and destination address along with the amount, signed by the source address's private key. The transaction is propagated to all of the active clients on the network. These transactions are collected into the other Bitcoin primitive, the block. Active clients periodically publish new blocks, which serve as the permanent record of all of the transactions that have taken place since the last block was published.
Unlike signing and verifying recent transactions, publishing a block is not a trivial affair. It is, instead, a cryptographic problem that a client must solve with a reward offered for doing so. The Bitcoin network is designed so that block publishing is a difficult task, and the reward will encourage users to run the client software, which in turn validates and records the ongoing transactions.
The Bitcoin network is currently in currency-issuing mode; during this phase, whenever a client solves and publishes the network's next block, the client is credited with 50 freshly-created Bitcoins. That provides the incentive for clients to contribute CPU (or GPU) cycles to the process. The block-solving reward is scheduled to drop on a regular basis, eventually falling to zero. At that point, transaction fees will replace Bitcoin generation as an incentive for clients to participate.
The problem that constitutes "solving" a block is novel. Clients perform SHA-256 hash calculations on a data set consisting of the recent transactions, the previous block's hash value, and a nonce. Each hash is then compared to a published threshold value; if the hash is below the threshold, the client has solved the block. If not, the client generates a new nonce and tries again. The threshold value is chosen to be artificially low, so that the hashes (which are pseudo-random) have a very small chance of being below it. Thus it takes many CPU cycles to stumble across a hash that solves the block, but it is trivial for all other clients on the network to check that the hash is genuine.
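A much-simplified sketch of that search loop, using OpenSSL's SHA256() (link with -lcrypto). It ignores Bitcoin's real block-header format, byte ordering, and double hashing; the block data, threshold, and nonce handling below are placeholders chosen only to show the principle:

    /* Simplified proof-of-work search: hash (block data + nonce) and stop
     * when the digest, compared byte-by-byte, falls below a threshold.
     * With the threshold below, roughly one try in a million succeeds. */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *block_data = "prev-block-hash|pending-transactions"; /* placeholder */
        unsigned char threshold[SHA256_DIGEST_LENGTH] = { 0x00, 0x00, 0x0f };
        unsigned char hash[SHA256_DIGEST_LENGTH];
        char buf[256];

        for (unsigned int nonce = 0; ; nonce++) {
            int len = snprintf(buf, sizeof(buf), "%s|%u", block_data, nonce);
            SHA256((const unsigned char *)buf, (size_t)len, hash);
            if (memcmp(hash, threshold, SHA256_DIGEST_LENGTH) < 0) {
                printf("solved: nonce %u\n", nonce);
                return 0;
            }
        }
    }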
Bitcoining in practice
As of November 9, there have been just under 91,000 blocks published, and there are about 4.5 million BTC in circulation. The project says that approximately six blocks are solved and published per hour, and according to the reward-reduction schedule, the eventual total circulation will be just short of 21 million BTC. The threshold value is periodically adjusted to keep the rate of new blocks predictable — presumably to provide some level of guarantee that transactions are validated and recorded in a timely fashion.
The project has an official, MIT/X11-licensed Bitcoin client application available for download. The current release is numbered 0.3.14. OS X, Windows, and Linux builds (both 32-bit and 64-bit) are provided in addition to the source code. The client serves two purposes; it allows the user to keep track of his or her wallet and its associated Bitcoin addresses, and it runs a background process to solve blocks. There is a command-line version of the client available in addition to the GUI, for use on headless machines.
The GUI client's interface is simple: there is a transaction log, balance count, and Bitcoin "address book." From the address book, you can generate new Bitcoin addresses at will. The block-solving functionality is activated or deactivated from the "Settings" menu. The Options configuration dialog allows you to limit the number of processors on which to run (by default Bitcoin uses all available CPUs). The client communicates to Bitcoin peers over TCP port 8333, and for the moment is IPv4-compatible only.
At the moment, of course, running the fastest client possible is the key to grabbing as many fresh Bitcoins as you can. In addition to the official Bitcoin client, there are several third-party variants that tailor the block-solving routine for different processor architectures — including OpenCL and CUDA-capable 3-D graphics cards. The official client runs its solver with the lowest possible priority, so keeping it running constantly should not severely impact performance — third-party clients may or may not offer such a guarantee.
SHA-256 is generally considered to be strongly pseudo-random, so your odds of solving the current block on any given try do not increase the longer you run the client. However, all of the active Bitcoin clients "mine" — try to solve the current block — simultaneously, so dedicating more or faster cores increases your chances of solving the current block now, before someone else does and everyone starts over on a new block.
Criticisms and questions
Despite a design in which all clients supposedly have an equal chance of solving the next block and earning the reward, some Bitcoin users on the project's official forum seem to think that the current system is driving away casual users, because users with fast GPUs can check hashes ten or twenty times faster than a typical CPU. Several of the users that have written custom GPU-mining clients do not make their code publicly available, and thus generate significantly more Bitcoins than the average participant — including one individual who is alleged to represent 25% of the block-solving power of the network at any one time.
An online calculator allows you to put in the current hashes-per-second count reported by the client and estimate how long it would take on average to solve a block at that speed. I tested Bitcoin 0.3.14 on an (apparently modest) Athlon X2 system that is predicted to average one block solve every 94 days. That does seem like a discouragingly low payoff for keeping two CPU cores running 24 hours a day.
The system does seem to score high on privacy and fraud-prevention, though. All transactions between clients are conducted in the clear, but because Bitcoin addresses rely on public-key cryptographic signatures, an attacker cannot forge a transaction outright. The system has other safeguards in place to prevent attacks on the block-solving system. That is why, for example, each block includes the hash of the previous solved block — this creates a "block chain" that clients can trace backwards all the way to Bitcoin's first "genesis block" for verification purposes.
The distributed network design offers its own set of challenges. For example, if a rogue client simultaneously (or nearly simultaneously) broadcasts two transactions to different parts of the network that total more BTC than the client actually has, both transactions could temporarily be validated if two different clients simultaneously solve the current block. In that case, however, one of the two competing blocks will be invalidated by the next block solved, and all of the transactions in the invalidated block returned to the general queue. Thus the duplicate transaction will eventually be merged back into the same block chain as the original, and the insufficient funds will be noticed.
Some of Bitcoin's security relies on all of the participating clients knowing and agreeing on the rules of the game. For example, a rogue client could attempt to award itself 100 BTC upon solving a block, but the illegal amount would be caught and flagged by honest clients.
Nevertheless, there does not seem to have been a serious examination of Bitcoin's security by outside professional researchers. Beyond the basic transaction framework, there are numerous features in the system that might make for a plausible attack vector. For example, the system includes a way for senders to script transactions, so that they are only triggered after a set of conditions has been met.
Some of the adaptive measures in the system use arbitrary time frames that seem geared towards human convenience, rather than pro-active prevention of attacks — such as re-evaluating and adjusting the difficulty of the block-solving threshold only every 2,016 blocks. It is also possible to send Bitcoin payments directly to an IP address instead of to a Bitcoin address; in some sense, a "buyer beware" caution is advised, but it is also possible that there are exploits yet undiscovered.
Economics
The bigger open questions about Bitcoin are about its viability as a currency system. For the moment, the majority of the "businesses" that accept BTC as a payment method are online casinos, but a few less-shady establishments (such as the Electronic Frontier Foundation) have recently decided to accept Bitcoin transactions.
There is a dedicated economics forum on the Bitcoin project Web site, where debates circulate about the strengths and weaknesses of the Bitcoin system, specifically whether it has any value as a "real" currency, but also on more technical points, such as the arbitrary limit on the number of Bitcoins to be minted, and the decision to limit each Bitcoin's divisibility (a Bitcoin can be divided down to eight decimal places to spend in transactions).
Another wrinkle is that Bitcoins are effectively "virtual cash" — which makes them untraceable. Although the anonymity is important to some early-adopters, some are concerned that if the system were ever to catch on in widespread usage, governments would intervene to ban or block it because of the relative ease of tax evasion or money laundering.
Although BTC can be exchanged for other currencies, Bitcoin is different from electronic payment systems like Paypal that are really just computerized interfaces to traditional banks. There have been virtual cash systems in the past, such as David Chaum's digital-signature-based ecash, which in the late 1990s was redeemable at several banks, and more recently the Linden Dollars used and created inside Second Life.
Because Bitcoins are not tied to gold or to any other traded property, their value is determined solely by how much others are willing to exchange for them. Those who have had more economics than I will probably explain that this is true of all currency systems, but at the moment, there are several online BTC exchanges, such as BitcoinMarket.com, where one can observe the actual price of BTC-to-USD (or other currency) transactions. Whether those prices represent any real value seems to be entirely in the eye of the beholder. The rate on November 9 was 0.27 USD to 1 BTC. For comparison's sake, 94 days of dual-CPU processing power on Amazon's EC2 cloud service would cost $389.91. That is a for-profit example, of course, but the question remains: are the CPU cycles you spend "mining" for Bitcoins worth the value of the Bitcoins you receive? Does the abstract notion of "supporting the Bitcoin network" make up the difference? There is just no objective answer.
Some pundits think that Bitcoin is a viable prospect for a long-term virtual currency, but as always seems to be the case with economists, others disagree, citing government intervention and susceptibility to destruction by electromagnetic solar storms as risks to a digital-only currency system not backed by any physical monetary system.
The peculiarity of the idea itself seems to be waning in the face of recent global economic conditions, though, conditions which to Bitcoin proponents demonstrate how little "traditional" currencies offer over new, entirely virtual monetary systems. The Bitcoin network's current rate of BTC generation is scheduled to continue issuing new Bitcoins until 2140. If it lasts even a fraction of that amount of time, it will have outlasted the other purely-virtual currency systems, which is certainly worth ... something.
Brief items
Jones: system call abuse
Dave Jones has been fuzzing Linux system calls lately, and has found a bug in the interaction between perf and mprotect(). He has plans for adding other fuzzing techniques and expects that this is just the first bug that will be found. "So I started exploring the idea of writing a tool that instead of passing random junk, actually passed semi sensible data. If the first thing a syscall does is check if a value is between 0 and 3, then passing rand() % 3 is going to get us further into the function than it would if we had just passed rand() unmasked. There are a bunch of other things that can be done too. If a syscall expects a file descriptor, pass one. If it expects an address of a structure, pass it realistic looking addresses (kernel addresses, userspace addresses, 'weird' looking addresses)."
New vulnerabilities
flash-player: multiple vulnerabilities
Package(s): flash-player
CVE #(s): CVE-2010-3636 CVE-2010-3637 CVE-2010-3638 CVE-2010-3639 CVE-2010-3640 CVE-2010-3641 CVE-2010-3642 CVE-2010-3643 CVE-2010-3644 CVE-2010-3645 CVE-2010-3646 CVE-2010-3647 CVE-2010-3648 CVE-2010-3649 CVE-2010-3650 CVE-2010-3651 CVE-2010-3652 CVE-2010-3654 CVE-2010-3976
Created: November 5, 2010
Updated: January 21, 2011
Description: From the Adobe security advisory: This vulnerability (CVE-2010-3654) could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild against Adobe Reader and Acrobat 9.x. Adobe is not currently aware of attacks targeting Adobe Flash Player.
From the Adobe security bulletin: Critical vulnerabilities have been identified in Adobe Flash Player 10.1.85.3 and earlier versions for Windows, Macintosh, Linux, and Solaris, and Adobe Flash Player 10.1.95.1 for Android. These vulnerabilities, including CVE-2010-3654 referenced in Security Advisory APSA10-05, could cause the application to crash and could potentially allow an attacker to take control of the affected system.
freetype: multiple vulnerabilities
Package(s): freetype
CVE #(s): CVE-2010-3814 CVE-2010-3855
Created: November 4, 2010
Updated: April 19, 2011
Description: From the Ubuntu advisory: Chris Evans discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted TrueType file, a remote attacker could cause FreeType to crash or possibly execute arbitrary code with user privileges. This issue only affected Ubuntu 8.04 LTS, 9.10, 10.04 LTS and 10.10. (CVE-2010-3814)
It was discovered that FreeType did not correctly handle certain malformed TrueType font files. If a user were tricked into using a specially crafted TrueType file, a remote attacker could cause FreeType to crash or possibly execute arbitrary code with user privileges. (CVE-2010-3855)
horde: cross-site scripting
Package(s): horde
CVE #(s): CVE-2010-3077 CVE-2010-3694
Created: November 5, 2010
Updated: July 18, 2011
Description: From the Red Hat bugzilla: a deficiency in the way Horde framework sanitized user-provided 'subdir' parameter, when composing final path to the image file. A remote, unauthenticated user could use this flaw to conduct cross-site scripting attacks (execute arbitrary HTML or scripting code) by providing a specially-crafted URL to the running Horde framework instance.
isc-dhcp: denial of service
Package(s): isc-dhcp
CVE #(s): CVE-2010-3611
Created: November 9, 2010    Updated: April 13, 2011
Description: From the CVE entry: ISC DHCP server 4.0 before 4.0.2, 4.1 before 4.1.2, and 4.2 before 4.2.0-P1 allows remote attackers to cause a denial of service (crash) via a DHCPv6 packet containing a Relay-Forward message without an address in the Relay-Forward link-address field.
libmbfl: information disclosure
Package(s): libmbfl
CVE #(s): CVE-2010-4156
Created: November 10, 2010    Updated: April 15, 2011
Description: The libmbfl mb_strcut() function can be made to return uninitialized data via an excessive length parameter; see this bug entry for details.
libvpx: code execution
Package(s): libvpx
CVE #(s): CVE-2010-4203
Created: November 10, 2010    Updated: January 17, 2011
Description: The libvpx library fails to properly perform bounds checking, leading to a crash or possible code execution vulnerability exploitable via a specially crafted WebM file.
monotone: remote denial of service
Package(s): monotone
CVE #(s): CVE-2010-4098
Created: November 8, 2010    Updated: November 16, 2010
Description: From the CVE entry: monotone before 0.48.1, when configured to allow remote commands, allows remote attackers to cause a denial of service (crash) via an empty argument to the mtn command.
mysql: denial of service
Package(s): mysql
CVE #(s): CVE-2010-3840
Created: November 4, 2010    Updated: July 19, 2011
Description: From the Red Hat advisory: It was found that the MySQL PolyFromWKB() function did not sanity check Well-Known Binary (WKB) data. A remote, authenticated attacker could use specially-crafted WKB data to crash mysqld. This issue only caused a temporary denial of service, as mysqld was automatically restarted after the crash. (CVE-2010-3840)
mysql: multiple vulnerabilities
Package(s): mysql
CVE #(s): CVE-2010-3833 CVE-2010-3835 CVE-2010-3836 CVE-2010-3837 CVE-2010-3838 CVE-2010-3839
Created: November 4, 2010    Updated: July 19, 2011
Description: From the Red Hat advisory:
A flaw was found in the way MySQL processed certain JOIN queries. If a stored procedure contained JOIN queries, and that procedure was executed twice in sequence, it could cause an infinite loop, leading to excessive CPU use (up to 100%). A remote, authenticated attacker could use this flaw to cause a denial of service. (CVE-2010-3839)
A flaw was found in the way MySQL processed queries that provide a mixture of numeric and longblob data types to the LEAST or GREATEST function. A remote, authenticated attacker could use this flaw to crash mysqld. This issue only caused a temporary denial of service, as mysqld was automatically restarted after the crash. (CVE-2010-3838)
A flaw was found in the way MySQL processed PREPARE statements containing both GROUP_CONCAT and the WITH ROLLUP modifier. A remote, authenticated attacker could use this flaw to crash mysqld. This issue only caused a temporary denial of service, as mysqld was automatically restarted after the crash. (CVE-2010-3837)
It was found that MySQL did not properly pre-evaluate LIKE arguments in view prepare mode. A remote, authenticated attacker could possibly use this flaw to crash mysqld. (CVE-2010-3836)
A flaw was found in the way MySQL processed statements that assign a value to a user-defined variable and that also contain a logical value evaluation. A remote, authenticated attacker could use this flaw to crash mysqld. This issue only caused a temporary denial of service, as mysqld was automatically restarted after the crash. (CVE-2010-3835)
A flaw was found in the way MySQL evaluated the arguments of extreme-value functions, such as LEAST and GREATEST. A remote, authenticated attacker could use this flaw to crash mysqld. This issue only caused a temporary denial of service, as mysqld was automatically restarted after the crash. (CVE-2010-3833)
php: cross-site scripting
Package(s): php
CVE #(s): CVE-2010-3870
Created: November 10, 2010    Updated: March 21, 2011
Description: A decoding error in xml_utf8_decode() leads to a cross-site scripting vulnerability in PHP applications; see this bug entry for more information.
pootle: cross-site scripting
Package(s): pootle
CVE #(s):
Created: November 9, 2010    Updated: November 10, 2010
Description: From the Red Hat bugzilla: Pootle allows XSS on the match_names parameter when searching for matching check failures.
pyftpdlib: multiple vulnerabilities
Package(s): pyftpdlib
CVE #(s): CVE-2009-5011 CVE-2009-5012 CVE-2009-5013 CVE-2010-3494
Created: November 5, 2010    Updated: November 10, 2010
Description: From the CVE entries:
Race condition in the FTPHandler class in ftpserver.py in pyftpdlib before 0.5.2 allows remote attackers to cause a denial of service (daemon outage) by establishing and then immediately closing a TCP connection, leading to the getpeername function having an ENOTCONN error, a different vulnerability than CVE-2010-3494. (CVE-2009-5011)
ftpserver.py in pyftpdlib before 0.5.2 does not require the l permission for the MLST command, which allows remote authenticated users to bypass intended access restrictions and list the root directory via an FTP session. (CVE-2009-5012)
Memory leak in the on_dtp_close function in ftpserver.py in pyftpdlib before 0.5.2 allows remote authenticated users to cause a denial of service (memory consumption) by sending a QUIT command during a data transfer. (CVE-2009-5013)
Race condition in the FTPHandler class in ftpserver.py in pyftpdlib before 0.5.2 allows remote attackers to cause a denial of service (daemon outage) by establishing and then immediately closing a TCP connection, leading to the accept function having an unexpected value of None for the address, or an ECONNABORTED, EAGAIN, or EWOULDBLOCK error, a related issue to CVE-2010-3492. (CVE-2010-3494)
qt: unknown impact
Package(s): qt
CVE #(s): CVE-2010-1822
Created: November 4, 2010    Updated: January 25, 2011
Description: From the Red Hat bugzilla entry: WebKit, as used in Google Chrome before 6.0.472.62, does not properly perform a cast of an unspecified variable, which allows remote attackers to have an unknown impact via a malformed SVG document.
xcftools: code execution
Package(s): gnome-xcf-thumbnailer
CVE #(s): CVE-2009-2175
Created: November 9, 2010    Updated: November 10, 2010
Description: From the CVE entry: Stack-based buffer overflow in the flattenIncrementally function in flatten.c in xcftools 1.0.4, as reachable from the (1) xcf2pnm and (2) xcf2png utilities, allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via a crafted image that causes a conversion to a location "above or to the left of the canvas." NOTE: some of these details are obtained from third party information.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel remains 2.6.37-rc1; no new prepatches have been released over the last week. The merge rate has also been low, with only 148 non-merge changesets merged since 2.6.37-rc1 as of this writing.

Stable updates: there have been no stable updates released in the last week.
Quotes of the week
What should be your goal? Privilege escalation? That's impossible, there's no such thing as a privilege escalation vulnerability on Linux. Denial of service? What are you, some kind of script kiddie? No, the answer is obvious. You must read the uninitialized bytes of the kernel stack, since these bytes contain all the secrets of the universe and the meaning of life.
Embedded Linux Flag Version
As a result of discussions held at two recent embedded Linux summits (and reported back to the recent Kernel Summit), the community has decided to identify specific kernel versions as "flag versions" to try to reduce "version fragmentation". On the linux-embedded mailing list, Tim Bird (architecture group chair for the CE Linux Forum) has announced that 2.6.35 will be the first embedded flag version, and it will be supported by (at least) Sony, Google, MeeGo, and Linaro. "First, it should be explained what having a flag version means. It means that suppliers and vendors throughout the embedded industry will be encouraged to use a particular version of the kernel for software development, integration and testing. Also, industry and community developers agree to work together to maintain a long-term stable branch of the flag version of the kernel (until the next flag version is declared), in an effort to share costs and improve stability and quality."
FSFLA: Linux kernel is "open core"
The Free Software Foundation Latin America has released a version of the 2.6.36 kernel with offending firmware (or drivers that need that firmware) stripped out. They also are trying to tap into the ongoing discussion of "open core" business models. "Sad to say, Linux fits the definition of Free Bait or Open Core. Many believe that Linux is Free Software or Open Source, but it isn't. Indeed, the Linux-2.6.36 distribution published by Mr. Torvalds contains sourceless code under such restrictive licensing terms as 'This material is licensed to you strictly for use in conjunction with the use of COPS LocalTalk adapters', presented as a list of numbers in the corresponding driver, and 'This firmware may not be modified and may only be used with Keyspan hardware' and 'Derived from proprietary unpublished source code, Copyright Broadcom' in the firmware subdirectory, just to name a few examples."
Linaro 10.11 released
The Linaro project has announced the release of Linaro 10.11. "10.11 is the first public release that brings together the huge amount of engineering effort that has occurred within Linaro over the past 6 months. In addition to officially supporting the TI OMAP3 (Beagle Board and Beagle Board XM) and ARM Versatile Express platforms, the images have been tested and verified on a total of 7 different platforms including TI OMAP4 Panda Board, IGEPv2, Freescale iMX51 and ST-E U8500."
Netoops
A kernel oops produces a fair amount of data which can be useful in tracking down the source of whatever went wrong. But that data is only useful if it can be captured and examined by somebody who knows how to interpret it. Capturing oops output can be hard; it typically will not make it to any logfiles in persistent storage. That's why we still see oops output posted in the form of a photograph taken of the monitor. Using cameras as a debugging tool can work for a desktop system, but it certainly does not scale to a data center containing thousands of systems. Google is thought to operate a site or two meeting that description, so it's not surprising to see an interest in better management of oops information there.

Google has had its own oops collection tool running internally for years; that has recently been posted for merging as netoops. Essentially, netoops is a simple driver which will, in response to a kernel oops, collect the most recent kernel logs and deliver them to a server across the net. The functionality seems useful, but the first version of the patch was questioned: netoops looks somewhat similar to the existing netconsole system, so it wasn't clear that a need for it exists. Why not just add any missing features to netconsole?
Mike Waychison, who posted the patch, responded with a number of reasons which have since found their way into the changelog. Netoops only sends data on an oops, so it is less hard on network bandwidth. The data is packaged in a more structured manner which is easier for machines and people to parse; that has enabled the creation of a vast internal "oops database" at Google. Netoops can cut off output after the first oops, once again saving bandwidth. And so on. There are enough differences that netconsole maintainer Matt Mackall agreed that it made sense for netoops to go in as a separate feature.
That said, there is clear scope for sharing some code between the two and, perhaps, improving netconsole in the process. The current version of the netoops patch includes new work to bring about that sharing. There seems to be no further opposition, but it's worth noting that Mike, in the patch changelog, notes that he's not entirely happy with either the user-space ABI or the data format. So this might be a good time for others interested in this sort of functionality to have a look and offer their suggestions and/or patches.
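Whatever form the kernel side finally takes, the receiving end of such a scheme can be quite small. The sketch below is not Google's collector and assumes nothing about the actual netoops packet format beyond "log text arrives in UDP datagrams" (the port number is an arbitrary choice); it simply appends whatever arrives to a file named after the sending machine.

    /*
     * Minimal sketch of an oops-collection server: listen on a UDP port
     * and append each datagram to a per-sender log file.  Illustration
     * only; no assumptions about the real netoops packet format.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT 6666       /* arbitrary; pick whatever the sender uses */
    #define BUFSZ 8192

    int main(void)
    {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock < 0) {
                    perror("socket");
                    return 1;
            }

            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(PORT),
                    .sin_addr.s_addr = htonl(INADDR_ANY),
            };
            if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    perror("bind");
                    return 1;
            }

            char buf[BUFSZ];
            for (;;) {
                    struct sockaddr_in peer;
                    socklen_t plen = sizeof(peer);
                    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                         (struct sockaddr *)&peer, &plen);
                    if (n <= 0)
                            continue;

                    /* One log file per reporting machine. */
                    char path[64];
                    snprintf(path, sizeof(path), "oops-%s.log",
                             inet_ntoa(peer.sin_addr));

                    FILE *f = fopen(path, "a");
                    if (f) {
                            fwrite(buf, 1, n, f);
                            fclose(f);
                    }
            }
    }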
Kernel development news
Checkpoint/restart: it's complicated
At the recent Kernel Summit checkpoint/restart discussion, developer Oren Laadan was asked to submit a trimmed-down version of the patch which would just show the modifications to existing core kernel code. Oren duly responded with a "naked patch" which, as one might have expected, kicked off a new round of discussion. What many observers may not have expected was the appearance of an alternative approach to the problem which has seemingly been under development for years. Now we have two clearly different ways of solving this problem but no apparent increase in clarity; the checkpoint/restart problem, it seems, is simply complicated.

The responses to Oren's patch will not have been surprising to anybody who has been following the discussion. Kernel developers are nervous about the broad range of core code which is changed by this patch. They don't like the idea of spreading serialization hooks around the kernel which, the authors' claims to the contrary notwithstanding, look like they could be a significant maintenance burden over time. It is clear that kernel checkpoint/restart can never handle all processes; kernel developers wonder where the real-world limits are and how useful the capability will be in the end. The idea of moving checkpointed processes between kernel versions by rewriting the checkpoint image with a user-space tool causes kernel hackers to shiver. And so on; none of these worries are new.
Tejun Heo raised all these issues and more. He also called out an interesting alternative checkpoint/restart implementation called DMTCP, which solves the problem entirely in user space. With DMTCP in mind, Tejun concluded:
As one might imagine, this post was followed by an extended conversation between the in-kernel checkpoint/restart developers and the DMTCP developers, who had previously not put in an appearance on the kernel mailing lists. It seems that the two projects were each surprised to learn of the other's existence.
The idea behind DMTCP is to checkpoint a distributed set of processes without any special support from the kernel. Doing so requires support from the processes themselves; a checkpointing tool is injected into their address spaces using the LD_PRELOAD mechanism. DMTCP is able to checkpoint (and, importantly, restart) a wide variety of programs, including those running in the Python or Perl interpreters and those using GNU Screen. DMTCP is also used to support the universal reversible debugger project. It is, in other words, a capable tool with real-world uses.
Kernel developers naturally like the idea of eliminating a bunch of in-kernel complexity and solving a problem in user space, where things are always simpler. The only problem is that, in this case, it's not necessarily simpler. There is a surprising amount that DMTCP can do with the available interfaces, but there are also some real obstacles. Quite a bit of information about a process's history is not readily available from user space, but that history is often needed for checkpoint/restart; consider tracking whether two file descriptors are shared as the result of a fork() call or not. To keep the requisite information around, DMTCP must place wrappers around a number of system calls. Those wrappers interpose significant new functionality and may change semantics in unpredictable ways.
Pipes are hard for DMTCP to handle, so the pipe() wrapper has to turn them into full Unix-domain sockets. There is also an interesting dance required to get those sockets into the proper state at restart time. The handling of signals - not always straightforward even in the simplest of applications - is made more complicated by DMTCP, which also must reserve one signal (SIGUSR2 by default) for its own uses. The system call wrappers try to hide that signal handler from the application; there is also the little problem that signals which are pending at checkpoint time may be lost. Checkpointing will interrupt system calls, leading to unexpected EINTR returns; the wrappers try to compensate by automatically redoing the call when this happens. A second VDSO page must be introduced into a restarted process because it's not possible to control where the kernel places that page. There's a "virtual PID" layer which tries to fool restarted processes into thinking that they are still running with the same process ID they had when they were checkpointed.
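To get a feel for what that interposition looks like, consider a minimal LD_PRELOAD wrapper in the style described above; this is an illustrative sketch, not DMTCP's actual code. It wraps read() and transparently retries calls that come back with EINTR, so an application interrupted by a checkpoint never sees the error.

    /*
     * Minimal LD_PRELOAD interposition sketch (not DMTCP's code).
     *
     * Build:  gcc -shared -fPIC -o wrapper.so wrapper.c -ldl
     * Run:    LD_PRELOAD=./wrapper.so ./some-application
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <unistd.h>

    ssize_t read(int fd, void *buf, size_t count)
    {
            static ssize_t (*real_read)(int, void *, size_t);
            ssize_t ret;

            /* Look up the C library's real read() on first use. */
            if (!real_read)
                    real_read = (ssize_t (*)(int, void *, size_t))
                            dlsym(RTLD_NEXT, "read");

            do {
                    ret = real_read(fd, buf, count);
                    /*
                     * A checkpoint interrupts in-progress system calls;
                     * redo the call so the application does not have to
                     * cope with an EINTR it never expected.  (A real
                     * implementation would also record enough state about
                     * fd to recreate it at restart time.)
                     */
            } while (ret == -1 && errno == EINTR);

            return ret;
    }

As the article notes, even this simple trick changes semantics: an application that deliberately relies on EINTR from a signal of its own would now see it swallowed.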
There is an interesting plan for restarting programs which have a connection to an X server: they will wrap Xlib (not a small interface) and use those wrappers to obtain the state of the window(s) maintained by the application. That state can then be recreated at restart time before reconnecting the application with the server. Meanwhile, applications talking to an xterm are forced to reinitialize themselves at restart time by sending two SIGWINCH signals to them. And so on.
Given all of that, it is not surprising that the kernel checkpoint/restart developers see their approach as being a simpler, more robust, and more general solution to the problem. To them, DMTCP looks like a shaky attempt to reimplement a great deal of kernel functionality in user space. Matt Helsley summarized it this way:
In contrast, kernel-based cr is rather straight forward when you bother to read the patches. It doesn't require using combinations of obscure userspace interfaces to intercept and emulate those very same interfaces. It doesn't add a scattered set of new ABIs.
Seasoned LWN readers will be shocked to learn that few minds appear to have been changed by this discussion. Most developers seem to agree that some sort of checkpoint/restart functionality would be a useful addition to Linux, but they differ on how it should be done. Some see a kernel-side implementation as the only way to get even close to a full solution to the problem and as the simplest and most maintainable option. Others think that the user-space approach makes more sense, and that, if necessary, a small number of system calls can be added to simplify the implementation. It has the look of the sort of standoff that can keep a project like this out of the kernel indefinitely.
That said, something interesting may happen here. One thing that became reasonably clear in the discussion is that a complete, performant, and robust checkpoint/restart implementation will almost certainly require components in both kernel and user space. And it seems that the developers behind the two implementations will be getting together to talk about the problem in a less public setting. With luck, determination, and enough beer, they might just figure out a way to solve the problem using the best parts of both approaches. That would be a worthy outcome by any measure.
ELCE: Grant Likely on device trees
Device trees are a fairly hot topic in the embedded Linux world as a means to more easily support multiple system-on-chip (SoC) devices with a single kernel image. Much of the work implementing device trees for the PowerPC architecture, as well as making that code more generic so that others could use it, has been done by Grant Likely. He spoke at the recent Embedded Linux Conference Europe (ELCE) to explain what device trees are, what they can do, and to update the attendees on efforts to allow the ARM architecture to use them.
All of the work that is going into adding device tree support for various architectures is not being done for an immediate benefit to users, Likely said. It is, instead, being done to make it easier to manage embedded Linux distributions, while simplifying the boot process. It will also make it easier to port devices (i.e. components and "IP blocks") to different SoCs. But it is "not going to make your Android phone faster".
A device tree is just a data structure that came from OpenFirmware. It represents the devices that are part of a particular system, such that it can be passed to the kernel at boot time, and the kernel can initialize and use those devices. For architectures that don't use device trees, C code must be written to add all of the different devices that are present in the hardware. Unlike desktop and server systems, many embedded SoCs do not provide a way to enumerate their devices at boot time. That means developers have to hardcode the devices, their addresses, interrupts, and so on, into the kernel.
The requirement to put all of the device definitions into C code is hard to manage, Likely said. Each different SoC variant has to have its own, slightly tweaked kernel version. In addition, the full configuration of the device is scattered over multiple C files, rather than kept in a single place. Device trees can change all of that.
A device tree consists of a set of nodes with properties, which are simple key-value pairs. The nodes are organized into a tree structure, unsurprisingly, and the property values can store arbitrary data types. In addition, there are some standard usage conventions for properties so that they can be reused in various ways. The most important of these is the compatible property that uniquely defines devices, but there are also conventions for specifying address ranges, IRQs, GPIOs, and so forth.
Likely used a simplified example from devicetree.org to show what these trees look like. They are defined with an essentially C-like syntax:
/ {
    compatible = "acme,coyotes-revenge";

    cpus {
        cpu@0 {
            compatible = "arm,cortex-a9";
        };
        cpu@1 {
            compatible = "arm,cortex-a9";
        };
    };

    serial@101F0000 {
        compatible = "arm,pl011";
    };
    ...
    external-bus {
        ethernet@0,0 {
            compatible = "smc,smc91c111";
        };

        i2c@1,0 {
            compatible = "acme,a1234-i2c-bus";
            rtc@58 {
                compatible = "maxim,ds1338";
            };
        };
        ...
    };
};
The compatible tags allow companies to define their own namespace
("acme", "arm", "smc", and "maxim" in the example) that they can manage
however they like.
The kernel already knows how to attach an ethernet device to a local bus or
a temperature sensor to an i2c bus, so why redo it in C for every
different SoC, he asked. By parsing the
device tree (or the binary "flattened" device tree), the kernel can set up the
device bindings that it finds in the tree.
One of the questions that he often gets asked is: "why bother changing what we already have?" That is a "hard question to answer" in some ways, because for a lot of situations, what we have in the kernel currently does work. But in order to support large numbers of SoCs with a single kernel (or perhaps a small set of kernels), something like device tree is required. Both Google (for Android) and Canonical (for Linaro) are very interested in seeing device tree support for ARM.
Beyond that, "going data-driven to describe our platforms is the
right thing to do
". There is proof that it works in the x86 world
as "that's how it's been done for a long time
". PowerPC
converted to device trees five years ago or so and it works well. There
may be architectures that won't need to support multiple devices
with a single kernel, and device trees may not be the right choice for
those, but for most of the architectures that Linux supports, Likely
clearly thinks that device trees are the right solution.
He next looked at what device trees aren't. They don't replace board-specific code, and developers will "still have to write drivers for weird stuff". Instead, device trees simplify the common case. Device tree is also not a boot architecture; it's "just a data structure". Ideally, the firmware will pass a device tree to the kernel at boot time, but it doesn't have to be done that way. The device tree could be included in the kernel image. There are plenty of devices with firmware that doesn't know about device trees, Likely said, and they won't have to.
There is currently a push to get ARM devices into servers, as they can provide lots of cores at low power usage. In order to facilitate that, there needs to be one CD that can boot any of those servers, like it is in the x86 world. Device trees are what will be used to make that happen, Likely said.
Firmware that does support device trees will obtain a .dtb (i.e. flattened device tree binary) file from somewhere in memory, and either pass it verbatim to the kernel or modify it before passing it on. Another option would be for the firmware to create the .dtb on the fly, which is what OpenFirmware does, but that is a "dangerous" option. It is much easier to change the kernel than the firmware, so any bugs in the firmware's .dtb creation code will inevitably be worked around in the kernel. In any case, the kernel doesn't care how the .dtb is created.
For ARM, the plan is to pass a device tree, rather than the existing, rather inflexible ARM device configuration known as ATAGs. The kernel will set up the memory for the processor and unflatten the .dtb into memory. It will unpack it into a "live tree" that can then be directly dereferenced and used by the kernel to register devices.
The Linux device model is also tree-based, and there is some congruence between device tree and the device model, but there is not a direct 1-to-1 mapping between them. That was done "quite deliberately", as the design goal was "not to describe what Linux wants"; instead it was meant to describe the hardware. Over time, the Linux device model will change, so hardcoding Linux-specific values into the device tree has been avoided. The device tree is meant to be used as support data, and the devices it describes get registered using the Linux device model.
Device drivers will match compatible property values with device nodes in a device tree. It is the driver that will determine how to configure the device based on its description in a device tree. None of that configuration code lives in the device tree handling, it is part of the drivers which can then be built as loadable kernel modules.
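In practice that matching is done with a table of compatible strings in the driver. The sketch below reuses the "acme,a1234-i2c-bus" string from the example tree above; the driver name and probe body are hypothetical, and the exact place where the match table is attached has shifted somewhat between kernel versions.

    /*
     * Sketch of a driver matching a device tree node by its "compatible"
     * property.  The compatible string comes from the example tree above;
     * everything else here is hypothetical.
     */
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int acme_i2c_probe(struct platform_device *pdev)
    {
            /*
             * Configuration -- addresses, interrupts, clock rates -- is
             * read here from the matched node (pdev->dev.of_node) instead
             * of from hardcoded board files.
             */
            return 0;
    }

    static const struct of_device_id acme_i2c_of_match[] = {
            { .compatible = "acme,a1234-i2c-bus" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, acme_i2c_of_match);

    static struct platform_driver acme_i2c_driver = {
            .probe = acme_i2c_probe,
            .driver = {
                    .name = "acme-i2c",
                    .of_match_table = acme_i2c_of_match,
            },
    };

    static int __init acme_i2c_init(void)
    {
            return platform_driver_register(&acme_i2c_driver);
    }
    module_init(acme_i2c_init);

    static void __exit acme_i2c_exit(void)
    {
            platform_driver_unregister(&acme_i2c_driver);
    }
    module_exit(acme_i2c_exit);

    MODULE_LICENSE("GPL");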
Over the last year, Likely has spent a lot of time making the device tree support be generic. Previously, there were three separate copies of much of the support code (for Microblaze, SPARC, and PowerPC). He has removed any endian dependencies so that any architecture can use device trees. Most of that work is now done and in the mainline. There is some minimal board support that has not yet been mainlined. The MIPS architecture has added device tree support as of 2.6.37-rc1 and x86 was close to getting it for 2.6.37, but some last minute changes caused the x86 device tree support to be held back until 2.6.38.
The ARM architecture still doesn't have device tree support, and ARM maintainer Russell King is "nervous about merging an unmaintainable mess". King is taking a wait-and-see approach until a real ARM board has device tree support. Likely agreed with that approach, and ELCE provided an opportunity for him and King to sit down and discuss the issue. In the next six months or so (2.6.39 or 2.6.40), Likely expects that the board support will be completed, and he seems confident that ARM device tree support in the mainline won't be far behind.
There are other tasks to complete in addition to the board support, of course, with documentation being high on that list. There is a need for documentation on how to use device trees, and on the property conventions that are being used. The devicetree.org wiki is a gathering point for much of that work.
There were several audience questions that Likely addressed, including the suitability of device tree for Video4Linux (very suitable and the compatible property gives each device manufacturer its own namespace), the performance impact (no complaints, though he hasn't profiled it — device trees are typically 4-8K in size, which should minimize their impact), and licensing or patent issues (none known so far, the code is under a BSD license so it can be used by proprietary vendors — IBM's lawyers don't seem concerned). Overall, both Likely and the audience seemed very optimistic about the future for device trees in general and specifically for their future application in the ARM architecture.
A more detailed look at kernel regressions
The number of kernel regressions over time is one measure of the overall quality of the kernel. Over the last few years, Rafael Wysocki has taken on the task of tracking those regressions and regularly reporting on them to the linux-kernel mailing list. In addition, he has presented a "regressions report" at the last few Kernel Summits [2010, 2009, and 2008]. As part of his preparation for this year's talk, Wysocki wrote a paper, Tracking of Linux Kernel Regressions [PDF], that digs in deeply and explains the process of Linux regression tracking, along with various trends in regressions over time. This article is an attempt to summarize that work.
A regression is a user-visible change in the behavior of the kernel between two releases. A program that was working on one kernel version and suddenly stops working on a newer version has detected a kernel regression. Regressions are probably the most annoying kind of bug that crops up in the kernel development process, as well as one of the most visible. In addition, Linus Torvalds has decreed that regressions may not be intentionally introduced—to fix a perceived kernel shortcoming for example—and that fixing inadvertent regressions should be a high priority for the kernel developers.
There is another good reason to concentrate on fixing any regressions: if you don't, you really have no assurance that the overall quality of the code is increasing, or at least staying the same. If things that are currently working continue to work in the future, there is a level of comfort that the bug situation is, at least, not getting worse.
Regression tracking process
To that end, various efforts have been made to track kernel regressions, starting with Adrian Bunk in 2007 (around 2.6.20), through Michał Piotrowski, and then to Wysocki during the 2.6.23 development cycle. For several years, Wysocki handled the regression tracking himself, but it is now a three-person operation, with Maciej Rutecki turning email regression reports into kernel bugzilla entries, and Florian Mickler maintaining the regression entries: marking those that have been fixed, working with the reporters to determine which have been fixed, and so on.
The kernel bugzilla is used to track the regression meta-information as well as the individual bugs. Each kernel release has a bugzilla entry that tracks all of the individual regressions that apply to it. So, bug #16444 tracks the regressions reported against the 2.6.35 kernel release. Each individual regression is listed in the "Depends on" field in the meta-bug, so that a quick look will show all of the bugs, and which have been closed.
There is another meta-bug, bug #15790, that tracks all of the release-specific meta-bugs. So, that bug depends on #16444 for 2.6.35, as well as #21782 for 2.6.36, #15310 for 2.6.33, and so on. Those bugs are used by the scripts that Wysocki runs to generate the "list of known regressions" which gets posted to linux-kernel after each -rc release.
Regressions are added to bugzilla one week after they are reported by email, if they haven't been fixed in the interim. That is a change from earlier practice, intended to save Rutecki's time as well as to reduce unhelpful noise. Bugzilla entries are linked to fixes as they become available. The bug state is changed to "resolved" once a patch is available and "closed" once Torvalds merges the fix into the mainline.
Regressions for a particular kernel release are tracked through the following two development cycles. For example, when 2.6.36 was released, the tracking of 2.6.34 regressions ended. When 2.6.37-rc1 was released, that began the tracking for 2.6.36, and once 2.6.37 is released in early 2011, tracking of 2.6.35 regressions will cease. That doesn't mean that any remaining regressions have magically been fixed, of course, and they can still be tracked using the meta-bug associated with a release.
Regression statistics
To look at the historical regression data, Wysocki compiled a table that listed the number of regressions reported for each of the last ten kernel releases as well as the number that are still pending (i.e. have not been closed). For the table, he has removed invalid and duplicate reports from those listed in bugzilla. It should also be noted that after 2.6.32, the methodology for adding new regressions changed such that those that were fixed in the first week after being reported were not added to bugzilla. That at least partially explains the drop in reports after 2.6.32.
    Kernel    # reports    # pending
    2.6.26       180            1
    2.6.27       144            4
    2.6.28       160           10
    2.6.29       136           12
    2.6.30       177           21
    2.6.31       146           20
    2.6.32       133           28
    2.6.33       116           18
    2.6.34       119           15
    2.6.35        63           28
    Total       1374          157

    Reported and pending regressions
The number of "pending" regressions reflects the bugs that have been fixed since the release, not just those that were fixed during the two-development-cycle tracking period. In order to look more closely at what happens during the tracking period, Wysocki provides another table. That table separates the two most important events during the tracking period, which are the releases of the subsequent kernel versions (i.e. for 2.6.N, the releases of N+1 and N+2).
For example, once the 2.6.35 kernel was released, that ended the period where the development focus was on fixing regressions in 2.6.34. At that point, the merge window for 2.6.36 opened and developers switched their focus to adding new features for the next release. Furthermore, once 2.6.36 was released, regressions were no longer tracked at all for 2.6.34. That is reflected in the following table where the first "reports" and "pending" columns correspond to the N+1 kernel release, and the second to the N+2 release.
    Kernel    # reports (N+1)    # pending (N+1)    # reports (N+2)    # pending (N+2)
    2.6.30         122                 36                170                 45
    2.6.31          89                 31                145                 42
    2.6.32         101                 36                131                 45
    2.6.33          74                 33                114                 27
    2.6.34          87                 31                119                 21
    2.6.35          61                 28                  -                  -

    Reported and pending regressions (separated by release)
The table shows that the number of regressions still goes up fairly substantially after the release of the next (N+1) kernel. This indicates that the -rc kernels may not be getting as much testing as the released kernel does. In addition, the pending counts are substantially higher at the N+2 kernel release, at least in the 2.6.30-32 timeframe. Had that trend continued, it could be argued that the kernel developers were paying less attention to regressions in a particular release once the next release was out. But the 2.6.33-34 numbers are fairly substantially down after the N+2 release, and Wysocki says that there are indications that 2.6.35 is continuing that trend.
Reporting and fixing regressions
We can look at the number of outstanding regressions over time in one of the graphs from Wysocki's paper. For each kernel release, there are generally two peaks that indicate where the number of open regressions is highest. These roughly correspond with the end of the merge window and the release date for the next kernel version. Once past those maximums, the graphs tend to level out.
There are abrupt jumps in the number of regressions that are probably an artifact of how the reporting is done. Email reports are generally batched up, with multiple reports being added at roughly the same time. Maintenance on the bugs can happen in much the same way, which results in multiple regressions closed in a short period of time. That leads to a much more jagged graph, with sharper peaks.
In the paper, Wysocki did some curve fitting for the 2.6.33 and 2.6.34 releases that corresponded reasonably well with the observed data. He noted that the incomplete 2.6.35 curve was anomalous in that it didn't have a sharp maximum and seemed to plateau, rather than drop off. He attributes that to the shortened merge window for 2.6.37 along with the Kernel Summit and Linux Plumbers Conference impacting the testing and debugging of the current development kernels. Nevertheless, he used the same curve fitting equations on the 2.6.35 data to derive a "prediction" that it would end up with slightly more regressions than .33 and .34, but still less than 30. It will be interesting to see if that is borne out in practice.
Regression lifetime
The lifetime of regressions is another area that Wysocki addresses. One of his graphs is reproduced above and shows the cumulative number of regressions whose lifetime is less than the number of days on the x-axis. He separates the regressions into two sets, those from kernel 2.6.26-30 and from 2.6.30-35. In both cases, the curves follow that of radioactive decay, which allows for the derivation of the half-life for a set of kernel regressions: roughly 17 days.
The graph for 2.6.30-35 is obviously lower than that of the earlier kernels, which Wysocki attributes to the change in methodology that occurred in the 2.6.32 timeframe. Because there are fewer short-lived (i.e. less than a week) regressions tracked, that will lead to a higher average regression lifetime. The average for the earlier kernels is 24.4 days, while the later kernels have an average of 32.3 days. Wysocki posits that the average really hasn't changed and that 24.5 days is a reasonable number to use as an average lifetime for regressions over the past two years or so.
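Those two figures hang together under the simple exponential-decay model that the radioactive analogy suggests; this is a back-of-the-envelope consistency check rather than a calculation from the paper, but for an exponential distribution the mean lifetime is just the half-life divided by ln 2:

    N(t) = N0 * exp(-t / tau)
    t_half = tau * ln 2   =>   tau = t_half / ln 2 = 17 days / 0.693 ~= 24.5 days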
Regressions by subsystem
Certain kernel subsystems have been more prone to regressions than others over the last few releases, as is shown in a pair of tables from Wysocki's paper. He cautions that it is somewhat difficult to accurately place regressions into a particular category, as they may be incorrectly assigned in bugzilla. There are also murky boundaries between some of the categories, with power management (PM) being used as an example. Bugs that clearly fall into the PM core, or those that are PM-related but the root cause is unknown, get assigned to the PM category, while bugs in a driver's suspend/resume code get assigned to the category of the driver. Wysocki notes that these numbers should be used as a rough guide to where regressions are being found, rather than as an absolute and completely accurate measure.
    Category       2.6.32   2.6.33   2.6.34   2.6.35   Total
    DRI (Intel)      20        7       10       12       49
    x86               9       13       21        6       49
    Filesystems       7       12        8        8       35
    DRI (other)      10        7       10        5       32
    Network          12        8        6        4       30
    Wireless          6        6       11        4       27
    Sound             8        9        4        2       23
    ACPI              7        9        3        2       21
    SCSI & ATA        4        2        2        2       10
    MM                2        3        4        0        9
    PCI               3        4        1        1        9
    Block             2        1        3        2        8
    USB               3        0        0        3        6
    PM                4        2        0        0        6
    Video4Linux       1        3        1        0        5
    Other            35       30       35       12      112

    Reported regressions by category
The Intel DRI driver and x86 categories are by far the largest source of regressions, but there are a number of possible reasons for that. The Intel PC ecosystem is both complex, with many different variations of hardware, and well-tested because there are so many of those systems in use. Other architectures may not be getting the same level of testing, especially during the -rc phase.
It is also clear from the table that those subsystems that are "closer" to the hardware tend to have more regressions. The eight rows with 20 or more total regressions—excepting filesystems and networking to some extent—are all closely tied to hardware. Those kinds of regressions tend to be easier to spot because they cause the hardware to fail, unlike regressions in the scheduler or memory management code, for example, which are often more subtle.
    Category       2.6.32   2.6.33   2.6.34   2.6.35   Total
    DRI (Intel)       1        2        2        5       10
    x86               2        2        3        2        9
    DRI (other)       1        3        2        3        9
    Sound             5        2        0        1        8
    Network           2        2        1        2        7
    Wireless          1        1        1        2        5
    PM                4        1        0        0        5
    Filesystems       0        0        0        5        5
    Video4Linux       1        3        0        0        4
    SCSI + SATA       2        0        1        0        3
    MM                1        0        1        0        2
    Other             8        2        4        8       22

    Pending regressions by category
It is also instructive to look at the remaining pending regressions by category. In the table above, we can see that most of the regressions identified have been fixed, with only relatively few persisting. Those are likely to be bugs that are difficult to reproduce, and thus track down. Some categories, like ACPI, fall completely out of the table, which indicates that those developers have been very good at finding and fixing regressions in that subsystem.
Conclusion
Regression tracking is important so that kernel developers are able to focus their bug fixing efforts during each development cycle. But looking at the bigger picture—how the number and types of regressions change—is also needed. Given the nature of kernel development, it is impossible to draw any conclusions from the data collected for any single release. By aggregating data over multiple development cycles, any oddities specific to a particular cycle are smoothed out, which allows for trends to be spotted.
Since regressions are a key indicator of kernel quality, and easier to track than many others, they serve a key role in keeping Torvalds and other kernel developers aware of kernel quality issues. As the developers get more familiar with the "normal" regression patterns, it will become more obvious that a given release is falling outside of those patterns, which may mean that it needs more attention—or that something has changed in the development process. In any case, there is clearly value in the statistics, and that value is likely to grow over time.
Patches and updates
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
openSUSE Conference 2010: Making testing easier
This year's openSUSE conference had some interesting sessions about testing topics. One of those described a framework to automate testing of the distribution's installation, so that testers don't have to do the repetitive installation steps themselves. Another session described Testopia, which is a test case management extension for Bugzilla; openSUSE is using Testopia to guide users who want to help test the distribution. And last but not least, a speaker from Mozilla QA talked about how to attract new testers. The common thread in all these sessions is that testing should be made as easy as possible, to attract new testers and keep the current testers motivated.
Automated testing
Testing is an important task for distributions, because a Linux distribution is a very complex amalgam of various interacting components, but it would be pretty tiresome and boring for testers to test the openSUSE Factory snapshots daily. Bernhard Wiedemann, a member of the openSUSE Testing Core Team, presented the logical solution to this problem: automate as much as possible. Computers don't get tired and they don't stop testing out of boredom, even with dozens of identical tests.
But why is automation so important for testing? To answer this question, Bernhard emphasized that the three chief virtues of a programmer according to Larry Wall (laziness, impatience, and hubris) also hold for testers. What we don't want is poor testing, which leads to poor quality of the distribution, which leads to frustrated testers, which leads to even poorer testing. This is a vicious circle. What we want instead is good testing and good processes, which leads to high quality for the distribution and to happy testers who make the testing and hence the distribution even better. Testers, as much as programmers, want to automate things because they want to reduce their overall efforts.
So what are possible targets for automated testing? You could consider automating the testing of a distribution's installation, testing distribution upgrades, application testing, regression testing, localization testing, benchmarking, and so on. But whatever you test, there will always be some limitations. As the Dutch computer scientist and Turing Award winner Edsger W. Dijkstra once famously said: "Testing can only prove the presence of bugs, not their absence."
Bernhard came up with a way to automate distribution installation testing using KVM. He now has a cron job that downloads a new ISO for openSUSE Factory daily and runs his Perl script autoinst for the test. This script starts openSUSE from the ISO file in a virtual machine with a monitor interface that accepts commands like sendkey ctrl-alt-delete to send a key to the machine or screendump foobar.ppm to create a screenshot. The script compares the screenshots to known images, which is done by computing MD5 hashes of the pixel data.
When the screen shot of a specific step of the running installer matches the known screen shot of the same step in a working installer, the script marks the test of this step as passed. If they don't match (e.g. because of an error message), the test is marked as failed. The keys that the script sends to the virtual machine can also depend on what is shown on the screen: the script then compares the screen shot to various possible screen shots of the working installer, each of them representing a possible execution path.
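The core of that comparison is small. Here is a C sketch of the idea (the real tool is a Perl script, and the file names below are made up): it hashes only the pixel data of a P6 PPM screendump, skipping the header, so that two visually identical frames produce the same digest.

    /*
     * Sketch of screenshot comparison by hashing PPM pixel data, in the
     * spirit of autoinst (which is actually written in Perl).  File names
     * are illustrative.  Build with:  gcc md5cmp.c -lcrypto
     */
    #include <openssl/md5.h>
    #include <stdio.h>
    #include <string.h>

    /* Hash the pixel data of a binary (P6) PPM, skipping the header. */
    static int ppm_pixel_md5(const char *path, unsigned char md[MD5_DIGEST_LENGTH])
    {
            FILE *f = fopen(path, "rb");
            if (!f)
                    return -1;

            /* Skip the three header lines (P6, dimensions, maxval), as
             * written by QEMU's screendump command; ignore comments. */
            char line[256];
            int lines = 0;
            while (lines < 3 && fgets(line, sizeof(line), f)) {
                    if (line[0] == '#')
                            continue;
                    lines++;
            }

            MD5_CTX ctx;
            MD5_Init(&ctx);

            unsigned char buf[65536];
            size_t n;
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                    MD5_Update(&ctx, buf, n);

            MD5_Final(md, &ctx);
            fclose(f);
            return 0;
    }

    int main(void)
    {
            unsigned char a[MD5_DIGEST_LENGTH], b[MD5_DIGEST_LENGTH];

            if (ppm_pixel_md5("screendump.ppm", a) ||
                ppm_pixel_md5("reference.ppm", b)) {
                    perror("ppm_pixel_md5");
                    return 2;
            }

            if (memcmp(a, b, MD5_DIGEST_LENGTH) == 0) {
                    puts("step passed: screen matches reference");
                    return 0;
            }
            puts("step failed: screen differs from reference");
            return 1;
    }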
By using the screen shots, a script can test whether an installation of an openSUSE snapshot worked correctly and whether Firefox or OpenOffice.org can be started on the freshly installed operating system without segfaulting. At the end of the test, all images are encoded into a video, which can be consulted by a human tester in circumstances where a task couldn't be marked automatically as passed or failed. Some examples of installation videos can be found on Bernhard's blog.
It's also nice to see that Bernhard is following the motto of this year's openSUSE conference, "collaboration across borders": while parts of his testing framework are openSUSE-specific, it is written in a modular way and can be used to test any operating system that runs on Qemu and KVM. More information can be found on the OS-autoinst web site.
Test plans with Testopia
Holger Sickenberg, the QA Engineering Manager in charge of openSUSE testing, talked about another way to improve openSUSE's reliability: make test plans available to testers and users with Testopia, a test case management extension for Bugzilla. In the past, openSUSE's Bugzilla bug tracking system only made Testopia available to the openSUSE Testing Core Team, but since last summer it is open to all contributors. Testopia is available on Novell's Bugzilla, where logged-in users can click on "Product Dashboard" and choose a product to see the available test plans, test cases, and test runs. In his talk, Holger gave an overview about how to create your own test plan and how to file a bug report containing all information from a failed test plan.
A test plan is a simple description for the Testopia system and is actually just a container for test cases. Each test plan targets a combination of a specific product and version, a specific component, and a specific type of activity. For example, there is a test plan for installing openSUSE 11.3. A test plan can also have more information attached, e.g. a test document.
A test case, then, is a detailed description of what should be done by the tester. It lists the preparation that is needed before executing the test, a step-by-step list of what should be done, a description of the expected result, and information about how to get the system back into a clean state. Other information can also be attached, such as a configuration file or a test script for an automated test system. Holger emphasized that the description of the expected result is really important: "If you don't mention the expected result exactly, your test case can go wrong because the tester erroneously thinks his result is correct."
And then there's a test run, which is a container for test cases for a specific product version and consists of one or more test plans. It also contains test results and a test history. At the end of executing a test run, the user can easily create a bug report if a test case fails by switching to the Bugs tab. The information from the test case is automatically put into the description and summary of the bug report, and when the report is submitted it also appears in the web page of the test run, including its status (e.g. fixed or not).
The benefits of test plans are obvious: users that want to help a project by testing have a detailed description of what and how to test, and the integration with Bugzilla makes reporting bugs as easy as possible. It also lets developers easily see what has been tested and get the results of the tests. These results can also be tracked during the development cycle or compared between different releases. Holger invited everyone with a project in openSUSE to get in touch with the openSUSE Testing Core Team to get a test plan created. The team can be found on the opensuse-testing mailing list and on the #opensuse-testing IRC channel on Freenode.
Mozilla QA
Carsten Book, QA Investigations Engineer at the Mozilla Corporation, gave a talk about how to get involved in the Mozilla Project and he focused on Mozilla QA, which has its home on the QMO web site. This QA portal has a lot of documentation, e.g. for getting started with QA. And there are links to various Mozilla QA tools such as Bugzilla, Crash Reporter, the Litmus system that has test cases written by Mozilla QA for manual software testing, and some tools to automate software testing. For example, Mozilla's test system automatically checks whether performance has degraded after every check-in of a new feature, to try to ensure that Firefox won't get any slower.
People who want to help test can of course run a nightly build and file bug reports. There are also Mozilla test days that teach how to get development builds, how to file bugs, and how to work with developers on producing a fix. Contributors with some technical expertise can join one of the quality teams, each focusing on a specific area: Automation, Desktop Firefox, Browser Technologies, WebQA, and Services. Each of the teams has a short but instructive web page with information about what they do and how you can contact them.
An important point that Carsten made was that it should also be easy for interested people to immediately get an overview of different areas where they can contribute without having to read dozens of wiki pages. Mozilla even has a special Get involved page where you just enter your email address and an area of interest, with an optional message. After submitting the form, you will get an email to put you in touch with the right person.
Low entry barrier
These three projects are all about lowering the barriers for new testers — to be able to attract as many testers as possible and to make the life of existing testers easier — by automating boring and repetitive tasks. In this way you can keep testers motivated. Wiedemann's autoinst project seems especially interesting: at the moment it has just the basic features, but it has a lot of potential, e.g. if the feature for comparing screen shots is refined. From a technical point of view, this is an exciting testing project that hopefully finds its way into other distributions.
Brief items
Distribution quotes of the week
Red Hat Enterprise Linux 6 available
It's official: RHEL6 is available. "Enhancements range from kernel improvements for resource management, RAS, performance, scalability, virtualization and power saving, through a greatly extended and standards-compliant development environment, to a comprehensive range of updated server and desktop applications. It is designed to improve agility, lower costs and reduce IT complexity for customers."
Distribution News
Fedora
Fedora rejects SQLninja
The minutes from the November 8 Fedora Board meeting include a discussion of whether SQLninja - an SQL injection testing tool - should be included in the distribution. The answer was "no," and a new guideline was added to the rules: "Where, objectively speaking, the package has essentially no useful foreseeable purposes other than those that are highly likely to be illegal or unlawful in one or more major jurisdictions in which Fedora is distributed or used, such that distributors of Fedora will face heightened legal risk if Fedora were to include the package, then the Fedora Project Board has discretion to deny inclusion of the package for that reason alone."
Smith: Changing of the seasons
Jared Smith welcomes the Fedora Project's new release engineering team lead, Dennis Gilmore, and the new Fedora Program Manager, Robyn Bergeron. "Just as with nature, we have cyclical changes within the Fedora Project as well. I think it's both useful and healthy to point out a few of those changes, for a couple of reasons. First of all, I want to point out that every person in the Fedora community is a potential leader. Our policies of rotating leadership help ensure that everyone who is so inclined has a chance to lead and serve. Second, I'd like to personally thank those people who have diligently served the Fedora community, and wish them success as they move on to other endeavors."
Mandriva Linux
New Mandriva cooker manager
Mandriva has a new manager for "cooker", the development branch. "Eugeni Dodonov. Eugeni is well known in the community, a very active Mandriva's contributor, an activist of free software in Brasil, and also a doctor in computer science."
Ubuntu family
Shuttleworth: Unity on Wayland
Mark Shuttleworth has described the next major step for the Unity interface: putting it on the Wayland OpenGL-based display system. "But we don't believe X is setup to deliver the user experience we want, with super-smooth graphics and effects. I understand that it's *possible* to get amazing results with X, but it's extremely hard, and isn't going to get easier. Some of the core goals of X make it harder to achieve these user experiences on X than on native GL, we're choosing to prioritize the quality of experience over those original values, like network transparency."
Other distributions
Mageia: under construction
The Mageia Blog has a progress report on the new Mandriva offshoot, Mageia. Topics include the buildsystem, blog and website, mirrors, wiki, and a roadmap.
Newsletters and articles of interest
Distribution newsletters
- Debian Misc Developer News (#24) (November 7)
- Debian Project News (November 8)
- DistroWatch Weekly, Issue 379 (November 8)
- Fedora Weekly News Issue 250 (November 3)
- openSUSE Weekly News, Issue 148 (November 6)
- Ubuntu Weekly Newsletter, Issue 217 (November 7)
The MeeGo Progress Report: A+ or D-? (Vision Mobile)
Dave Neary has posted a MeeGo progress report on the Vision Mobile site. "Long-awaited MeeGo compliance specifications have resulted in drawn out and sometimes acrimonious debate. Trademark guidelines have been a sticking point for community ports of the MeeGo netbook UX to Linux when these ports do not include required core components. Related to the technical governance of the project, there is some uncertainty around the release process, and the means and criteria which will be used when considering the inclusion of new components. And there are some signs that the 'all open, all the time' message at the project launch has been tempered by the reality of building a commercial device."
What's new in Fedora 14 (The H)
The H takes Fedora 14 for a test drive and finds it a little thin on new features. There is, of course, the usual pile of "up-to-the-minute" versions of various packages. "Some Fedora users will also feel the benefits of an update, with Fedora 14 including newer versions of many applications. Version 14 includes hundreds of enhancements and bug fixes not explicitly mentioned above; the new features to be found in OpenOffice 3.3 are just one example of many. Objectively speaking, Laughlin contains significantly fewer enhancements than previous versions. The likely culprit is the end of the development phase of Red Hat Enterprise Linux 6, which has kept many of the Fedora developers employed by Red Hat busy over recent months."
Pardus 2011 on the way with new goodies (Linux Journal)
Susan Linton takes a look at the upcoming release of Pardus 2011. "Pardus Linux, a popular independent distribution funded and developed by the Scientific & Technological Research Council of Turkey, will be releasing version 2011 in the coming weeks and with it lots of nice updates and improvements."
The Five Best Linux Live CDs (Linux.com)
Joe "Zonker" Brockmeier reveals his favorite live CD distributions. "So how were the distros chosen? You'll notice that none of the major Linux distros (a.k.a. Ubuntu, Debian, Fedora, openSUSE, Slackware, etc.) appear in the list, though most of the picks are derived from one of the major distros. Though Ubuntu, Linux Mint, et al. have perfectly serviceable live CDs or DVDs, they're not really designed for long-term use as a live distro. I'm sure some folks do use them that way, but they're the cream of the crop for installing to a hard drive - not for live media."
Page editor: Rebecca Sobol
Development
FocusWriter is all writing, no distractions
It's November, and all around the world aspiring novelists (including this reporter) have turned their attention to National Novel Writing Month (NaNoWriMo). In the spirit of choosing the right tools for the job, I decided to look for an application more suited to fiction than my trusty Vim, and found FocusWriter. It's certainly not a replacement for Vim, but it's a suitable word processor for prose.
FocusWriter is a "distraction free" word processor that's designed to help writers immerse themselves in their work. When run, it elbows aside everything else on the screen and demands the writer's full attention. Granted, one could achieve a similar effect by simply writing in a full-screen terminal or switching to the console and running Vim or Emacs — but many writers (Neal Stephenson excepted) are not well-versed in the classic text editors. Since I was trying to make a break from my normal mode of writing about technology in Vim using HTML, I wanted to see if a change of pace (or application) could boost creativity. Of the crop of distraction-free word processors (more on those below), FocusWriter looked the most promising.
FocusWriter is written by Graeme Gott, who publishes it and several other applications under the Gott Code brand. FocusWriter is Qt-based, free software (GPLv3), and packages are available from Gott for almost all major Linux distributions. Debian is the notable exception, but source is available if the Ubuntu packages will not install on Debian. It's multi-platform as well, with releases available for Mac OS X, Windows, and even OS/2.
Using FocusWriter
I picked the most recent release, 1.3.1, from Gott's PPA and started logging some writing time with FocusWriter. The default is a small text area on a black background in the middle of the screen, but FocusWriter allows you to modify the theme and the amount of text space through the preferences. For example, one might prefer to have a background picture to set the mood, to enlarge the text area, or to place it off-center. Naturally you can change the fonts and color scheme as well.
For those unfamiliar with NaNoWriMo, the goal is to produce a novel-length work of 50,000 words. Quality is not the goal — though it's not discouraged, the idea is to get one's first novel out of the way. For those who participate, tracking word count is of great importance, and that's one of FocusWriter's primary features. The bottom toolbar tracks the number of words, paragraphs, characters, and percentage of the daily goal reached. It is not displayed except when the user hovers the mouse over the bottom of the screen.
The daily goal is set through FocusWriter's preferences and can be based on word count or on the number of minutes (or hours, for the truly dedicated) spent writing. The main menu and toolbar are likewise hidden until one hovers the mouse over the top of the screen. Users can opt to take FocusWriter out of full-screen mode as well, which will display the toolbar and status bar, but that rather misses the point of the application.
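The goal display itself is simple arithmetic; as a purely hypothetical sketch (this is not FocusWriter's code, which is Qt-based and considerably more involved), a daily word-count goal could be tracked along these lines:

    #include <algorithm>
    #include <iostream>

    // Hypothetical daily-goal tracker, for illustration only.
    struct DailyGoal {
        int target_words;     // e.g. 1667 words/day reaches 50,000 in 30 days
        int words_written;

        int percent_complete() const {
            if (target_words <= 0)
                return 100;
            return std::min(100, (words_written * 100) / target_words);
        }
    };

    int main()
    {
        DailyGoal goal{1667, 450};
        std::cout << goal.percent_complete() << "% of today's goal\n";  // prints "26% of today's goal"
        return 0;
    }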
One feature that's interesting for prose, but would be frustrating for editing system files, is under the "General" tab in preferences: Always vertically center. Basically this means that the cursor will always be centered mid-screen so your eye doesn't have to track the text to the bottom of the screen while writing. When you reach the end of a line, FocusWriter scrolls the text up rather than pushing the cursor down one line. This is helpful when writing steadily, but disorienting when making changes. For example, when making a correction on a line towards the top or bottom of the screen, FocusWriter will re-center the text on the first keypress. Unfortunately, FocusWriter lacks a hot-key to turn this feature on or off without diving into preferences, so it got turned off rather quickly.
Writers who miss (assuming they remember) the clack of a typewriter have the option of enabling typewriter sounds in the 1.3.1 release. It's somewhat buggy, however, and only seemed to work when hitting Enter — not for each keystroke as one would expect. It might sound like a trivial, perhaps annoying, feature, but it's apparently a feature that was in great demand.
FocusWriter's feature set is fairly standard fare for basic writing. It supports plain text or the Rich Text Format (RTF), but not Open Document Format, Word, HTML, or others. So if you produce a masterpiece in FocusWriter, simply saving as RTF and renaming the file .doc will be good enough to submit as a manuscript — but you'll wind up editing revisions in LibreOffice or Word. It has autosave, as all editors should, but no revision history.
Alternatives
Maybe FocusWriter is too frilly for you. In that case, turn your attention to PyRoom, an even more minimalist editor written in Python. It doesn't support RTF or background pictures — though you can customize the color scheme if relentless black doesn't match your creative mood.
There's also JDarkRoom. It's yet another distraction-free word processor, but written in Java rather than Python. It has some interesting features, like exporting to HTML and allowing users to configure keybindings via a keymap file. However, while it's free as in beer, it is not open source and probably not as interesting to most LWN readers.
RubyRoom is free software (GPLv2), but updates have ceased or slowed enormously. Its last release was in 2008. It has a similar set of functionality to PyRoom and FocusWriter, so it is probably only worth checking out for users who have a penchant for Ruby.
Of course, users can simply employ their favorite text editor, as well. Vim, gedit, Emacs, Kate, GNU Nano, and the rest of the lot are or can be set up as minimalist full-screen editors too.
FocusWriter future and community
Though it's open source, FocusWriter is mostly a one-man show. The Gott Code site doesn't have development lists or forums, though developers can easily get access to the code via GitHub. To find out what the future holds for FocusWriter, I contacted Gott via email to ask about its status. Gott acknowledges some contributors for translations and such on the Gott Code site, but he says he hasn't put much effort into making a community project out of it. Gott says he welcomes contributions from others, but is "happy to do the work if no one else steps up."
I'm the only programmer currently working on the Gott Code projects. Some of the programs have had other contributors once in a while, to varying degrees of involvement, but mostly it's been my code. These all started as pet programs of mine simply because I love programming, with no larger plans for any of them. In fact, they've stayed pretty small until FocusWriter 1.3 in late September/early October, so I'm still trying to find my footing in the broader open source world.
What started out as an application for his wife has grown into a much more full-featured program. FocusWriter is about two years old — development started in October 2008 — but development has waxed and waned depending on whether Gott was paying attention to FocusWriter or one of his other games and applications. The 1.0 release, says Gott, was "a pale shadow" of the current release. The 1.3.1 release may look like a shadow in a year or so if Gott fits in the features he'd like. Gott says that he would like to add grammar checking (in addition to its spell checking, based on Hunspell), additions to the theme configuration, scene management, and "some kind of file history".
Eventually, I want to have more overall writing project management, with different kinds of notes and outlines to help organize users' work. Details are still sketchy, mostly because it's important to me to maintain FocusWriter's lightweight and out-of-the-way interface, and to let users continue to write in a minimalist environment if they're not interested in anything fancy. I don't have a time frame in mind yet — I'm taking a small break to work on other projects that got neglected in the 1.3 prep.
Perhaps by NaNoWriMo 2011, authors will be able to use FocusWriter to track characters, story revisions, and more. Fuller-featured writing tools abound for Windows and Mac OS X, but the options for writing fiction are very limited on Linux. Not that features necessarily contribute to better writing. So far, FocusWriter has shown no danger of turning this reporter into the next Stephen King or Kurt Vonnegut. But it does live up to its description. It provides a minimalist set of features, just enough to write productively while eliminating distractions. FocusWriter not only hides all the menus and clutter of typical word processing software, it also hides the various visual distractions that plague modern desktops.
Experienced Linux users might prefer to stick with their text editor of choice, even if participating in something like NaNoWriMo. But this is the sort of application that makes Linux attractive to a wider set of users who may not be interested in using a "text editor" — even one as simple as gedit — to try to produce the next Great American Novel.
Brief items
Quotes of the week
The Document Foundation offers a "preview"
The Document Foundation has sent out a press release seemingly talking about the great stuff that is coming to LibreOffice in the near future. "In addition, each single module of LibreOffice will be undergoing an extensive rewrite, with Calc being the first one to be redeveloped around a brand new engine - code named Ixion - that will increase performance, allow true versatility and add long awaited database and VBA macro handling features. Writer is going to be improved in the area of layout fidelity and Impress in the area of slideshow fidelity. Most of the new features are either meant to maintain compatibility with the market leading office suite or will introduce radical innovations. They will also improve conversion fidelity between formats, liberate content, and reduce Java dependency."
Glibc change exposing bugs
People experiencing sound corruption or other strange bugs on recent distribution releases may want to have a look at this Fedora bugzilla entry. It seems that the glibc folks changed the implementation of memcpy() to one which, in theory, is more highly optimized for current processors. Unfortunately, that change exposes bugs in code where developers ignored the requirement that the source and destination arrays passed to memcpy() cannot overlap. Various workarounds have been posted, and the thread includes some choice comments from Linus Torvalds, but the problem has been marked "not a bug." So we may see more than the usual number of problems until all the projects with sloppy memcpy() use get fixed. (Thanks to Sitsofe Wheeler).
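For readers curious about what such a bug looks like, here is a minimal illustration (assembled for this article, not code from any of the affected projects): copying between overlapping buffers with memcpy() has always been undefined behavior, and memmove() is the call to use when the source and destination may overlap.

    #include <cstring>
    #include <iostream>

    int main()
    {
        char buf[] = "abcdefgh";

        // Shift the string left by two bytes. The source (buf + 2) and
        // destination (buf) overlap, so memcpy() is undefined behavior
        // here; an optimized implementation is free to copy in an order
        // that silently corrupts the result.
        //   std::memcpy(buf, buf + 2, std::strlen(buf + 2) + 1);  // broken

        // memmove() is specified to handle overlapping ranges correctly.
        std::memmove(buf, buf + 2, std::strlen(buf + 2) + 1);

        std::cout << buf << '\n';   // prints "cdefgh"
        return 0;
    }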
GNOME Shell 2.91.2 released
GNOME Shell 2.91.2 is now available. This version has some visual enhancements in addition to the bug and build fixes.
LyX version 2.0.0 (beta 1)
The first beta for the LyX 2.0.0 release is out; the developers are looking for testers to try out the new features and ensure that their existing documents still work properly. The list of new features is extensive, including a smarter search facility, on-the-fly spelling checking, a document comparison mechanism, XHTML output, multiple index support, improved table handling, and a much-needed editor for Feynman diagrams.
MythTV 0.24 released
The MythTV 0.24 release has been announced. Enhancements include a new on-screen user interface, Blu-ray support, HD audio support, multi-angle support, and more; see the release notes for details.
Renoise 2.6
Renoise is "a sophisticated music sequencer and audio processing application for Windows, Macintosh, and Linux. It's a unique all-in-one music production environment for your personal computer." The 2.6 release is out; the big change appears to be the addition of a Lua-based scripting engine. See the what's new page for more information.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (November 9)
- LibreOffice development summary (November 8)
- Linux Audio Monthly Roundup (November)
- PostgreSQL Weekly News (November 7)
Weekend Project: Get to Know Your Source Code with FOSSology (Linux.com)
Nathan Willis digs into source code with FOSSology. "FOSSology was originally built as an internal tool at HP, to help engineers follow the large company's IT governance policies when working with open source software written elsewhere. Even if your company or project isn't as big as HP, any time you blend code from different authors or want to borrow a routine from another open source project, it can get tricky to maintain all the rules. Certain licenses are compatible to combine in one executable, while others need to be separate processes. If you customize an open source application for internal use, you may also need to keep track of authorship - even more so if you send patches upstream."
Page editor: Jonathan Corbet
Announcements
Non-Commercial announcements
Apache on participation in the JCP
The Apache Software Foundation board has issued a statement on its participation in the Java Community process. In particular, the organization is getting increasingly frustrated with its inability to gain access to the TCK test kit. "The ASF will terminate its relationship with the JCP if our rights as implementers of Java specifications are not upheld by the JCP Executive Committee to the limits of the EC's ability. The lack of active, strong and clear enforcement of those rights implies that the JSPA agreements are worthless, confirming that JCP specifications are nothing more than proprietary documentation."
New Books
Seven Languages in Seven Weeks--New from Pragmatic Bookshelf
Pragmatic Bookshelf has released "Seven Languages in Seven Weeks - A Pragmatic Guide to Learning Programming Languages" by Bruce A. Tate.
Resources
FSFE: Newsletter November 2010
The Free Software Foundation Europe newsletter for November is out. "This edition explains how we counter the lobby work of proprietary organisations at the European level, what we do at the United Nations level to inform more people about the dangers of software patents, what we are doing to get rid of non-free software advertisement on public websites, and what you can do to make a change."
Contests and Awards
Icelandic developer receives Nordic Free Software Award
The Free Software Foundation Europe has announced that Bjarni Rúnar Einarsson has received the Nordic Free Software Award. "Einarsson has been a leading figure in Iceland's Free Software movement for more than a decade. He has been driving the country's Free Software and free culture community, founding and participating in various groups such as Vinix, contributing to the KDE project, and starting netverjar.is, an organisation fighting for civil rights on the Internet."
Calls for Presentations
CFP Now Open: Free Java @ FOSDEM 2011
There will be a Free Java Developer Room at FOSDEM 2011 (February 5-6, 2011). The Call For Participation for the dev room sessions is open until December 3, 2010.
SCALE 9x Call for Papers
The call for papers for SCALE 9x (Southern California Linux Expo) is open until December 13, 2010. The conference takes place in Los Angeles, California, February 25-27, 2011.
Upcoming Events
FOSS.IN/2010: Final List of Selected Talks
FOSS.IN (December 15-17, 2010 in Bangalore, India) has finalized the list of selected talks & mini-confs.
Events: November 18, 2010 to January 17, 2011
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| November 18-21 | Piksel10 | Bergen, Norway |
| November 20-21 | OpenFest - Bulgaria's biggest Free and Open Source conference | Sofia, Bulgaria |
| November 20-21 | Kiwi PyCon 2010 | Waitangi, New Zealand |
| November 20-21 | WineConf 2010 | Paris, France |
| November 23-26 | DeepSec | Vienna, Austria |
| November 24-26 | Open Source Developers' Conference | Melbourne, Australia |
| November 27 | Open Source Conference Shimane 2010 | Shimane, Japan |
| November 27 | 12. LinuxDay 2010 | Dornbirn, Austria |
| November 29-30 | European OpenSource & Free Software Law Event | Torino, Italy |
| December 4 | London Perl Workshop 2010 | London, United Kingdom |
| December 6-8 | PGDay Europe 2010 | Stuttgart, Germany |
| December 11 | Open Source Conference Fukuoka 2010 | Fukuoka, Japan |
| December 13-18 | SciPy.in 2010 | Hyderabad, India |
| December 15-17 | FOSS.IN/2010 | Bangalore, India |
| January 16-22 | PyPy Leysin Winter Sprint | Leysin, Switzerland |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
