LWN.net Weekly Edition for May 15, 2008
Debian, OpenSSL, and a lack of cooperation
A rather nasty security hole in the Debian OpenSSL package has generated a lot of interest—along with a fair amount of controversy—amongst Linux users. The bug has been lurking for up to two years in Debian and other distributions, like Ubuntu, based on it. There are a number of lessons to be learned here about distributions and projects working together or, as in this case, failing to work together.
Back in April 2006, a Debian user reported a problem using the OpenSSL library with valgrind, a tool that can check programs for memory access problems. It was reporting that OpenSSL was using uninitialized memory in parts of the random number generator (RNG) code. Using memory before it is initialized to a known value is a well known way to create hard-to-find bugs, so it is not surprising that the valgrind report caused some consternation.
Debian hacker Kurt Roeckx tracked the problem down to what he thought were two offending lines of code and posted a question on the openssl-dev mailing list:
What do you people think about removing those 2 lines of code?
There were few responses, but those that came in were not opposed to removing the lines, including one from Ulf Möller using an openssl.org email address: "If it helps with debugging, I'm in favor of removing them." Unfortunately, as was discovered recently, while removing one of the two lines was harmless, removing the other essentially crippled the RNG so that OpenSSL-generated cryptographic keys were easy to predict.
(For more technical details on the bug and what should be done to respond to it, see our article on this week's Security page.)
It turns out, at least according to OpenSSL core team member Ben Laurie, that openssl-dev is not for discussing development of OpenSSL. That may be true in practice, but the OpenSSL support web page describes it as: "Discussions on development of the OpenSSL library. Not for application development questions!" In addition, the address suggested by Laurie (openssl-team-AT-openssl.org) does not appear in any of the OpenSSL documentation or web pages. If openssl-dev wasn't the right place, it would seem that the OpenSSL developers could have provided a helpful pointer to the right address, but that did not occur.
It probably was not clear that Roeckx was asking the questions in an official Debian capacity, nor that he was planning to change the Debian package based on the answers. As Laurie rightly points out, he should have submitted a patch, proposing that it be accepted into the upstream OpenSSL codebase. That probably would have garnered more attention, even if it were only posted to openssl-dev, and it seems very unlikely that the patch in question would ever have made it into an OpenSSL release.
It is in the best interests of everyone, distributions, projects, and users, for changes made downstream to make their way back upstream. In order for that to work, there must be a commitment by downstream entities—typically distributions, but sometimes users—to push their changes upstream. By the same token, projects must actively encourage that kind of activity by helping patch proposals and proposers along. First and foremost, of course, it must be absolutely clear where such communications should take place.
Another recently reported security vulnerability also came about because of a lack of cooperation between the project and distributions. It is vital, especially for core system security packages like OpenSSH and OpenSSL, that upstream and downstream work very closely together. Any changes made in these packages need to be scrutinized carefully by the project team before being released as part of a distribution's package. It is one thing to let some kind of ill-advised patch be made to a game or even an office application package that many use; SSH and SSL form the basis for many of the tools used to protect systems from attackers, so they need to be held to a higher standard.
Another of Laurie's points, which also bears out the need for a higher standard, is the timing of the check-in to a public repository when compared to that of the advisory. Any alert attacker could have made very good use of the five- or six-day head start they could have gotten by monitoring the repository to exploit the vulnerability. While it is certainly possible that someone with malicious intent already knew about the flaw (no exploits have been reported), alerting potential attackers to this kind of hole well in advance of alerting the vulnerable users is unbelievably bad security protocol.
This is the kind of problem that could have been handled quickly and quietly by all concerned. All affected distributions—though it might be difficult to list all of the Debian-derived distributions out there—could have been contacted so that the advisory and updates to affected packages could have been coordinated. One of these days, one of these problems is going to give Linux a security black eye unless the community can do a better job of working together.
Release synchronization
For those who have not seen it, Mark Shuttleworth's recent The Art of Release posting is worth a look. He starts with some rather self-congratulatory talk about the Ubuntu 8.04 release, saying:
One could quibble with this claim in a number of ways, but it is true that Ubuntu got out a release designed to be supported for a number of years, and they did it when they said they would. That, of course, is only part of the job; now they have to follow through on that little promise of supporting this distribution into 2011. The initial signs are good: Ubuntu's support thus far has been solid, and it would appear that the distribution will not be going away anytime soon.
One might well question whether the timely release of 8.04 is noteworthy. As a community we are increasingly spoiled; an increasingly large number of projects and distributions manage to get out regular releases on a reasonably predictable schedule. Even kernel releases, once known to slip for a year or more, are now predictable to within a couple of weeks. Now that free software releases are rather more predictable and reliable than, say, airline departures, why is the Ubuntu 8.04 release noteworthy?
The answer is the long-term support commitment. Theoretically, a distribution intended for this sort of long lifetime will have had a degree of extra care put into its preparation. Important components will have been given extra time to stabilize so that the distribution will be more reliable from the outset. Some thought will have gone into the selection of packages shipped with an emphasis on supportability over the long term. The whole process requires more effort and a higher degree of assurance that all of the pieces are truly ready.
The degree to which Ubuntu has done all of that work should become clear over time. Certainly the software selected for this release is rather less seasoned than the packages found in a Red Hat or SUSE enterprise release. But "older" does not necessarily mean "better" or "more stable," so the real proof will be in how well this distribution holds up for the next three years.
Meanwhile, Mr. Shuttleworth has already stated that the next long-term support release will be happening in April, 2010. Ubuntu's success with 8.04, he says, allows this commitment to be made almost two years in advance. There is, however, a possibility that things could change:
This idea is not new, but Mr. Shuttleworth seems to be particularly attached to it. There is no doubt that there would be advantages to aligning schedules in this way. The kernel developers, who have been known to make a special effort for a release destined to be used by a major enterprise distributor, could focus especially hard on a stable release knowing that it would be widely used. Higher-level projects could do the same. The distributors could also, perhaps, find a way to collaborate on the long-term maintenance of these components, rather than duplicating the effort of backporting patches into older code. Perhaps they could even get together for a joint release party, saving even more money.
Or perhaps this is all a nice idea which fails to survive its encounter with reality. Enterprise distribution releases tend to be highly-publicized events. Ubuntu might be happy to share its limelight with the larger distributors, but that feeling might not be reciprocated on the other side. It is hard to imagine Red Hat or Novell wanting to have their big enterprise distribution release be just one of many happening during the same month.
It is also hard to see Ubuntu making an agreement with the enterprise distributors which specifies both a release date and the versions of the major components. 8.04 shipped with the 2.6.24 kernel, which was almost exactly three months old at the time. Red Hat Enterprise Linux 5 was released in mid-March, 2007, when the 2.6.20 kernel was current - but Red Hat shipped the six-month-old 2.6.18 kernel instead. Aligning schedules would require more than picking a date; it would also require adopting similar stabilization periods. It is far from clear that Ubuntu would want to fall that far behind the leading edge for the sake of alignment.
And, frankly, it's hard to imagine Debian making a credible commitment (within one month) to a release date at all.
So aligned schedules for enterprise distributions seem like a hard sell. A better approach might be to try to wean these distributions off the "freeze and backport" model of support; this model is expensive to sustain, brings risks of its own, and doesn't always fit the needs of enterprise customers. If the enterprise distributors were able to track more current software - rather than backporting pieces of it into older software - better alignment of releases might just come naturally.
Distributed bug tracking
It is fair to say that distributed source code management systems are taking over the world. There are plenty of centralized systems still in use, but it is a rare project which would choose to adopt a centralized SCM in 2008. Developers have gotten too used to the idea that they can carry the entire history of their project on their laptop, make their changes, and merge with others at their leisure.

But, while any developer can now commit changes to a project while strapped into a seat in a tin can flying over the Pacific Ocean, that developer generally cannot simultaneously work with the project's bug database. Committing changes and making bug tracker changes are activities which often go together, but bug tracking systems remain strongly in the centralized mode. Our ocean-hopping developer can commit a dozen fixes, but updating the related bug entries must wait until the plane has landed and network connectivity has been found.
There are a number of projects out there which are trying to change this situation through the creation of distributed bug tracking systems. These developments are all in a relatively early state, but their potential - and limitations - can be seen.
One of the leading projects in this area is Bugs Everywhere, which has recently moved to a new home with Chris Ball as its new maintainer. Bugs Everywhere, like the other systems investigated by your editor, tries to work with an underlying distributed source code management system to manage the creation and tracking of bug entries. In particular, Bugs Everywhere creates a new directory (called .be) in the top level of the project's directory. Bugs are stored as directories full of text files within that directory, and the whole collection is managed with the underlying SCM.
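The storage scheme described above can be sketched in a few lines of Python. This is an illustration of the general approach (bugs as UUID-named directories of small text files), not Bugs Everywhere's actual on-disk format; the file names here are invented for the example:

```python
import os
import tempfile
import uuid

def new_bug(repo_root, summary, severity="minor"):
    # Each bug is a directory of small text files, named by a random
    # UUID so that entries created on different branches rarely
    # collide at merge time.
    bug_id = str(uuid.uuid4())
    bug_dir = os.path.join(repo_root, ".be", bug_id)
    os.makedirs(bug_dir)
    with open(os.path.join(bug_dir, "summary"), "w") as f:
        f.write(summary + "\n")
    with open(os.path.join(bug_dir, "severity"), "w") as f:
        f.write(severity + "\n")
    return bug_id

def add_comment(repo_root, bug_id, body):
    # Comments nest the same way: another UUID-named directory.
    comment_id = str(uuid.uuid4())
    cdir = os.path.join(repo_root, ".be", bug_id, "comments", comment_id)
    os.makedirs(cdir)
    with open(os.path.join(cdir, "body"), "w") as f:
        f.write(body + "\n")
    return comment_id

root = tempfile.mkdtemp()
bug = new_bug(root, "RNG produces predictable output", severity="critical")
add_comment(root, bug, "Bisected to the patch removing the update call.")
print(sorted(os.listdir(os.path.join(root, ".be", bug))))
# ['comments', 'severity', 'summary']
```

Since everything is plain files under the project tree, committing the `.be` directory with the underlying SCM is all the "database synchronization" there is.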
The advantages to an approach like this are clear. The bug database can now be downloaded along with the project's code itself. It can be branched along with the code; if a particular branch contains a fix for a bug, it can also contain the updated bug tracker entry. That, in turn, ensures that the current bug tracking information will be merged upstream at exactly the same time as the fix itself. Contemporary projects are characterized by large numbers of repositories and branches, each of which can contain a different set of bugs and fixes; distributing the bug database into these repositories can only help to keep the code and its bug information consistent everywhere.
There are also some disadvantages to this scheme, at least in its current form. Changes to bug entries don't become real until they are committed into the SCM. If a bug is fixed, committing the fix and the bug tracker update at the same time makes sense; in cases where one is trying to add comments to a bug as part of an ongoing conversation, the required commit is just more work to do. The fact that, in git at least, one must explicitly add any new files created by the bug tracker (which have names like 12968ab9-5344-4f08-9985-ef31153e504f/comments/97f56c43-4cf2-4569-9ef4-3e8f2d9eb1fe/body) does not help the situation.
Beyond that, tracking bugs this way creates two independent sets of metadata - the bug information itself, and whatever the developer added when committing changes. There is currently no way of tying those two metadata streams together. Then, there is the issue of merging. Bugs Everywhere appears to reflect some thought about this problem; most changes involve the creation of new, (seemingly) randomly-named files which will not create conflicts at merge time. It did not take long, however, for your editor to prove that changing the severity of a bug in two branches and merging the result creates a conflict which can only be resolved by hand-editing the bug tracker's files. Said files are plain text, but that is less comforting than one might think.
All of this can make distributed bug tracking look like a source of more work for developers, which is not the path to world domination. What is needed, it seems, is a combination of more advanced tools and better integration with the underlying SCM. Bugs Everywhere, by trying to work with any SCM, risks not being easily usable with any of them.
A project which is trying for closer integration is ticgit, which, as one might expect, is based on git. Ticgit takes a different approach, in that there are no files added to the project's source tree, at least not directly; instead, ticgit adds a new branch to the SCM and stores the bug information there. That allows the bug database to travel with the source (as long as one is careful to push or pull the ticgit branch!) while keeping the associated files out of the way. Ticgit operations work on the git object database directly, so there is no need for separate commit operations. On the other hand, this approach loses the ability to have a separate view of the bug database in each branch; the connection between bug fixes and bug tracker changes has been made weaker. This is something which can be fixed, and it would appear (from comments in the source) that dealing with branches is on the author's agenda.
Ticgit clearly has potential, but even closer integration would be worthwhile. Wouldn't it be nice if a git commit command would also, in a single operation, update the associated entry in the bug database? Interested developers could view a commit which is alleged to fix a bug without the need for anybody to copy commit IDs back and forth. Reverting a bugfix commit could automatically reopen the bug. And so on. In the long run, it is hard to see how a truly integrated, distributed bug tracker can be implemented independently of the source code management system.
There are some other development projects in this area, including:
- Scmbug is a relatively advanced project which aims "to solve the integration problem once and for all." It is not truly a distributed bug tracker, though; it depends on hooks into the SCM which talk to a central server. Regardless, this project has done a significant amount of thinking about how bug trackers and source code management systems should work together.
- DisTract is a distributed bug tracker which works through a web interface. To that end, it uses a bunch of Firefox-specific JavaScript code to run local programs, written in Haskell, which manipulate bug entries stored in a Monotone repository. Your editor confesses that he did not pull together all of the pieces needed to make this tool work.
- DITrack is a set of Python scripts for manipulating bug information within a Subversion repository. It is meant to be distributed (and, eventually, "backend-agnostic"), but its use of Subversion limits how distributed it can be for now.
- Ditz is a set of Ruby scripts for manipulating bug information within a source code management system; it has no knowledge of the SCM itself.
As can be seen, there is no shortage of work being done in this area, though few of these projects have achieved a high level of usability. Only Scmbug has been widely deployed so far. A few of these projects have the potential to change the way development is done, though, once various integration and user interface issues are addressed.
There is one remaining problem, though, which has not been touched upon yet. A bug tracker serves as a sort of to-do list for developers, but there is more to it than that. It is also a focal point for a conversation between developers and users. Most users are unlikely to be impressed by a message like "set up a git repository and run these commands to file or comment on a bug." There is, in other words, value in a central system with a web interface which makes the issue tracking system accessible to a wider community. Any distributed bug tracking system which does not facilitate this wider conversation will, in the end, not be successful. Creating a distributed tracker which also works well for users could be the biggest challenge of them all.
Security
Debian vulnerability has widespread effects
The recent Debian advisory for OpenSSL could lead to predictable cryptographic keys being generated on affected systems. Unfortunately, because of the way keys are used, especially by ssh, this can lead to problems on systems that never installed the vulnerable library. In addition, because the OpenSSL library is used in a wide variety of services that require cryptography, a very large subset of security tools are affected. This is a wide-ranging vulnerability that affects a substantial fraction of Linux systems.
For a look at the chain of errors that led to the vulnerability, see our front page article. Here, we will concentrate on some of the details of the code, the impact of the vulnerability, and what to do about it.
An excellent tool for finding memory-related bugs, Valgrind was used on an application that used the OpenSSL library. It complained about the library using uninitialized memory in two locations in crypto/rand/md_rand.c:
247:    MD_Update(&m,buf,j);

467:    #ifndef PURIFY
            MD_Update(&m,buf,j); /* purify complains */
        #endif

While the two lines of code look remarkably similar (modulo the pre-processor directive), their actual effects are very different.
The first is contained in the ssleay_rand_add() function, which is normally called via the RAND_add() function. It adds the contents of the passed in buffer to the entropy pool of the pseudo-random number generator (PRNG). The other is contained in ssleay_rand_bytes(), normally called via RAND_bytes(), which is meant to return random bytes. It adds the contents of the passed in buffer—before filling it with random bytes to return—to the entropy pool as well. The major difference is that removing the latter might marginally reduce the entropy in the PRNG pool, while removing the former effectively stops any entropy from being added to the pool.
For both RAND_add() and RAND_bytes(), the buffer that gets passed in may not have been initialized. This was evidently known by the OpenSSL folks, but remained undocumented for others to trip over later. The "#ifndef PURIFY" is a clue that someone, at some point, tried to handle the same kind of problem that Valgrind was reporting for the similar, but proprietary, Purify tool. While it isn't necessarily wrong to add these uninitialized buffers to the PRNG pool, it is something that tools like Valgrind will rightly complain about. Since it is dubious whether the practice adds much in the way of entropy, while constituting a serious hazard for the uninitiated, some kind of documentation in the code would seem mandatory.
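The asymmetry between the two calls can be made concrete with a toy model. This is a sketch, not the real md_rand.c logic: the "pool" here is just a running SHA-1 state, and ToyPool, rand_add(), and rand_bytes() are invented stand-ins for their OpenSSL namesakes:

```python
import hashlib

class ToyPool:
    """A drastically simplified hash-mixed entropy pool."""

    def __init__(self):
        self.state = b"\x00" * 20

    def _mix(self, data):
        # Fold new bytes into the pool by rehashing.
        self.state = hashlib.sha1(self.state + data).digest()

    def rand_add(self, seed, crippled=False):
        # The removed MD_Update() call sat on this path; with
        # crippled=True, seed material is silently discarded.
        if not crippled:
            self._mix(seed)

    def rand_bytes(self, pid):
        # In the patched library, the process ID was essentially the
        # only input still perturbing the pool.
        self._mix(pid.to_bytes(4, "little"))
        return self.state

# Two "machines" fed completely different entropy, but the same PID:
a, b = ToyPool(), ToyPool()
a.rand_add(b"seed material gathered on machine A", crippled=True)
b.rand_add(b"totally different seed on machine B", crippled=True)
print(a.rand_bytes(1234) == b.rand_bytes(1234))  # True: identical "random" bytes
```

With the seeding path intact (crippled=False), the two pools diverge immediately; with it removed, every process with the same PID walks through the same sequence, which is exactly why the generated keys became enumerable.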
The major response from the OpenSSL team seems to be from core team member Ben Laurie's weblog, where he has a rant entitled "Vendors Are Bad For Security". In it, and its follow-up, he makes some good points about mistakes that were made, while seeming to be unwilling for OpenSSL to take any share of the blame.
The end result is that OpenSSL would create predictable random numbers, which would then result in predictable cryptographic keys. According to the advisory:
A program that can detect some weak keys has also been released. It uses 256K hash values to detect the bad keys, which would imply 18 bits of entropy in the PRNG pool of vulnerable OpenSSL libraries. By using hashes of the keys in the detection program, the authors do not directly give away the key values that get generated, but it should not be difficult for an attacker to generate and use that list.
For affected Debian-derived systems, the cleanup is relatively straightforward, if painful. The SSLkeys page on the Debian wiki has specific information on how to remove weak keys along with how to generate new ones for a variety of services affected. Obviously, none of those steps should be taken until the OpenSSL package itself has been upgraded to a version that fixes the hole.
A bigger problem may be for those installations based on distributions that were not directly affected because they did not distribute the vulnerable OpenSSL library. Those machines may very well have weak keys installed in user accounts as ssh authorized_keys. A user who generated a key pair on some vulnerable host may have copied the public key to a host that was not vulnerable. This would allow an attacker to access the account of that user by brute forcing the key from the 256K possibilities.
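The scale of that search is worth making concrete. A sketch of the blacklist approach follows; weak_key() is a placeholder for the deterministic key generation, not how ssh-keygen actually derives keys:

```python
import hashlib

# 256K hash values in the detector implies about 2**18 distinct keys
# per key type and architecture.
KEYSPACE = 1 << 18
print(KEYSPACE)  # 262144

def weak_key(seed):
    # Stand-in for generating a key from a predictable PRNG state.
    return hashlib.sha1(seed.to_bytes(4, "little")).digest()

# Build a blacklist the way the detector is described: store hashes of
# the weak keys rather than the keys themselves.
blacklist = {hashlib.sha256(weak_key(s)).hexdigest() for s in range(KEYSPACE)}

# Checking a candidate key is then a single set lookup.
candidate = weak_key(4242)  # a "key" from a vulnerable host
print(hashlib.sha256(candidate).hexdigest() in blacklist)  # True
```

Enumerating the whole space takes well under a minute on commodity hardware, which is why a weak authorized key is effectively no protection at all.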
Because of that danger, the Debian project suspended public key authentication on debian.org machines. In addition, all passwords were reset because of the possibility that an attacker could have captured them by decrypting the ssh traffic using one of the weak keys. One would guess that debian.org machines would have a higher incidence of weak keys, but any host that allows users to use ssh public key authentication is potentially at risk.
The weak key detector (dowkd) has some fairly serious limitations:
In order to ensure that there are no weak keys installed as public keys on other hosts, it may be necessary to remove all authorized_keys (and/or authorized_keys2) entries for all users. It may also be wise to set all passwords to something unknown. Until that is done, there still remains a chance that a weak key may allow access to an attacker. It is an unpleasant task, but one that needs to be done by those who administer multi-user systems.
Brief items
Brute-Force SSH Server Attacks Surge (InformationWeek)
InformationWeek reports on an increase in attacks against SSH servers. "The paper focuses on the vulnerability of Linux systems to brute-force SSH attacks... 'Linux systems face a unique threat of compromise from brute-force attacks against SSH servers that may be running without the knowledge of system owners/operators. Many Linux distributions install the SSH service by default, some without the benefit of an effective firewall.'"
Mozilla ships a compromised extension
From the Mozilla security blog: "The Vietnamese language pack for Firefox 2 contains inserted code to load remote content. This code is the result of a virus infection, but does not contain the virus itself. This usually results in the user seeing unwanted ads, but may be used for more malicious actions. Everyone who downloaded the most recent Vietnamese language pack since February 18, 2008 got an infected copy." Presumably this is only an issue for Windows users, but it is still scary. More information can be found in bugzilla.
Cryptographic weakness on Debian systems
The Debian project has sent out an advisory stating that, due to a Debian-specific modification to the openssl package, cryptographic keys generated on affected systems may be guessable. "It is strongly recommended that all cryptographic key material which has been generated by OpenSSL versions starting with 0.9.8c-1 on Debian systems is recreated from scratch. Furthermore, all DSA keys ever used on affected Debian systems for signing or authentication purposes should be considered compromised." The project has disabled public key logins on its internal infrastructure in response.
Tech Insight: Finding & Prioritizing Web Application Vulnerabilities (Dark Reading)
Dark Reading analyzes web application vulnerabilities with an eye towards triage—choosing the right ones to address first. "While there are many Web application vulnerabilities, they aren't all the same. Each one may represent different levels of danger to different organizations, depending on the sensitivity of the data available through the site, user access privileges, and the accessibility of other internal systems from the Web and database servers."
New vulnerabilities
bugzilla: multiple vulnerabilities
Package(s): bugzilla
CVE #(s): CVE-2008-2103 CVE-2008-2105
Created: May 12, 2008
Updated: May 14, 2008
Description: From the Red Hat bugzilla: CVE-2008-2103: Cross-site scripting (XSS) vulnerability in Bugzilla 2.17.2 and later allows remote attackers to inject arbitrary web script or HTML via the id parameter to the "Format for Printing" view or "Long Format" bug list. CVE-2008-2105: email_in.pl in Bugzilla 2.23.4, and later versions before 3.0, allows remote authenticated users to more easily spoof the changer of a bug via a @reporter command in the body of an e-mail message, which overrides the e-mail address as normally obtained from the From e-mail header. NOTE: since From headers are easily spoofed, this only crosses privilege boundaries in environments that provide additional verification of e-mail addresses.
cdf: buffer overflow
Package(s): cdf
CVE #(s): CVE-2008-2080
Created: May 14, 2008
Updated: May 14, 2008
Description: Versions of the Common Data Format library prior to 3.2.1 suffer from a buffer overflow which could be exploitable via a specially-crafted CDF file.
egroupware: denial of service
Package(s): egroupware
CVE #(s): CVE-2008-2041 CVE-2008-1502
Created: May 8, 2008
Updated: October 13, 2009
Description: From the Gentoo alert: A vulnerability has been reported in FCKEditor due to the way that file uploads are handled in the file editor/filemanager/upload/php/upload.php when a filename has multiple file extensions (CVE-2008-2041). Another vulnerability exists in the _bad_protocol_once() function in the file phpgwapi/inc/class.kses.inc.php, which allows remote attackers to bypass HTML filtering (CVE-2008-1502).
gforge: temporary file vulnerability
Package(s): gforge
CVE #(s): CVE-2008-0167
Created: May 14, 2008
Updated: May 14, 2008
Description: GForge opens files for writing in an insecure manner, leaving open the possibility of file overwrite attacks by a local user.
firebird: information disclosure
Package(s): firebird
CVE #(s): CVE-2008-1880
Created: May 9, 2008
Updated: May 14, 2008
Description: From the Gentoo advisory: Viesturs reported that the default configuration for Gentoo's init script ("/etc/conf.d/firebird") sets the "ISC_PASSWORD" environment variable when starting Firebird. It will be used when no password is supplied by a client connecting as the "SYSDBA" user.
imagemagick: heap-based buffer overflows
Package(s): ImageMagick
CVE #(s): CVE-2008-1096 CVE-2008-1097
Created: May 9, 2008
Updated: November 19, 2013
Description: From the Mandriva advisory: A heap-based buffer overflow vulnerability was found in how ImageMagick parsed XCF files. If ImageMagick opened a specially-crafted XCF file, it could be made to overwrite heap memory beyond the bounds of its allocated memory, potentially allowing an attacker to execute arbitrary code on the system running ImageMagick (CVE-2008-1096). Another heap-based buffer overflow vulnerability was found in how ImageMagick processed certain malformed PCX images. If ImageMagick opened a specially-crafted PCX image file, an attacker could possibly execute arbitrary code on the system running ImageMagick (CVE-2008-1097).
inspIRCd: buffer overflow
Package(s): inspircd
CVE #(s): CVE-2008-1925
Created: May 9, 2008
Updated: May 14, 2008
Description: From the CVE entry: Buffer overflow in InspIRCd before 1.1.18, when using the namesx and uhnames modules, allows remote attackers to cause a denial of service (daemon crash) via a large number of channel users with crafted nicknames, idents, and long hostnames.
libid3tag: infinite loop
Package(s): libid3tag
CVE #(s): CVE-2008-2109
Created: May 13, 2008
Updated: May 20, 2008
Description: From the CVE entry: field.c in the libid3tag 0.15.0b library allows context-dependent attackers to cause a denial of service (CPU consumption) via an ID3_FIELD_TYPE_STRINGLIST field that ends in '\0', which triggers an infinite loop.
libvorbis: multiple vulnerabilities
Package(s): libvorbis
CVE #(s): CVE-2008-1419 CVE-2008-1420 CVE-2008-1423
Created: May 14, 2008
Updated: August 25, 2009
Description: The libvorbis library contains several vulnerabilities exploitable by way of a specially-crafted Ogg file.
licq: denial of service
Package(s): licq
CVE #(s): CVE-2008-1996
Created: May 13, 2008
Updated: July 31, 2008
Description: From the CVE entry: licq before 1.3.6 allows remote attackers to cause a denial of service (file-descriptor exhaustion and application crash) via a large number of connections.
ltsp: multiple vulnerabilities
Package(s): ltsp
CVE #(s): (none)
Created: May 9, 2008
Updated: May 14, 2008
Description: The Linux Terminal Server Project ships copies of packages with vulnerabilities. Many of these vulnerabilities have likely been fixed by your distribution provider in the individual packages, but not in the ltsp bundled copies.
moinmoin: privilege escalation
Package(s): moinmoin
CVE #(s): CVE-2008-1937
Created: May 12, 2008
Updated: May 14, 2008
Description: From the Gentoo advisory: It has been reported that the user form processing in the file userform.py does not properly manage users when using Access Control Lists or a non-empty superusers list. A remote attacker could exploit this vulnerability to gain superuser privileges on the application.
nagios: cross-site scripting
Package(s): nagios
CVE #(s): CVE-2007-5803 CVE-2008-1360
Created: May 9, 2008
Updated: September 14, 2009
Description: From the CVE entry: Cross-site scripting (XSS) vulnerability in Nagios before 2.11 allows remote attackers to inject arbitrary web script or HTML via unknown vectors to unspecified CGI scripts, a different issue than CVE-2007-5624.
openssl: predictable random number generator
Package(s): openssl
CVE #(s): CVE-2008-0166
Created: May 13, 2008    Updated: June 19, 2008
Description: This Debian advisory states that, due to a Debian-specific modification to the openssl package, cryptographic keys generated on affected systems may be guessable. See also this brief article for more information.
php5: multiple vulnerabilities
Package(s): php5
CVE #(s): CVE-2007-3806 CVE-2008-1384 CVE-2008-2050 CVE-2008-2051
Created: May 12, 2008    Updated: January 22, 2009
Description: From the Debian advisory:
CVE-2007-3806: The glob function allows context-dependent attackers to cause a denial of service and possibly execute arbitrary code via an invalid value of the flags parameter.
CVE-2008-1384: Integer overflow allows context-dependent attackers to cause a denial of service and possibly have other impact via a printf format parameter with a large width specifier.
CVE-2008-2050: Stack-based buffer overflow in the FastCGI SAPI.
CVE-2008-2051: The escapeshellcmd API function could be attacked via incomplete multibyte chars.
php: PATH_TRANSLATED miscalculation
Package(s): php
CVE #(s): CVE-2008-0599
Created: May 8, 2008    Updated: November 17, 2008
Description: From the National Vulnerability Database: CVE-2008-0599: cgi_main.c in PHP before 5.2.6 does not properly calculate the length of PATH_TRANSLATED, which has unknown impact and attack vectors.
rdesktop: multiple vulnerabilities
Package(s): rdesktop
CVE #(s): CVE-2008-1801 CVE-2008-1802 CVE-2008-1803
Created: May 12, 2008    Updated: September 19, 2008
Description: From the Debian advisory:
CVE-2008-1801: Remote exploitation of an integer underflow vulnerability allows attackers to execute arbitrary code with the privileges of the logged-in user.
CVE-2008-1802: Remote exploitation of a BSS overflow vulnerability allows attackers to execute arbitrary code with the privileges of the logged-in user.
CVE-2008-1803: Remote exploitation of an integer signedness vulnerability allows attackers to execute arbitrary code with the privileges of the logged-in user.
sarg: stack buffer overflows
Package(s): sarg
CVE #(s): CVE-2008-1922
Created: May 9, 2008    Updated: September 8, 2010
Description: Multiple stack buffer overflows have been fixed in the Squid logfile analyzer sarg.
sipp: arbitrary code execution
Package(s): sipp
CVE #(s): CVE-2008-1959
Created: May 12, 2008    Updated: May 14, 2008
Description: From the Fedora advisory: Stack-based buffer overflow in the get_remote_video_port_media function in call.cpp in SIPp 3.0 allows remote attackers to cause a denial of service and possibly execute arbitrary code via a crafted SIP message.
X11 terms: privilege escalation
Package(s): aterm
CVE #(s): CVE-2008-1142 CVE-2008-1692
Created: May 8, 2008    Updated: October 30, 2008
Description: Privilege escalation vulnerabilities have been found in the following X11 terminal emulators: aterm, Eterm, Mrxvt, multi-aterm, RXVT, rxvt-unicode, and wterm. From the Gentoo alert: Bernhard R. Link discovered that Eterm opens a terminal on :0 if the "-display" option is not specified and the DISPLAY environment variable is not set. Further research by the Gentoo Security Team has shown that aterm, Mrxvt, multi-aterm, RXVT, rxvt-unicode, and wterm are also affected.
xen: multiple vulnerabilities
Package(s): xen
CVE #(s): CVE-2007-5730 CVE-2008-1943 CVE-2008-1944 CVE-2008-2004
Created: May 13, 2008    Updated: May 13, 2009
Description: From the Red Hat advisory:
Tavis Ormandy found that QEMU did not perform adequate sanity-checking of data received via the "net socket listen" option. A malicious local administrator of a guest domain could trigger this flaw to potentially execute arbitrary code outside of the domain. (CVE-2007-5730)
Markus Armbruster discovered that the hypervisor's para-virtualized framebuffer (PVFB) backend failed to validate the frontend's framebuffer description. This could allow a malicious user to cause a denial of service, or to use a specially crafted frontend to compromise the privileged domain (Dom0). (CVE-2008-1943)
Daniel P. Berrange discovered that the hypervisor's para-virtualized framebuffer (PVFB) backend failed to validate the format of messages serving to update the contents of the framebuffer. This could allow a malicious user to cause a denial of service, or compromise the privileged domain (Dom0). (CVE-2008-1944)
Chris Wright discovered a security vulnerability in the QEMU block format auto-detection, when running fully-virtualized guests. Such fully-virtualized guests, with a raw formatted disk image, were able to write a header to that disk image describing another format. This could allow such guests to read arbitrary files in their hypervisor's host. (CVE-2008-2004)
zoneminder: arbitrary code execution
Package(s): zoneminder
CVE #(s): CVE-2008-1381
Created: May 12, 2008    Updated: May 14, 2008
Description: From the Red Hat bugzilla: ZoneMinder prior to version 1.23.3 contains unescaped PHP exec() calls which can allow an authorised remote user the ability to run arbitrary code as the Apache httpd user.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current 2.6 prepatch is 2.6.26-rc2, released on May 12. This release is dominated by fixes, but also turns the big kernel lock (back) into a spinlock; see the article below for more information.

The current -mm tree is 2.6.26-rc2-mm1. Recent changes to -mm include a rebasing on top of the linux-next tree, a memory initialization debugging framework, the removal of the v850 port, a new set of system calls with additional flags arguments (see below), a WARN() macro which takes printk()-style arguments, ext4 online defragmentation, ext4 FIEMAP support, and the OMFS filesystem.
The current stable 2.6 kernel is 2.6.25.3, released on May 9. This release came out earlier than planned due to the inclusion of a couple of security fixes.
As of this writing, the 2.6.25.4 update - containing a few dozen fixes - is in the review process.
Kernel development news
Quotes of the week
19:29  * dilinger sends his patch to lkml for a proper flaming
19:32  <Quozl> dilinger: it's called "patch hardening"? ;-} the heat of the flames makes the patch stronger.
The big kernel lock strikes again
When Alan Cox first made Linux work on multiprocessor systems, he added a primitive known as the big kernel lock (or BKL). This lock, originally, ensured that only one processor could be running kernel code at any given time. Over the years, the role of the BKL has diminished as increasingly fine-grained locking - along with lock-free algorithms - has been implemented throughout the kernel. Getting rid of the BKL entirely has been on the list of things to do for some time, but progress in that direction has been slow in recent years. A recent performance regression tied to the BKL might give some new urgency to that task, though; it also shows how subtle algorithmic changes can make a big difference.

The AIM benchmark attempts to measure system throughput by running a large number of tasks (perhaps thousands of them), each of which is exercising some part of the kernel. Yanmin Zhang reported that his AIM results got about 40% worse under the 2.6.26-rc1 kernel. He took the trouble to bisect the problem; the guilty patch turned out to be the generic semaphores code. Reverting that patch made the performance regression go away - at the cost of restoring over 7,000 lines of old, unlamented code. The thought of bringing back the previous semaphore implementation was enough to inspire a few people to look more deeply at the problem.
It did not take too long to narrow the focus to the BKL, which was converted to a semaphore a few years ago. That part of the process was easy - there aren't a whole lot of other semaphores left in the kernel, especially in performance-critical places. But the BKL stubbornly remains in a number of core places, including the fcntl() system call, a number of ioctl() implementations, the TTY code, and open() for char devices. That's enough for a badly-performing BKL to create larger problems, especially when running VFS-heavy benchmarks with a lot of contention.
Ingo Molnar tracked down the problem in the new semaphore code. In short: the new semaphore code is too fair for its own good. When a semaphore is released, and there is another thread waiting for it, the semaphore is handed over to the new thread (which is then made runnable) at that time. This approach ensures that threads obtain the semaphore in something close to the order in which they asked for it.
The problem is that fairness can be expensive. The thread waiting for the semaphore may be on another processor, its cache could be cold, and it might be at a low enough priority that it will not even begin running for some time. Meanwhile, another thread may request the semaphore, but it will get put at the end of the queue behind the new owner, which may not be running yet. The result is a certain amount of dead time where no running thread holds the semaphore. And, in fact, Yanmin's experience with the AIM benchmark showed this: his system was running idle almost 50% of the time.
The solution is to bring in a technique from the older semaphore code: lock stealing. If a thread tries to acquire a semaphore, and that semaphore is available, that thread gets it regardless of whether a different thread is patiently waiting in the queue. Or, in other words, the thread at the head of the queue only gets the semaphore once it starts running and actually claims it; if it's too slow, somebody else might get there first. In human interactions, this sort of behavior is considered impolite (in some cultures, at least), though it is far from unknown. In a multiprocessor computer, though, it makes the difference between acceptable and unacceptable performance - even a thread which gets its lock stolen will benefit in the long run.
Interestingly, the patch which implements this change was merged into the mainline, then reverted before 2.6.26-rc2 came out. The initial reason for the revert was that the patch broke semaphores in other situations; for some usage patterns, the semaphore code could fail to wake a thread when the semaphore became available. This bug could certainly have been fixed, but it appears that things will not go that way - there is a bit more going on here.
What is happening instead is that Linus has committed a patch which simply turns the BKL into a spinlock. By shorting out the semaphore code entirely, this patch fixes the AIM regression while leaving the slow (but fair) semaphore code in place. This change also makes the BKL non-preemptible, which will not be entirely good news for those who are concerned with latency issues - especially the real time tree.
The reasoning behind this course of action would appear to be this: both semaphores and the BKL are old, deprecated mechanisms which are slated for minimization (semaphores) or outright removal (BKL) in the near future. Given that, it is not worth adding more complexity back into the semaphore code, which was dramatically simplified for 2.6.26. And, it seems, Linus is happy with a sub-optimal BKL:
So the end result of all this may be a reinvigoration of the effort to remove the big kernel lock from the kernel. It still is not something which is likely to happen over the next few kernel releases: there is a lot of code which can subtly depend on BKL semantics, and there is no way to be sure that it is safe without auditing it in detail. And that is not a small job. Alan Cox has been reworking the TTY code for some time, but he has some ground to cover yet - and the TTY code is only part of the problem. So the BKL will probably be with us for a while yet.
Getting a handle on caching
Memory management changes (for the x86 architecture) have caused surprises for a few kernel developers. As these issues have been worked out, it has become clear that not everybody understands how memory caching works on contemporary systems. In an attempt to bring some clarity, Arjan van de Ven wrote up some notes and sent them to your editor, who has now worked them into this article. Thanks to Arjan for putting this information together - all the useful stuff found below came from him.

As readers of What every programmer should know about memory will have learned, the caching mechanisms used by contemporary processors are crucial to the performance of the system. Memory is slow; without caching, systems will run much slower. There are situations where caching is detrimental, though, so the hardware must provide mechanisms which allow for control over caching with specific ranges of memory. With 2.6.26, Linux is (rather belatedly) starting to catch up with the current state of the art on x86 hardware; that, in turn, is bringing some changes to how caching is managed.
It is good to start with a definition of the terms being used. If a piece of memory is cachable, that means:
- The processor is allowed to read that memory into its cache at any time. It may choose to do so regardless of whether the currently-executing program is interested in reading that memory. Reads of cachable memory can happen in response to speculative execution, explicit prefetching, or a number of other reasons. The CPU can then hold the contents of this memory in its cache for an arbitrary period of time, subject only to an explicit request to release the cache line from elsewhere in the system.
- The CPU is allowed to write the contents of its cache back to memory at any time, again regardless of what any running program might choose to do. Memory which has never been changed by the program might be rewritten, or writes done by a program may be held in the cache for an arbitrary period of time. The CPU need not have read an entire cache line before writing that line back.
What this all means is that, if the processor sees a memory range as cachable, it must be possible to (almost) entirely disconnect the operations on the underlying device from what the program thinks it is doing. Cachable memory must always be readable without side effects. Writes have to be idempotent (writing the same value to the same location several times has the same effect as writing it once), ordering-independent, and size-independent. There must be no side effects from writing back a value which was read from the same location. In practice, this means that what sits behind a cachable address range must be normal memory - though there are some other cases.
If, instead, an address range is uncachable, every read and write operation generated by software will go directly to the underlying device, bypassing the CPU's caches. The one exception is with writes to I/O memory on a PCI bus; in this case, the PCI hardware is allowed to buffer and combine write operations. Writes are not reordered with reads, though, which is why a read from I/O memory is often used in drivers for PCI devices as a sort of write barrier.
A variant form of uncached access is write combining. For read operations, write-combined memory is the same as uncachable memory. The hardware is, however, allowed to buffer consecutive write operations and execute them as a smaller series of larger I/O operations. The main user of this mode is video memory, which often sees sequential writes and which offers significant performance improvements when those writes are combined.
The important thing is to use the right cache mode for each memory range. Failure to make ordinary memory cachable can lead to terrible performance. Enabling caching on I/O memory can cause strange hardware behavior, corrupted data, and is probably implicated in global warming. So the CPU and the hardware behind a given address must agree on caching.
Traditionally, caching has been controlled with a CPU feature called "memory type range registers," or MTRRs. Each processor has a finite set of MTRRs, each of which controls a range of the physical address space. The BIOS sets up at least some of the MTRRs before booting the operating system; some others may be available for tweaking later on. But MTRRs are somewhat inflexible, subject to the BIOS not being buggy, and are limited in number.
In more recent times, CPU vendors have added a concept known as "page attribute tables," or PAT. PAT, essentially, is a set of bits stored in the page table entries which control how the CPU does caching for each page. The PAT bits are more flexible and, since they live in the page table entries, they are difficult to run out of. They are also completely under the control of the operating system instead of the BIOS. The only problem is that Linux doesn't support PAT on the x86 architecture, despite the fact that the hardware has had this capability for some years.
The lack of PAT support is due to a few things, not the least of which has been problematic support on the hardware side. Processors have stabilized over the years, though, to the point that it is possible to create a reasonable whitelist of CPU families known to actually work with PAT. There have also been challenges on the kernel side; when multiple page table entries refer to the same physical page (a common occurrence), all of the page table entries must use the same caching mode. Even a brief window with inconsistent caching can be enough to bring down the system. But the code on the kernel side has finally been worked into shape; as a result, PAT support was merged for the 2.6.26 kernel. Your editor is typing this on a PAT-enabled system with no ill effects - so far.
On most systems, the BIOS will set MTRRs so that regular memory is cachable and I/O memory is not. The processor can then complicate the situation with the PAT bits. In general, when there is a conflict between the MTRR and PAT settings, the setting with the lower level of caching prevails. The one exception appears to be when one says "uncachable" and the other enables write combining; in that case, write combining will be used. So the CPU, through the management of the PAT bits, can make a couple of effective changes:
- Uncached memory can have write combining turned on. As noted above, this mode is most useful for video memory.
- Normal memory can be made uncached. This mode can also be useful for video memory; in this case, though, the memory involved is normal RAM which is also accessed by the video card.
Linux device drivers must map I/O memory before accessing it; the function which performs this task is ioremap(). Traditionally, ioremap() made no specific changes to the cachability of the remapped range; it just took whatever the BIOS had set up. In practice, that meant that I/O memory would be uncachable, which is almost always what the driver writer wanted. There is a separate ioremap_nocache() variant for cases where the author wants to be explicit, but use of that interface has always been rare.
In 2.6.26, ioremap() was changed to map the memory uncached at all times. That created a couple of surprises in cases where, as it happens, the memory range involved had been cachable before and that was what the code needed. As of 2.6.26, such code will break until the call is changed to use the new ioremap_cache() interface instead. There is also an ioremap_wc() function for cases where a write-combined mapping is needed.
It is also possible to manipulate the PAT entries for an address range explicitly:
    int set_memory_uc(unsigned long addr, int numpages);
    int set_memory_wc(unsigned long addr, int numpages);
    int set_memory_wb(unsigned long addr, int numpages);
These functions will set the given pages to uncachable, write-combining, or writeback (cachable), respectively. Needless to say, anybody using these functions should have a firm grasp of exactly what they are doing or unpleasant results are certain.
Extending system calls
Getting interfaces right is a hard, but necessary, task, especially when that interface has to be supported "forever". Such is the case with the system call interface that the kernel presents to user space, so adding features to it must be done very carefully. Even so, when Ulrich Drepper set out to remove a hole that could lead to a race condition, he probably did not expect all the different paths that would need to be tried before closing in on an acceptable solution.
The problem stems from wanting to be able to create file descriptors with new properties—things like close-on-exec, non-blocking, or non-sequential descriptors. Those features were not considered when the system call interface was developed. After all, many of those system calls are essentially unchanged from early Unix implementations of the 1970s. The open() call is the most obvious way to request a file descriptor from the kernel, but there are plenty of others.
In fact, open() is one of the easiest to extend with new features because of its flags argument. Calls like pipe(), socket(), accept(), epoll_create() and others produce file descriptors as well, but don't have a flags argument available. Something different would have to be done to support additional features for the file descriptors resulting from those calls.
The close-on-exec functionality is especially important for closing a security hole in multi-threaded programs. Currently, programs can use fcntl() to change an open file descriptor to have the close-on-exec property, but there is always a window in time between the creation of the descriptor and the changing of its behavior. Another thread could do an exec() call in that window, leaking a potentially sensitive file descriptor into the newly run program. Closing that window requires an in-kernel solution.
Back in June of last year, after some false starts, Linus Torvalds suggested adding an indirect() system call as a way to pass flags to system calls that don't currently support them. The indirect() call would apply a set of flags to the invocation of an existing system call. This would allow existing calls to remain unchanged, with only new uses calling indirect(). User space programs would be unlikely to call the new function directly; instead, they would call glibc functions that handled any necessary indirect() calls.
Davide Libenzi created a sys_indirect() patch in July, but Drepper saw it as "more complex than warranted". So Drepper created his own "trivial" implementation, that was described on this page in November. It was met with a less than enthusiastic response on linux-kernel for being, amongst other things, an exceedingly ugly interface.
The alternative to sys_indirect() is to create a new system call for each existing call that needed a flags argument. This was seen as messy by some, including Torvalds, leading some kernel hackers to look for alternatives. The indirect approach also had some other potential benefits, though, because it was seen as something that could be used by syslets to allow asynchronous system calls. No decision seemed to be forthcoming, leading Drepper to ask Torvalds for one:
To bolster his argument that sys_indirect() was the way to go, Drepper also created a patch to add some of the required system calls. He started with the socket() family, by adding socket4(), socketpair5(), and accept4()—tacking the number of arguments onto the function name a la wait3() and wait4(). Drepper's intent may not have been well served by choosing those calls, as Alan Cox immediately noted that the type argument could be overloaded:
    socket(PF_INET, SOCK_STREAM|SOCK_CLOEXEC, ...)

that would be far far cleaner, no new syscalls on the socket side at all.
Michael Kerrisk looked over the set of system calls that generate file descriptors, categorizing them based on whether they needed a flag argument added. He observed that roughly half of the file descriptor producing calls need not change because they could either use an overloading trick like the socket calls, the glibc API already added a flags argument, or there were alternatives available to provide the same functionality along with flags.
In response, Drepper made one last attempt to push the indirect approach, saying:
Even though the indirect approach has some good points, Torvalds liked the approach advocated by Cox, saying:
Ultimately, developers will only use these new interfaces if they can easily test for the existence of the new code. Torvalds gives an example of how that might be done using the O_NOATIME flag to open(), which has only been available since 2.6.8. It is this testability issue that makes him believe the flags-based approach is the right one:
This new approach, with a scaled down number of new system calls rather than adding a general-purpose system call extension mechanism like sys_indirect(), is now being pursued by Drepper. In the explanatory patch at the start of the series, he lays out which of the system calls will require a new user space interface: paccept(), epoll_create2(), dup3(), pipe2(), and inotify_init1(), as well as those that do not: signalfd4(), eventfd2(), timerfd_create(), socket(), and socketpair().
Drepper has already made several iterations of patches addressing most of the concerns expressed by the kernel developers along the way. There have been some architecture specific problems, but Drepper has been knocking those down as well. If no further roadblocks appear, it would seem a likely candidate for inclusion in 2.6.27.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
A Talk with Fedora Project Leader Paul Frields
Late last week I had the pleasure of talking with Fedora Project Leader Paul Frields. Our conversation covered a range of Fedora Project topics, including Fedora 9, the latest Fedora release.

One thing Paul is passionate about is getting people to volunteer. There are many ways to get involved with the Fedora Project, lots of sub-projects and Special Interest Groups (SIGs) that people can join depending on their interests and talents. The Fedora Project wiki is a good starting point for finding out more. The Join Fedora page also goes into the various roles that a Fedora contributor might be suited for, with easy links to setting up a Fedora account and using the Fedora Account system. You don't have to be a programmer or a computer expert to contribute to the project.
Joining the Fedora Project is easier now than it ever was during Fedora's five-year history. As a result, Fedora now has over 2000 registered account holders. That includes about 350 ambassadors who promote Fedora in their local area. In addition to making it easier to become a Fedora contributor, a variety of new web applications and collaborative tools are now available for contributors. Of course, all Fedora infrastructure is Free Software, available in the Fedora repository, and running on Fedora.
All registered account holders may vote in Fedora elections, which is worth noting because there is an election coming up in June.
The composition of the Fedora board was recently changed to five elected members of the nine board seats. Four of those seats will be voted on in the next election. The other board seats are appointed by Red Hat, but are not necessarily Red Hat employees. Red Hat retains some control by employing and appointing the Project Leader. Paul took a job with Red Hat when he was offered the position of Project Leader.
Paul mentioned that former Fedora Project Leader Max Spevack is moving to the Netherlands to organize and manage Fedora volunteers in Europe. Paul also mentioned that Fedora has many Brazilian contributors. Of course, Red Hat employs some Fedora engineers. There are fourteen Red Hat employees working full time on Fedora, mostly acting as team leaders and organizing the volunteers. In addition, all Red Hat engineers will spend some fraction of their time working on Fedora in areas where Red Hat Enterprise Linux is involved.
Some people think of Fedora as a beta for Red Hat Enterprise Linux, but it's more realistic to think of Fedora as the upstream source for its enterprising cousin and spin-offs such as CentOS. So even though Fedora is a community project, Red Hat is still very involved in its development.
FUDCon (Fedora User & Developer Conference) is an event held on an irregular schedule several times per year. Some are smaller events held in conjunction with a larger event, such as the May 30, 2008 FUDCon, which will be held at LinuxTag in Berlin, Germany. Further out, there is some talk of having a mini-FUDCon at the 2009 linux.conf.au. The Boston FUDCon, coming up in June, will run for several days. Co-located with the Red Hat Summit, the Boston FUDCon will feature hackfests, a barcamp and technical talks.
The Red Hat Summit will bring in Red Hat customers, and include talks about actual use cases. These talks should be interesting for Fedora developers, who will have a chance to see what people are doing with their work downstream. FUDCon is open to anyone, so stop by if there is a FUDCon in your area.
On to the just released Fedora 9 and the upcoming Fedora 10. Fedora 9 is one of the first major releases to feature KDE 4 by default. To make this work, the KDE SIG has built a compatibility library to keep KDE 3 applications running properly. For Fedora 10 Casey Dahlin is working on replacing the init system with upstart, the system developed for Ubuntu.
Some other items that we touched on briefly: Fedora maintains an open build system and works at getting patches upstream. The project also strives to cooperate with other distributions. From what I've seen, Fedora 9 looks very good, attractive and functional. Now that rawhide has moved on to Fedora 10 it will be a rough ride for at least a few days. So stick with Fedora 9, or get it from a mirror near you.
Fedora 9 is Paul's first release as Project Leader and he had a few words to add: "It's been less than five years since the first release of Fedora (back when it was called Fedora Core), and in that time Fedora has become not just a vibrant, innovative, and extremely popular Linux distribution, but also a thriving community. A community that believes that free and open source software is not just something you *use*, it's something you *do* -- something to which you *contribute*."
New Releases
Fedora 9 released
Fedora 9 is out. "First to hit were the live USB keys. The heathens cried out for mercy, but were powerless to resist. The sticks were damn persistent and non-destructively formatted - non-destructively! They showed up everywhere, casting out demons from computers infected by the dark one of the interwebs and rescuing lost data from the influence of the evil crackers." See the release notes for a rather more sober description of Fedora 9.
BLFS-6.3-rc1 has been released!
Beyond Linux From Scratch (BLFS) has released the first release candidate for version 6.3. The final version is expected before the end of May. See the release notes for more information.
Distribution News
Debian GNU/Linux
Surveying the Debian community!
Several weeks ago we mentioned a couple of surveys for the Debian community. These surveys will be closing soon, at 1:00 UTC on June 1, 2008. There is a Debian user survey and a Debian DM/AM/NM survey.
Fedora
Red Hat Board Appointments
Paul Frields has a note on the upcoming Fedora Board elections. "As everyone probably knows, the Fedora Board is moving into an election season due to the release of another Fedora. In advance of the election, Red Hat appoints one seat, and the final seat is appointed afterward to make sure the Board is fairly balanced to represent the Board's many constituents." Red Hat has named Harald Hoyer, a Senior Software Engineer in Red Hat's Stuttgart office, to occupy one of the two open appointed seats.
Rawhide moving on to Fedora 10
Starting tomorrow, with the release of Fedora 9, Fedora "rawhide" will be composed of Fedora 10 content. "This will likely fail in spectacular ways due to all the pent up builds so it should be interesting." Click below for more information and remember: "Keep all body parts inside the cart at all times. Buckle up and enjoy the ride!"
rpm.livna.org repositories for Fedora 9 (Sulphur) now available
The Livna.org package repository for Fedora 9 (Sulphur) is open for business. "The Livna repository hosts software as RPM packages which cannot be shipped in the official Fedora repository for various reasons and supports the i386, x86_64 and ppc architectures."
Gentoo Linux
Gentoo council meeting summary for 8 May 2008
Click below for a summary of the May 8th meeting of the Gentoo council. Some topics include Active-developer document, ChangeLog entries, Ignored arch-team bugs, 8-digit versions, Enforced retirement and New meeting process.
Ubuntu family
Xubuntu Meeting
The Xubuntu community had a meeting to resolve some issues. Click below for a summary of that meeting. "I'd first like to start off this e-mail by announcing the Xubuntu community meeting was a *huge* success. We had roughly two dozen people take part (including old, current, and new faces) and a number of other individuals who sent in e-mails or left a quick IRC message to let us know that they were unable to attend but would be following up with much interest."
Distribution Newsletters
OpenSUSE Weekly News/21
This week's edition of openSUSE Weekly News covers openSUSE 11.0 Beta 2, People of openSUSE: Greg Kroah-Hartman, Jigish Gohil: Sliced sphere in compiz-fusion-git packages, and much more.
PCLinuxOS Magazine May 2008 Released
The May 2008 edition of PCLinuxOS Magazine is out. This week's highlights include Manage your Ipod with Amarok, PCLinuxOS Based Distros, Quick Fix for Damaged Xorg, Don't Complain, Something Completely Different, and more.
Ubuntu Weekly Newsletter, Issue 90
The Ubuntu Weekly Newsletter for May 10, 2008 covers Ubuntu Brainstorm Growing, Ubuntu Finland receives award from Finland's Minister of Communications, Ubuntu Featured on Italian TV, submit questions for Launchpad podcast, Forums News and Interviews, Ubuntu UK Podcast Episode 5, and much more.
DistroWatch Weekly, Issue 252
The DistroWatch Weekly for May 12, 2008 is out. "It's a Fedora week here at DistroWatch. A new version of the popular distribution will be released later this week, complete with the usual cutting edge features, such as KDE 4, dramatic speed improvements, support for the ext4 file system and many others. One popular application set missing from the distro, however, is KDE 3.5, now relegated to the dustbins of history by the project. If you are a Fedora KDE user, should you upgrade or should you not? Read our first-impressions review of Fedora 9 KDE to obtain some answers. In the news section, openSUSE presents several user interface improvements for its package manager, Ubuntu prepares to deliver cool new features in Intrepid Ibex, Attila Craciun introduces the Slackware-based Bluewhite64 Linux, and PC-BSD updates its artwork and fixes bugs in preparation for the 7.0 release. Also included are several resources to help you manage your OpenSolaris system better and an interesting update on Oracle Enterprise Linux."
Newsletters and articles of interest
Hats off to Fedora 9 (DesktopLinux)
DesktopLinux takes a look at Fedora 9 and includes a mini-interview with Paul Frields. "Many people mistakenly believe that Red Hat started Fedora. In fact, the project began independently in 2003, as a "community" version of the popular Linux distribution. The idea was to emulate the "freeness" and community involvement of the Debian distribution, while still leveraging Red Hat's testing and integration work -- not to mention its more regular release cycle schedule."
Fedora 9 Released with KDE 4.0.3 (KDE.News)
KDE.News looks at KDE 4.0.3 in Fedora 9. "In addition to the inclusion of KDE 4 as the default KDE, Fedora 9 also comes with other major new features, such as the switch to Upstart to handle system startup, an improved NetworkManager including support for mobile broadband and systemwide configuration, a new, fast version of X.Org X11, TexLive replacing tetex, unified spellchecking dictionaries and much more."
CNR supports Linux Mint, adds Weatherbug (DesktopLinux)
DesktopLinux covers an announcement from Linspire. "Linspire has upgraded its CNR.com (Click'N'Run) download site for Linux software to support the Ubuntu-based, consumer-friendly Linux Mint distribution. CNR.com will also add a Linux version of Weatherbug's weather service, which offers live, local weather information and severe weather alerts."
Distribution reviews
Top 5 Tiny Distros (TuxMachines)
TuxMachines compares five tiny distributions. "I was cleaning up my /home partition when I noticed I had several tiny distros hanging around waiting to be tested. So I thought this might be a good time to write an updated Mini-distro Roundup. Unlike last time, the five contestants are all less than 88 MB in download size. The five contestants are CDlinux 0.6.1, Damn Small Linux 4.3r2, Puppy 4.0rc, Slitaz 1.0, and Austrumi 1.6.5. All of these are the latest stable except Damn Small and Puppy, that are release candidates. So, we'll cut them just a bit of slack in the stability department if need be."
Fedora 9 - an OS that even the Linux challenged can love (The Register)
The Register takes a look at Fedora 9. "Fedora 9, the latest release from the Fedora Project, goes up for download on Tuesday. The ninth release of Fedora ushers in a number of changes aimed at making the venerable distribution a more newbie-friendly desktop, but longtime users needn't fear a great dumbing down; version 9 packs plenty of power user punch as well."
Page editor: Rebecca Sobol
Development
The Freedom of Fork
One of the important rights that Free Software gives you is the ability to take the source code of any software, modify it, and release it again under a compatible Free Software license. This is a very important freedom: it not only allows users to customize the software they use to better suit their requirements, but also enables distributions to patch software to build in their environment. Environmental changes include new architectures and different versions of system tools and libraries. As with other important freedoms, this ability can prove to be a huge problem if not handled properly. There can be problems for the original author, the person doing the fork, and the users of the various versions of the software.
The story of Free Software is full of good examples of forks handled correctly, like the EGCS fork that transformed the GNU C Compiler into the GNU Compiler Collection (GCC), or more recently the replacement of Jörg Schilling's cdrtools with the cdrkit package that is now found in most distributions. Unfortunately, the list of bad examples is longer.
Historically, forking a project was a difficult task for most single developers: handling version control repositories (especially with CVS) was not something done easily. It limited the task of forking to experienced developers, who usually had enough common sense to know when forking was not an option.
Nowadays, forking is much easier: Subversion allows developers to easily fetch the whole history of a project. Distributed version control systems (DVCS) like git, Mercurial, Bazaar-NG and others remove the need for a central repository, making forking and branching two very similar activities. Recently, the GitHub hosting site has made this action even more prominent by adding a "fork" button on the pages for repositories hosted on its servers, allowing anybody to create a new branch (or fork) of a project with a single mouse click.
The Downsides of Forking
Forking is not always the best option; it should probably be considered the last resort. Forking divides efforts, as the two projects often take slightly different turns. The result of the fork is that the two versions of the code diverge, even though they share the same interface and most of the background logic. This creates a series of technical problems that reflect on the non-technical attributes of a program.
A forked project reuses a big part of the code from the original project. This causes code duplication, with its usual problems, and one in particular: security risks. A forked project is usually vulnerable to the same problems the original project had, unless that part of the code has been rewritten or modified over time. As the forks evolve, their authors often miss the security fixes made in the parent project, making it harder for developers to track the issues down.
Another common problem is the division of users' contributions. Users usually report issues to just one project, the one they use. So either the developers of the two projects exchange information about the bugs they fix in the common code, or the problems will likely be ignored by one of the two projects, causing them to drift further apart.
You can find this very problem with software like Ghostscript, the omnipresent PostScript processor, used to generate, view and convert PostScript files. Its development is currently divided into multiple forks which do not always give their code back to the originating project. You can find one version released under the AFPL (Aladdin Free Public License), one released under the GPL, a commercial/proprietary one, and one version that used to be developed by Easy Software Products, the authors of the CUPS printing system.
The reasons for the forks here were mostly related to licensing issues. And, in the case of ESP, to better support CUPS. In the end, the development of different bloodlines for the project caused, and still causes, problems for distribution maintainers. Distribution issues include keeping packages aligned, which means doubling the effort needed to fix the code if it breaks or if it doesn't follow policy.
Another case where dividing the development effort has caused problems is in the universe of Logitech mouse control software. The lmctl project was started as a tool to control some settings of Logitech devices, like resolution and cordless channels. The code has to know which devices have which settings available; to do this, it keeps a table of USB identifiers. As new devices appeared on the market and Linux users started using them, the table became outdated. Distributions patched this up, but in different ways, creating inconsistent tables. Some users started releasing their own modified versions of lmctl with an extended table to support different devices.
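A capability table of this kind is easy to picture. The Python sketch below shows the general idea; the product IDs and feature names are hypothetical, not lmctl's actual data (only the Logitech USB vendor ID, 0x046D, is real).

```python
# Hypothetical sketch of the kind of capability table lmctl keeps:
# USB (vendor, product) ID pairs mapped to the settings each device
# supports. The product IDs and feature names are illustrative only.
LOGITECH_VENDOR = 0x046D

DEVICE_TABLE = {
    (LOGITECH_VENDOR, 0xC00E): {"resolution"},             # example wheel mouse
    (LOGITECH_VENDOR, 0xC506): {"resolution", "channel"},  # example cordless mouse
}

def supported_settings(vendor, product):
    """Return the settings known for a device, or None when the table
    has no entry -- the situation that forced users to patch lmctl
    every time a new device appeared."""
    return DEVICE_TABLE.get((vendor, product))
```

When the patched tables live in different distributions and private forks, no single copy of this dictionary stays complete, which is exactly the inconsistency described above.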
While explicit forks of entire projects have problems, the fact that they delineate where they took the code from makes it easier to track down the source of bugs and handle security vulnerabilities. On the other hand, when a project borrows some code and imports it in its source distribution, this kind of tracking becomes more difficult. Free Software licenses explicitly allow, and push for, importing code between projects; cross-pollination also improves general code quality over time.
For most distributions, an internal imported copy of a library inside another project is also a violation of policy. For this reason the developers will most likely try to make the project use a shared, external copy of the code. This works fine when the library is simply bundled untouched, but it becomes a nuisance if there are subtle changes which might not be apparent at first glance. One thing to take into account when you want to keep an internal copy of a library is to treat it as an untouchable piece of code: instead of spending time fixing bugs inside that copy, the developers should try to fix the bugs in the original sources, so that everybody (including themselves) can benefit from the improvement.
In the real world, one example of this is the FFmpeg source code. FFmpeg is imported by many different Free Software projects in the area of multimedia: xine, MPlayer, GStreamer. While it is a very wide common ground for all these projects, as well as for some others, like VLC, that don't import a copy of it, some of the imports change the source code in more or less subtle ways. In the case of xine, the whole build system is replaced to integrate it with the automake-based build system used by the rest of the library. Further patching is done to the sources themselves so that they behave in a slightly different way than the original. The code rots quickly, and bugs that were already fixed in the in-development sources of FFmpeg still sprout in xine-lib.
Maintaining such an import is a difficult and boring task, to the point that the developers have, in the past two years, spent a lot of energy toward the goal of no longer using an internal copy of FFmpeg. The result is that the difference between the original FFmpeg and the internal copy is now much smaller, mostly limited to the build system. Instead of advising against using an external copy of FFmpeg, the advice is now not to use the internal one. For the next minor version of xine-lib, FFmpeg is being used pristine, entirely unpatched, and it will probably not even be bundled with the library in the near future.
Successful Forks
Of course it's not all bad. There are successful forks in Free Software, and many of them are now more famous than their parents. I've already named the GNU Compiler Collection, which is the GCC that almost all Free Software users have at hand at the moment. Most people use GCC version 3 or later, which started as a fork of the other GCC (the GNU C Compiler), version 2. The original development of GCC was, like that of many other GNU projects, very closed to the community.
This was the cathedral-style development that, as Eric S. Raymond described in his book The Cathedral and the Bazaar, often prepares the ground for forks, and GCC was no exception. Multiple forks of the GCC code were created. Their goals, while different, often didn't clash, and could easily have been worked on at the same time. Some of the forks were then merged into the EGCS project, which eventually replaced the original GCC.
Again citing GNU's cathedral-style of development, it's difficult not to talk about GNU Emacs and its brother XEmacs. Created originally to support one particular product, the XEmacs project is nowadays a mostly standalone project. XEmacs is kept at arm's length from GNU Emacs, mostly because of licensing and copyright assignment issues. Neither version can be considered a superset of the other, because they both implement features in their own way.
In better shape is Claws Mail, which started as a branch of Sylpheed under the name Sylpheed Claws. Originally the intention was to develop new features that could one day find their way back into the original code. Claws Mail has since declared itself independent and is now a stand-alone project. In this case, the exchange of code between the two projects has basically halted, as the code bases have diverged so much that they retain very little in common.
In the case of the Ultima Online server emulators, forks became daily events, and cross-pollination had grown to the point where at least five projects were linked by family ties. The UOX3 source code has been forked, reused, imported and cut down so that it is present in WolfPack, LoneWolf, NoX-Wizard and Hypnos. Almost all of the UOX3 forks involved re-writing parts of the code, as it had stratified to the point of not being maintainable. The forks continued copying one from the other to make use of the best features available.
Forking vs. Branching
There are a few good reasons why you might want to detach, temporarily, from a given development track: development of experimental features, new interfaces, backend rewrites, or resurrection of a project whose original authors are unavailable. In most of these cases, forking is not the best solution; branching most likely is. Although the border between these two actions has blurred thanks to distributed VCSs, branching usually doesn't involve setting up a new web page for the project, changing its name, or finding a new goal. And a branch is usually related, tightly or not, to the original project. Merges between the two code bases happen at more or less regular intervals, and ideas and bug reports are shared.
Branches are usually intended to be merged into the main development track, sooner for small, testing branches, later for huge rewrites. They don't usually divide efforts, as fixes for problems affecting the main branch are propagated to the other branches when they merge back the original code.
One common problem with developing through branches has been poor support in the Subversion version control system. In Subversion, branches are represented as different paths in the repository, with almost no support for branches in merge operations. With a modern distributed VCS, branches are so cheap that any checkout is, from some points of view, a different branch, and merge operations are one of the main focuses. Projects like the Linux kernel or xine-lib rely heavily on an above-average number of branches; these are often short-lived and used for testing purposes.
Looking to the Future
Forks will never end in Free Software as they are supported by one of the freedoms that make Free Software what we all want it to be. The future will, of course, bring new forks. Recently there has been a lot of talk about Funpidgin, a fork of the widespread Pidgin Instant Messaging client (formerly Gaim). Again it seems like it was the Cathedral-style development of the original code that motivated a fork that could give (some of) the users what they wanted.
And even though GNU Emacs opened its process quite a lot, its forks haven't stopped sprouting. This is despite the fact that Richard Stallman, original author and mastermind behind the GNU project, stepped down as maintainer, putting Stefan Monnier and Chong Yidong in his place. Aquamacs Emacs is still diverging from the original GNU Emacs to support Apple's Mac OS X, while different versions are being developed to support the multiple user interfaces one can use on that operating system. Similarly, although the Windows port of Emacs is already pretty solid, there are extensions being written to make it easier for users to adapt it to the Microsoft environment.
Forks are usually the effect of closed-circle, cathedral-style development, where some of the developers or users can't see their objectives being fulfilled despite all the energy being poured in. So just look for the projects that don't seem to be getting much love from a community, and you might find a fork starting to sprout its first leaves.
Then there is the Poppler project, which merged together the modified versions of the XPDF code imported by projects like GNOME and KDE for their PDF viewers. Poppler is soon going to be a nearly omnipresent PDF rendering library on Free Software desktops and beyond. This summer's milestone KDE 4.1 release will include the new oKular document viewer, which will use Poppler for PDF rendering on (stable) KDE users' desktops.
Conclusions
I'd suggest that anybody thinking about creating a fork should think twice. Forking is rarely a good choice; better options are branching or, if you need just part of the code, working together as the Poppler developers did to separate out and share the common parts.
When you want to make some changes to a software project, propose branching it, show the results to the original developers, and discuss with them how to improve the code. Most of the time you'll find the authors are open to the changes.
A fork is a grave matter. It might bring innovation to the Free Software community, but it could also separate developers who could otherwise work together, maybe in a better way. In this light, GitHub's one-click forking capability seems like a dangerous feature.
The ever-increasing ease of forking everything, from small projects to parts of, or even entire, distributions (think about Debian's repositories and Gentoo's overlays) is increasing the fragmentation of Free Software projects. Biodiversity in software can be a very good thing, just like in nature, but people should first try their best to work together, rather than against each other.
System Applications
Database Software
PostgreSQL Weekly News
The May 11, 2008 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
Device Drivers
LIRC 0.8.3 released
Version 0.8.3 of LIRC, an Infrared remote control interface, has been announced. A bug with the Irman remote has been fixed.
Embedded Systems
BusyBox 1.10.2 released
Stable version 1.10.2 of BusyBox, a collection of command line tools for embedded systems, has been announced. "Bugfix-only release for 1.10.x branch. It contains fixes for echo, httpd, pidof, start-stop-daemon, tar, taskset, tab completion in shells, build system."
Web Site Development
BencHTTP: 1.0 released (SourceForge)
Version 1.0 of BencHTTP has been announced. "BencHTTP is a utility to test the performance of HTTP servers under load. It is highly configurable via simple script files and allows extensive customisation of HTTP requests. The current server performance is continuously displayed during the test."
nginx 0.6.31 released
Version 0.6.31 of nginx, an HTTP server and mail proxy server, has been announced; this is a bug fix release. See the CHANGES file for more details.
OpenKM 2.0 released
Version 2.0 of OpenKM, a multi-platform Web 2.0 document management application, has been announced. "This new version entails the following improvements: the previsualization of multimedia elements such as images and videos, an improved and rewritten administration interface, a centralized management of templates, an exclusive area to allow users to store their private documentation, a tool for mass import and export of data via ZIP files, searches by date ranges, as well as translations to more languages."
Desktop Applications
Audio Applications
Audacity 1.3.5 beta released
Version 1.3.5 beta of Audacity, an audio editor, has been announced. "Changes include improvements and new features for recording, import/export and the user interface. Because it is a work in progress and does not yet come with complete documentation or translations into foreign languages, it is recommended for more advanced users."
Business Applications
Openbravo ERP: 2.35 maintenance pack 4 is available (SourceForge)
Version 2.35 maintenance pack 4 of Openbravo ERP, a web-based ERP for SMEs, has been announced. "This is a stabilization release with no additional functionality. It is intended for production usage."
Announcing feature update v1.3 of YaMA
Version 1.3 of YaMA has been announced. "Yet Another Meeting Assistant (YaMA), will help you with the Agenda, Meeting Invitations, Minutes of a Meeting as well as Action Items. If you are the assigned minute taker at any meeting, this tool is for you."
Desktop Environments
Announcement for GNOME SlackBuild GNOME 2.22.1 Desktop for Slackware 12.1
GNOME 2.22.1 is now available for Slackware 12.1, with the release of SlackBuild GNOME 2.22.1. "There have been a lot of improvements in this latest GSB release, including the move to PulseAudio, fewer package replacements, a GNOME-integrated Compiz-Fusion setup, the latest NetworkManager, Abiword 2.6, and OpenOffice 2.4 built for GNOME, a richer Mono C# suite, as well as all the great features of GNOME 2.22."
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Banshee 1.0 Beta 1 (new features and bug fixes)
- cheese 2.23.2 (new features, bug fixes and translation work)
- Deskbar-Applet 2.23.2 (new features, bug fixes and translation work)
- Eye of GNOME 2.23.2 (new features, bug fixes and translation work)
- GDM2 2.20.6 (new features and bug fixes)
- gnome-applets 2.23.2 (new features, bug fixes and translation work)
- Gnome Subtitles 0.8 (new features, bug fixes and translation work)
- Gossip 0.29 (new features, bug fixes and translation work)
- gstreamermm 0.9.5 (new features and documentation work)
- gtk-engines 2.15.1 (new features, bug fixes and translation work)
- libgnomekdb 2.23.2 (unstable release)
- mousetweaks 2.23.2 (bug fixes and translation work)
- Orca 2.23.2 (bug fixes and translation work)
- Pango-1.21.1 (new features and bug fixes)
- PyPoppler 0.8.0 (bug fixes and API update)
- PySwfdec 0.6.6.1 (bug fixes)
- Tomboy 0.11.0 (new features and bug fixes)
- Vala 0.3.2 (new features, bug fixes and documentation work)
KDE Software Announcements
The following new KDE software has been announced this week:
- Application Launcher-Desktop Organizer 1.0 (initial release)
- beagle KIO slave 0.4.0 (new features and code cleanup)
- cb2Bib 0.9.5 (new features and bug fixes)
- gambas 2 2.6 (new features and bug fixes)
- Kalasnikof 1.5.1 (bug fix)
- KDelicious 3.2.1 (translation work)
- KDVD-RAM Tools 0.5-RC1 (bug fix)
- KMyMoney 0.9 (new features, bug fixes and translation work)
- KRadioRipper 0.2.1 (new features and bug fixes)
- Krypt 0.2 (unspecified)
- KsirK 4.00.73 (v2 preview release)
- Kumblr 0.1 (initial release)
- 'Q' DVD-Author 1.2.0 (new features)
- QtiPlot 0.9.6 (new features and bug fixes)
- report rc 5 (bug fixes)
- report rc 6 (new features and code cleanup)
- SYS 0.22 (new packages)
- TaskJuggler 2.4.1 (bug fixes)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- fonttosfnt 1.0.4 (bug fixes)
- mkfontscale 1.0.5 (bug fixes)
- xf86-video-nv 2.1.9 (new features and bug fixes)
- xtrans 1.2 (bug fixes)
Desktop Publishing
LyX 1.5.5 is released
Version 1.5.5 of LyX, a GUI front-end to the TeX typesetter, is out. "We are pleased to announce the release of LyX 1.5.5. Being the fourth maintenance release in the 1.5.x cycle, this release further improves the stability and usability of the application. Besides this, it also introduces some new features. Most notably, LyX is now prepared to be compiled with Qt 4.4 that has just been released: the stability issues that occurred in previous versions of LyX when compiled against Qt 4.4 have been resolved."
Financial Applications
Buddi: 3.2 stable released (SourceForge)
Version 3.2 stable of Buddi has been announced. "Buddi is a simple budgeting program targeted for users with little or no financial background. It allows users to set up accounts and categories, record transactions, check spending habits, etc. I am happy to announce the first minor stable release on the 3.x branch. This version fixes a few bugs (including a bug when copying your files from OSX 10.4 Tiger to OSX 10.5 Leopard), and resolves a few feature requests. No major changes here, but it is recommended that all users upgrade."
KMyMoney 0.9 released
The long-awaited KMyMoney 0.9 release is out. There's a lot of new stuff here, including charts, budgets, forecasts ("Sadly, we are unable to accurately predict the future value of your investments"), a whole new set of wizards, better transaction auto-filling, and more. Note that this is a development release, but it's still an important step forward for a promising project.
Interoperability
Wine 1.0-rc1 released
Release 1.0-rc1 of Wine has been announced. "This is the first release candidate for Wine 1.0. Please give it a good testing to help us make 1.0 as good as possible. In particular please help us look for apps that used to work, but don't now."
Office Suites
KOffice 2.0 Alpha 7 released (KDE.News)
KDE.News covers the release of KOffice 2.0 Alpha 7. "The KDE Project today announced the release of KOffice version 2.0 Alpha 7, a technology preview of the upcoming version 2.0. This version adds a lot of polish, some new features in Kexi and KPresenter and especially better support for the OpenDocument format. It is clear that the release of KOffice 2.0 with all the new technologies it brings is drawing nearer. This is mainly a technology preview for those that are interested in the new ideas and technologies of the KOffice 2 series."
Miscellaneous
Accelerator 1.0.0 released
Version 1.0.0 of Accelerator has been announced. "Accelerator is a GUI program that shows where keyboard accelerators should go in menu option texts and dialog labels. The program produces optimal results on the basis that the best accelerator is the first character, the second best is the first character of a word, the third best is any character, the worst is no accelerator at all, and no accelerator should be used more than once. With this program developers can help improve usability for users who can't use the mouse and for fast typists who don't want to use the mouse."
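The preference order the announcement describes is easy to sketch. The greedy Python version below is an illustration of the stated rules, not the program's actual algorithm (which computes an optimal assignment); the function name and structure are hypothetical.

```python
def assign_accelerators(labels):
    """Greedily assign a keyboard accelerator to each menu label,
    following the preference order described by the Accelerator
    announcement: the first character of the label, then the first
    character of a later word, then any character, else none.
    Each accelerator (case-insensitive) is used at most once."""
    used = set()
    result = {}
    for label in labels:
        candidates = []
        if label:
            candidates.append(label[0])       # best: first character
        for word in label.split()[1:]:
            candidates.append(word[0])        # next best: start of a word
        candidates.extend(label)              # then: any character at all
        chosen = None
        for ch in candidates:
            if ch.isalnum() and ch.lower() not in used:
                chosen = ch
                used.add(ch.lower())
                break
        result[label] = chosen                # None means no accelerator
    return result
```

For the labels "File", "Find" and "Format", this sketch gives 'F' to File, then falls back to 'i' for Find and 'o' for Format, since 'F' is already taken.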
Task Coach: Release 0.70.0 available (SourceForge)
Version 0.70.0 of Task Coach, a manager for personal tasks and todo lists, has been announced. Changes include: "Small feature enhancements, more translations and several bug fixes. Task Coach is now distributed under the GPLv3+."
Languages and Tools
C
GCC 4.2.4 Status Report
The May 12, 2008 edition of the GCC 4.2.4 Status Report has been published. "GCC 4.2.4 will follow in about a week in the absence of any significant problems with 4.2.4-rc1. If you believe a problem should delay the release, please file a bug in Bugzilla, mark it as a regression, note there the details of the previous 4.2 releases in which it worked and CC me on it. Anything that is not a regression from a previous 4.2 release is unlikely to delay this release."
Caml
Caml Weekly News
The May 13, 2008 edition of the Caml Weekly News is out with new articles about the Caml language. Topics include: Core godi package, GODI Search: Includes now sources and uint64lib release (again), plus help request.
Perl
This Week on perl5-porters (use Perl)
The 28 April-3 May, 2008 edition of This Week on perl5-porters is out with the latest Perl 5 news.
Python
Python 2.6a3 and 3.0a5 announced
Python versions 2.6a3 and 3.0a5 have been released. "Please note that these are alpha releases, and as such are not suitable for production environments. We continue to strive for a high degree of quality, but there are still some known problems and the feature sets have not been finalized. These alphas are being released to solicit feedback and hopefully discover bugs, as well as allowing you to determine how changes in 2.6 and 3.0 might impact you."
Pyrex 0.9.7 released
Version 0.9.7 of Pyrex has been announced, several new capabilities have been added. "Pyrex is a language for writing Python extension modules. It lets you freely mix operations on Python and C data, with all Python reference counting and error checking handled automatically."
Python-URL! - weekly Python news and links
The May 12, 2008 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
Tcl-URL! - weekly Tcl news and links
The May 8, 2008 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Tcl-URL! - weekly Tcl news and links
The May 14, 2008 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Build Tools
Paver 0.7 announced
Version 0.7 of Paver has been announced. "Paver is a "task" oriented build, distribution and deployment scripting tool. It's similar in idea to Rake, but is geared toward Python projects and takes advantage of popular Python tools and libraries. Paver can be seen as providing an easier and more cohesive way to work with a variety of proven tools. With Version 0.7, Paver is now a full stand-in for the traditional distutils- or setuptools-based setup.py. Need to perform some extra work before an sdist runs?"
IDEs
Pydev and Pydev Extensions 1.3.17 announced
Version 1.3.17 of Pydev and Pydev Extensions is out with bug fixes. "PyDev is a plugin that enables users to use Eclipse for Python and Jython development -- making Eclipse a first class Python IDE -- It comes with many goodies such as code completion, syntax highlighting, syntax analysis, refactor, debug and many others."
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Matthew Garrett on the race to idle
Matthew Garrett talks about power saving strategy in his unique manner. "Some people write software that lets you choose different power profiles depending on whether you're on AC or battery. Typically, one of the choices lets you reduce the speed of your processor when you're on battery. This is bad. It is wrong. The people who implement these programs are dangerous. Do not listen to them. Do not endorse their product and/or newsletter. Do not allow your eldest child to engage in conjugal acts with them. Doing this will reduce your battery life. It will heat up your home. It will kill baby seals. The sea will rise and your car will float away. If you are already running it, make sure that it always sets your cpufreq governor to ondemand and does not limit the frequencies in use. Failure to do so will result in me setting you on fire."
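Garrett's practical advice boils down to leaving frequency selection to the ondemand governor rather than capping speeds on battery. A minimal sketch of how that might be done through the standard Linux sysfs cpufreq interface (writing requires root; the helper takes the sysfs root as a parameter purely so the logic can be exercised without one):

```shell
#!/bin/sh
# Sketch: switch every CPU to the ondemand cpufreq governor.
# Assumes the standard sysfs layout under /sys/devices/system/cpu.
set_ondemand() {
    root="${1:-/sys/devices/system/cpu}"
    for gov in "$root"/cpu*/cpufreq/scaling_governor; do
        # Skip CPUs without cpufreq support, or when not running as root.
        [ -w "$gov" ] || continue
        echo ondemand > "$gov"
    done
}

set_ondemand   # no-op on systems without writable cpufreq files
```

Note that this deliberately does not touch scaling_min_freq or scaling_max_freq: per Garrett's point, the governor should be free to use the full frequency range so the CPU can race to idle.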
Sic Transit Gloria Laptopi
Ivan Krstić has a strongly worded essay about OLPC, education, and free software. He has a great deal to say about the history and future of the project that could only come from an insider. "The whole 'we're investing into Sugar, it'll just run on Windows' gambit is sheer nonsense. Nicholas knows quite well that Sugar won't magically become better simply by virtue of running on Windows rather than Linux. In reality, Nicholas wants to ship plain XP desktops. He's told me so. That he might possibly fund a Sugar effort to the side and pay lip service to the notion of its 'availability' as an option to purchasing countries is at best a tepid effort to avert a PR disaster."
Companies
Microsoft vies for budget laptop market with XP price cuts (ars technica)
Microsoft is offering reduced prices on Windows XP to combat the threat of Linux on low-cost ultra-mobile PCs, according to this ars technica article. "According to IDG, which obtained details about the price cuts from hardware vendors, Microsoft will offer Windows XP licenses for $26 for developing countries and $32 for the rest of the world. In order to qualify for these deep discounts, products will have to be limited to a maximum of 1GB of RAM, 10.2 inch screens, and single-core processors clocked no higher than 1GHz (though there are apparently some exceptions). Products must also not have hard drives exceeding 80 GB in capacity and cannot have touch-screen technology."
Red Hat releases software for Windows, Linux management (BetaNews)
BetaNews covers Red Hat's release of the JBoss Operations Network 2.0 management platform. "Made generally available during this week's JavaOne conference, JBoss ON 2.0 is designed for managing cross-platform application development, testing, deployment and monitoring. Its modular architecture includes components for inventory, administration, and software updates, plus an optional monitoring module. Although some developers find that the JBoss ON Software Update feature works in a similar way to Microsoft's Windows Update service, the feature is used for distributing patches and other updates to JBoss Enterprise Platform software. But the inventory module enables cataloging of IT assets spanning Windows, Linux, Sun Solaris, HP-UX, and IBM AIX, along with a range of middleware services and servers."
Interviews
AbiWord team interview (Red Hat Magazine)
Red Hat Magazine has an interview with the AbiWord team. "AbiWord just had a great 2.6 release and the developers took several hours of their spare time over a few weeks' period answering questions and providing information. Thanks to the team and especially Marc Maurer for his time and patience. We present you a detailed interview with the AbiWord team on a broad range of topics."
Q&A: Sun exec ponders OpenSolaris, Linux (ComputerWorld)
Computerworld talks with Ian Murdock at JavaOne. "What do you do at Sun? I see the OpenSolaris project seems to fall onto your plate. Initially, I was working on OpenSolaris and started Project Indiana, which culminated this week [with] the first version of the OpenSolaris binary distribution. These days, I am running the developer and community marketing organization, so I am responsible for marketing Sun's developer tools, the developer programs like Sun Developer Network and Tech Days Events, our open-source projects and communities. [Also, I do marketing for] StarOffice, OpenOffice, Network.com. So basically anything that relates to the developer community in some way, I run the marketing piece of that."
Interview with Neil Young on Music Piracy, MP3 Hell and Finding Freaks on the Web (ReadWriteWeb)
Marshall Kirkpatrick had a chance to talk to Neil Young at the JavaOne conference in San Francisco. "Here at the JavaOne conference in San Francisco Neil Young just announced that his whole life's work will be made available in a dynamically updating collection delivered on Blu-ray disk. After his Keynote announcement I was fortunate enough to participate in a small group interview with a handful of other bloggers. Young offered interesting replies to questions about Trent Reznor and music piracy, about MP3 sound quality and about the way the web enables his extensive work on electric cars." (Thanks to Norman Gaywood)
Resources
Running a small business on desktop Linux (DesktopLinux)
DesktopLinux has an article by guest author Howard Fosdick. "This paper surveys Linux's suitability for use by owners of very small businesses and the self-employed. It was written by Howard Fosdick, a self-employed database consultant who finds Linux fairly well-suited to his needs, and reckons it has saved him thousands of dollars in recent years."
Reviews
Fedora 9 and the road to KDE4 (Red Hat Magazine)
Red Hat Magazine reviews KDE 4 as seen on Fedora 9. "Those who remember the days of KDE or GNOME 2.0 won't be disappointed at the current state. Today's new audience might have different expectations, and it is unlikely the majority has the patience to deal with a major rewrite like this one. Even the Linux kernel has moved towards incremental progress over major rewrites in a development branch. The KDE project has taken a big risk, hoping to jump-start innovation. I hope they get it right. Along with the interesting acquisition of Trolltech by Nokia, the future is exciting and uncertain and that's just the way I like it."
Miscellaneous
HIEs consider move to Mirth (HealthIT)
HealthIT looks at the increasing popularity of Mirth in the health care world. "Mirth, an open source middleware solution, is gaining ground among health information exchanges. WebReach, a health information technology consultancy in Irvine, Calif., rolled out Mirth 1.0 as HL7-supporting messaging middleware in July 2006. Jon Teichrow, president of WebReach, refers to Mirth as a standards-based interface engine for transferring information among systems. The list of supported standards now includes HL7 v2 and v3, X12, EDI, XML, and the National Council for Prescription Drug Programs." (Found on LinuxMedNews).
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
European software patents via treaty?
FFII has sent out an alert warning that representatives from the U.S. and the European Union are in talks over the creation of a new patent treaty. "Parts of the proposed treaty will contain provisions on software patents, and could legalise them on both sides of the Atlantic." This would certainly not be the first time that international trade treaties have been used to get around domestic democratic roadblocks.
The GPL wins in Germany - again
Harald Welte has posted an update on the GPL-compliance case hearing held on May 8. In the end, Skype dropped all of its arguments and gave up its appeal. "This means that the previous court decision is legally binding to Skype, and we have successfully won what has probably been the most lengthy and time consuming case so far."
OWASP sponsors application security research projects
The Open Web Application Security Project has announced the award of 31 grants to application security researchers. "The Open Web Application Security Project (OWASP) has awarded 31 grants to promising application security researchers as part of the OWASP Summer of Code 2008. The results of all these projects will be free and open source for everyone."
Commercial announcements
Adaptec launches new series 2 RAID controller for Linux
Adaptec has announced its new Linux-compatible entry-level Series 2 Unified Serial RAID controller family. "The Series 2 controllers eliminate the limitations of software RAID-based hardware solutions commonly found in entry-level systems, delivering a wide range of advantages for inexpensive SATA and SAS disk and tape drive systems."
CNR.com announces support For Linux Mint
Linspire, Inc. has announced click-n-run support for the Linux Mint 4.0 operating system. "We are excited to introduce the CNR.com service to the growing Linux Mint user base," stated Larry Kettler, President & CEO of Linspire, Inc. "With CNR.com we will continue to focus on offering users the best value in both software and services available in the desktop Linux marketplace."
NnorTuX launches NTX collaboration system
NnorTuX has announced its CentOS-based NTX collaboration system. "Open-xchange is an advanced, feature-rich collaboration system that can be a challenge to install. NnorTuX wants to make Open-xchange available as an out-of-the-box solution for medium and small companies. NTX is distributed as a single ISO image; the ISO image includes all that is needed."
Wind River Joins the OpenSAF Foundation
The OpenSAF Foundation has announced that Wind River Systems, Inc. is their newest member. "As part of the OpenSAF project, Wind River joins other industry leaders to develop and create a reference implementation of high availability (HA) base platform middleware based on SA Forum specifications and SCOPE Alliance recommendations. Wind River will also support the efforts of its developers who choose to contribute time and expertise to accelerate the completion of the OpenSAF project."
SourceForge.net launches standard-setting implementation
SourceForge has announced new OpenID functionality on the SourceForge.net site. "OpenID is an open, decentralized framework for digital identity that eliminates the need for multiple usernames across different websites. SourceForge.net users can now log in with an OpenID and receive a corresponding SourceForge.net identity for use at other sites that support OpenID logins."
Zenoss announces sponsorship of Twisted
Zenoss Inc. has announced its Gold Sponsorship of the non-profit Twisted Software Foundation. ""As early as two years ago, many people still questioned the use of open source software in the enterprise," said Glyph Lefkowitz, founder of Twisted. "Forward-thinking companies like Zenoss are proving that through collaboration and community involvement open source software is not only ready for, but can excel in, an enterprise environment. The Twisted project enables companies like Zenoss to benefit from the collective input of a highly technical and savvy community." Twisted is an event-driven network-programming framework written in Python, a dynamic object-oriented programming language."
New Books
The Book of IMAP--New from No Starch Press
No Starch Press has published the book The Book of IMAP by Peer Heinlein and Peer Hartleben.
MySQL in a Nutshell, Second Edition--New from O'Reilly
O'Reilly has published the book MySQL in a Nutshell, Second Edition by Russell J. T. Dyer.
Simply Rails 2--New from SitePoint
SitePoint has published the book Simply Rails 2 by Patrick Lenz.
Education and Certification
Python Bootcamp at Big Nerd Ranch
Another Python Bootcamp will be held at Big Nerd Ranch near Atlanta, GA on June 9-13.
Meeting Minutes
Perl 6 Design Meeting Minutes (use Perl)
The minutes from the May 6, 2008 Perl 6 Design Meeting have been published. "The Perl 6 design team met by phone on 07 May 2008. Larry, Allison, Patrick, Will, Jerry, Nicholas, Jesse, and chromatic attended."
April PSF Board meeting minutes available
The meeting minutes are available for the April 14, 2008 Python Software Foundation Board meeting.
Calls for Presentations
Akademy 2008 Embedded and Mobile Day - Call for Participation (KDE.News)
KDE.News has posted a call for participation for the Akademy 2008 Embedded and Mobile Day. "The EmSys research group is hosting an "Embedded and Mobile Day" at Akademy 2008, this year in Sint-Katelijne-Waver, Belgium at Campus De Nayer. We welcome you to join the presentations and panel discussions about Open Source and Open Desktop technologies in embedded systems and mobile devices on Tuesday 12 August 2008. We are looking for your contribution in the form of presentations, papers or demos..." The submission deadline is June 16.
Upcoming Events
Linuxtag 2008 - latest information
An information update for Linuxtag 2008 has been sent out. "Please don't forget the biggest Linux event in Germany, just a few weeks away! For the second time it will be in Berlin, from 28-31.05.2008. The location is slightly different, still at Messehalle Funkturm, but this time it's Hall7 (south instead of east last year)."
Events: May 22, 2008 to July 21, 2008
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
May 19-May 23 | AFS and Kerberos Best Practices Workshop 2008 | Newark, NJ, USA |
May 20-May 23 | PGCon 2008 | Ottawa, Ontario, Canada |
May 21-May 22 | EUSecWest 2008 | London, England |
May 21-May 22 | linuxdays.ch Genève | Genève, Switzerland |
May 28-May 31 | LinuxTag 2008 where .com meets .org | Berlin, Germany |
May 29-June 1 | RailsConf 2008 | Portland, OR, USA |
May 29-May 30 | SyScan08 Hong Kong | Hong Kong, China |
May 30-May 31 | eLiberatica 2008 - The benefits of Open and Free Technologies | Bucharest, Romania |
June 2-June 5 | VON.x Europe | Amsterdam, the Netherlands |
June 3-June 4 | Nordic Nagios Meet | Stockholm, Sweden |
June 6-June 7 | Portuguese Perl Workshop | Braga, Portugal |
June 6-June 7 | European Tcl/Tk User Meeting 2008 | Strasbourg, France |
June 9-June 13 | Python Bootcamp with David Beazley | Atlanta, Georgia, USA |
June 10-June 15 | REcon 2008 | Montreal, Quebec, Canada |
June 11-June 13 | kvm developer's forum 2008 | Napa, CA, USA |
June 16-June 18 | YAPC::NA 2008 | Chicago, IL, USA |
June 17-June 22 | Liverpool Open Source City | Liverpool, England |
June 18-June 20 | Red Hat Summit 2008 | Boston, MA, USA |
June 18-June 20 | National Computer and Information Security Conference ACIS 2008 | Bogota, Colombia |
June 19-June 21 | Fedora Users and Developers Conference | Boston, MA, USA |
June 22-June 27 | 2008 USENIX Annual Technical Conference | Boston, MA, USA |
June 23-June 24 | O'Reilly Velocity Conference | San Francisco, CA, USA |
June 28-June 29 | Rockbox Euro Devcon 2008 | Berlin, Germany |
July 1-July 5 | Libre Software Meeting 2008 | Mont-de-Marsan, France |
July 3-July 4 | SyScan08 Singapore | Novotel Clarke Quay, Singapore |
July 3 | Penguin in a Box 2008: Embedded Linux Seminar | Herzelia, Israel |
July 5 | Open Tech 2008 | London, England |
July 7-July 12 | EuroPython 2008 | Vilnius, Lithuania |
July 7-July 12 | GUADEC 2008 | Istanbul, Turkey |
July 14-July 18 | PHP 5 & PostgreSQL Bootcamp at the Big Nerd Ranch | Atlanta, USA |
July 18-July 20 | RubyFringe | Toronto, Canada |
July 19 | Firebird Developers Day | Piracicaba-SP, Brazil |
July 19-July 25 | Ruby & Ruby on Rails Bootcamp at the Big Nerd Ranch | Atlanta, USA |
July 19-July 20 | LugRadio Live 2008 - UK | Wolverhampton, United Kingdom |
July 20 | OSCON PDXPUG Day | Portland, OR, USA |
If your event does not appear here, please tell us about it.
Audio and Video programs
Embedded Linux Conference videos and report
Free Electrons presents some new videos and a report from the Embedded Linux Conference. "In our report, written by Thomas Petazzoni, we tried to complement Jake Edge's excellent coverage of this event, so that LWN.net readers have new things to learn. Note that we are also releasing videos from the embedded developer room from the FOSDEM conference, which took place in Brussels last February. We also have a video from Keith Packard about X driver development."
Page editor: Forrest Cook