LWN.net Weekly Edition for April 9, 2009
ELC2009: Ubiquitous Linux
The theme of this year's Embedded Linux Conference (ELC) is "Ubiquity" and Dirk Hohndel opened the conference with a keynote on just that topic. Hohndel, Intel's chief Linux and open source technologist, looked at how widespread Linux is in consumer electronics, but also how many other, far less obvious devices it has been embedded into. In addition, he discussed some of the problems caused by vendors and manufacturers not engaging with the community and how that can lead to suboptimal devices; he also reviewed the value proposition of Linux, pointing out that the "zero-cost" is not zero and even if it were, that's not where Linux's strengths lie.
Tim Bird, who organized the conference for the CE Linux Forum (CELF), introduced
Hohndel by calling him "something of a legend in the open source
community
" for his work in the community over the last 15 years or
more. Bird recalled some advice Hohndel had given him some years ago that
the secret to having an organization that works well with open source is to
keep everything open. It seems tautological, but that is exactly what Bird
and CELF have done with great success. CELF keeps all of its information
available to everyone, not just members, and welcomes the participation of
the community. In large part, that came straight out of Hohndel's advice.
Hohndel opened his presentation by contrasting the ELC participants with those that had attended his talk at the Open Source Business Conference (OSBC) earlier this year. Based on the traditional show of hands, he noted that there were far fewer lawyers and far more people who had used Linux at ELC than at OSBC. But part of the point he was making is that even though the folks at OSBC didn't think they were using Linux, they almost certainly are—and on a daily basis.
He noted that based on his title, "Ubiquitous Linux", and the
dictionary definition, he could give the shortest keynote on record:
"Linux is everywhere, thank you very much
". He pointed out
that servers were the first commercial success for Linux, but were not
really the purpose for which Linux was created. Linus Torvalds's lack of a
desktop Unix
was what really spawned Linux. Now, though, you can hardly do anything on
the Internet without bumping into Linux.
If you want to search (Google), buy a book (Amazon), book a flight, trade
NASDAQ stocks, or participate in an auction (eBay), you are dealing with
Linux. In fact, Hohndel says, "if you can spend five minutes on the
Internet and do not run Linux, you're a genius
". He asked for a
show of hands to see how many Eee PC owners there were in the audience; he was
disappointed to only see about seven. He claimed that even the lawyers at
OSBC had more of these systems. But computers are
boring, Hohndel says, everyone knows Linux runs on computers.
With about three minutes invested in a Google search, Hohndel was able to
come up with 22 phone vendors with a Linux phone before he stopped
looking. Most of those vendors are in Asia and he was not sure why the US
was "behind the curve
". Part of it is that vendors don't talk
about Linux on their phones, he said. Half of the Motorola Razr phones run
Linux, but you wouldn't know that based on the marketing.
He went on to list some of the more common, even well-known, Linux devices
including VoIP phones, digital video recorders, camcorders, digital
cameras, set top boxes, network attached storage (NAS) controllers, and so
on. GPS devices run Linux as well: "Microsoft was kind enough to
point that out to us
". He did lament the poor support of standards
in phone browsers, though, pointing out that there is a full Linux system
underneath, so it should be relatively easy to produce a browser that runs
Javascript and so forth—but many vendors do not. That
complaint was a bit of a preview of a theme he would come to later: if the
companies would engage with the community, they would get that browser for
"free".
Hohndel then started listing things that run Linux but are far less well known, starting with MRI and CAT scan medical devices. Vehicles of all types use Linux: in avionics (airplanes), in-vehicle entertainment systems (planes, trains, and cars), and repair centers (cars). He noted that, for redundancy purposes, the avionics vendor's system had a matching implementation in Windows, which wasn't necessarily very comforting to Hohndel. He admitted that he had written kernel code along the way and wasn't completely comfortable with that code running the planes he flies on either.
Hohndel challenged the audience to see how long they could go without
interacting with a Linux system, listing the kinds of things one would have
to do without. He also related a conversation he had with an executive
from a large, unnamed, software company (perhaps located in the Pacific
Northwest) who claimed he never used Linux. By
the end of the conversation, they had come up with a dozen different Linux
systems he used on a daily basis. "I don't think the general public
realizes how much of this scary stuff they have around them", he said.
He then turned to the question of why Linux is everywhere. He noted that
"because it's free" is the "worst possible answer" but this supposed zero
cost is an answer that is frequently given by other audiences of his
talks. "If you are using Linux because it's free, you are in for a
very rude awakening".
Intel has hundreds of Linux developers, he said, but "we didn't get
the memo, we actually pay these guys". He started listing some of
the costs associated with using Linux: hardware likely doesn't come with
Linux drivers, or the drivers only work with a different version of the
kernel than the one needed to get other kernel features necessary to the
product, etc. And "then you talk to your lawyers" about
licenses and such. None of that is free.
The strength of Linux is that if you run into a problem, you can solve it
yourself or hire someone to do it for you. He contrasted that with a
proprietary solution where you pay $20,000 up front and some small per-unit
royalty, which might actually be cheaper, at least on paper, but if there
is a problem, you have no leverage with the vendor. How can you meet your
market window when you have no way to fix problems that you encounter, he
asked.
Choosing Linux is about customizability as well as security, Hohndel says.
You can control the footprint of the system because you have the source
code and can customize it as needed. Anyone who says they are going with
Linux because of zero cost has proven to Hohndel that they don't
understand what can be done with Linux.
This leads to a problem with the current crop of consumer devices that run
Linux: they don't take advantage of the strengths of the OS. Hohndel
thinks that most vendors using Linux in embedded devices are doing it
wrong. They are focused on the price to the exclusion of building a
community around the device. They don't make any money on a device they
have already sold, so they focus on the next device and ignore the idea
that just by opening things up, they could build a community that would
help them sell that next device.
Hohndel said that being open to the community will reap many benefits. New
features and functionality will be added by others to "do things you
never thought possible". He mentioned Linksys wireless routers and
the DD-WRT and OpenWRT communities that have sprung up around them.
Linksys got many things wrong in the early going, but eventually turned
that around. Companies need to recognize that there are many more smart
people outside of their company.
Intel is trying to lead by example, to some extent, with its Moblin
efforts. Intel has turned the "stewardship" over to the Linux
Foundation, but it is in no way abandoning it. According to Hohndel,
engineers have been added to Moblin and the company would like to see what
else the community can do with it. There are lots of things Intel hasn't
thought of, "but the community will, and we hope they do".
Hohndel's talk didn't cover too much in the way of new ground—much of
what he said has been bandied about before—but he tied the ubiquity
of Linux and the foot-dragging of vendors with respect to the community in
an interesting way. For a number of years, folks have been talking about
ways to get Linux into more devices of various sorts; that battle has been
won to a large extent. The next step is to bring the device manufacturers
into the community; that battle has only recently started. One senses that
Linux and the community will win that one as well.
The road to GNOME 3.0
The GNOME desktop environment made its 2.0 release in
June of 2002, and quickly established a six-month cycle between stable
releases. Now the release team has drafted a plan for GNOME 3.0,
tentatively to arrive two cycles from now in March of 2010. A few
user-visible changes are slated to appear, accompanied by far more
refinements in the dependencies, language bindings, and the structure of
what constitutes the core of GNOME. Vincent Untz sent a planning
statement to the GNOME desktop development mailing list on April 2 to
outline the big issues and the release team's plan. The discussion
is of course ongoing, but the basic idea consists of three components: new
technologies that will directly affect the user experience, structural
changes to the modules and module sets that define GNOME, and ways to promote
GNOME in hopes of growing the surrounding community.
Zeitgeist and the Shell
The two major user-facing technologies scheduled to debut in GNOME 3.0
are GNOME Shell and GNOME Zeitgeist. GNOME Shell is a new desktop layer that will handle displaying
application windows, notifications, and other objects. It is intended to
take the place of both the GNOME panel and the window manager, combining
those functions into a unified "scene graph." GNOME Shell will use the
OpenGL-based Clutter for
rendering, and is written primarily in Javascript. Owen Taylor explained
on his blog that the choice of Javascript allows a lower barrier to entry
for applet authors. GNOME Shell will be an option in the GNOME 2.28 stable
release scheduled for the fall of 2009. GNOME Zeitgeist is a non-hierarchical file management system. Rather
than finding files based on their location in the filesystem, Zeitgeist
provides a suite of alternative interfaces to use: a
last-accessed-on calendar, an easy-to-use bookmark system, tags, and
content-type filters. Other entry points are in the works, including
attached comments and location awareness (as in "files last opened when I
was in Barcelona").
Library work
Developers will probably be more interested in changes to the platform
itself, including shuffling out of old libraries, inclusion of new ones,
and possible changes to GNOME's module sets. According to Andre Klapper, several deprecated libraries will be removed
in 3.0, such as the sound server esound, the file system layer libgnomevfs,
2-D graphics library libart_lgpl, and printing library libgnomeprint. Some
new libraries will be introduced, including the aforementioned Clutter, GeoClue for location-awareness,
and libchamplain for
easy rendering of maps. Also new is the idea of "staging" level libraries:
components that are hopefully in transition towards full support in a future
GNOME release, but are still making API or ABI changes. The key example here
is GStreamer, which is widely used and enjoys tremendous support from the
GNOME community, but is still undergoing rapid development. The project also
hopes to encourage
developers to increase the usage of non-GNOME dependencies like D-Bus and Avahi. More subtle changes are likely to come in the way GNOME is packaged.
The current and long-standing scheme divides the code into "module sets,"
as Untz and Lucas Rocha explained. Each set contains libraries and
applications for a particular usage profile: the desktop, mobile, developer
tools, platform bindings, and so on. One concern is that the current
module sets make unnecessary divisions. Untz noted that the "developer
platform" modules set (which contains C bindings) should really be combined
with the "platform bindings" module set (which contains bindings for other
languages); keeping them divided adds to the perception that C is blessed,
but other languages are not. Another problem is that the module sets have slowly become too rigid
over the course of the 2.x releases, no longer encompassing all of the
applications in the GNOME community, and perhaps unintentionally
communicating that some applications are "official" while others are
not. Untz and Rocha cited several examples where two or more excellent
applications of the same type seem to compete, including Rhythmbox and Banshee, Empathy and Pidgin, and gThumb and F-Spot. If the project selects one for
inclusion in the desktop module set, it inadvertently slights the other,
which is not the intention. "My personal opinion is that we should
have a very small 'desktop core' module set and that's what we build and
package 'officially,' then we have a separate process of certificating apps
as 'GNOME-compliant,'" said Rocha. "There's also the fact that we can create
a 'brand' that can easily be recognized by users: if it's a GNOME app, then
it's good," added Untz. Both agreed that the project should take steps to be
more welcoming to outside projects and contributors.
Promotion
The third focal point for 3.0 is promoting GNOME better. To be sure,
widening the community through reorganizing the module sets will draw in
more developers, but as Untz observed, developers are only one of three
target audiences. The others are users and vendors — a group that
includes Linux distributions and mobile device makers. In recent years, Untz
felt that the project has not had a "crystal clear message." It has a good
message with respect to usability, accessibility, and internationalization,
but is not as coherent at presenting the GNOME desktop platform as a whole.
Exactly how to proceed is not as clear as identifying the issue. GNOME
is a large project, but it is in the middle of the stack — neither a
single application that a user can try out with a simple package install,
nor a full operating system comprising a complete solution. GNOME Foundation executive director
Stormy Peters observed that the vast majority of GNOME users use the
environment courtesy of an install-time option from their Linux
distribution. "That said, if a user has a question about a GNOME
feature or application and search for it on the web, they are likely to end
up either on the GNOME pages or a random distributor site (not necessarily
the distribution they are using.) We want to make sure we are ready for
those people."
The promotional effort will entail revamping the gnome.org web site, and
a concerted effort from the GNOME marketing team. Untz summarized the
challenge, "the hardest part (setting goals) is done. Now, it's
about achieving them, and even doing more than that if we can."
Just the beginning
With two full release cycles in which to work, the discussion is just
the beginning. There are several additional changes that could make their
way into GNOME 3.0, including replacing the aging gconf configuration system
with dconf, and an emphasis on
"social desktop" technologies like the Telepathy framework. Rocha and Ken VanDine both observed that they hope the 3.0 cycle widens
the GNOME community. Rocha said he would like to see more space for
experimentation, and VanDine added that he would like to see GNOME project
infrastructure (such as git and bugzilla access) opened up to individual
developers, so that they could create their own branches, propose merges,
and participate. However it plays out, Peters is confident it will reflect the wishes of
the GNOME community. "The interesting thing from my perspective is
how this is done in an open fashion. Before plans are even finalized we are
reaching out to the GNOME community, our partners, the distributions and
even users to hear everyone's input and to involve them in the process as
much as they wish to be. It's likely that we'll hear a lot of dissenting
opinions in the process — that's part of a good discussion —
but the end product will be even better for it." The members of the
release team are tracking the process in public; you can
follow the roadmap as it develops from the GNOME project's wiki.
Shortening the rope
There are many things which could be said to be a part of the Unix
philosophy. One of those, certainly, is that the operating system should
stay out of the user's way to the greatest extent possible, even if said
user is intent on doing something harmful. There is a classic quote,
attributed to Eric Allman, about Unix giving its users just enough rope to
hang themselves.
Anybody who has administered Unix-like systems for long enough has probably
ended up swinging from that rope at least once. So one would think that
there might be support for work which reduces the potential for
self-hanging. And indeed there is, but that doesn't mean that all such
changes are welcome.
Readers with a lot of spare time and a desire to wander into email
flamewars could probably occupy themselves with this
fedora-devel thread for quite some time. It seems that the X.org developers recently
decided that the three-finger salute (alt-control-backspace) should no longer, by
default, immediately kill the X server. The reasoning behind this change
is clear enough: it can be really irritating to hit the wrong key sequence
and watch all of one's work evaporate before one's eyes. Besides, the
environmental costs of replacing all of those thrown-across-the-room
keyboards are increasingly hard to justify.
Unfortunately for the polar bear population, the change inspired a rather
severe storm of flying keyboards in its own right. A certain Gerry Reno complained on fedora-devel that Fedora should
have overridden X.org's decision regarding this key sequence. Unsatisfied
with the hundreds of responses found there, he took the discussion to the X.org development
list, wherein he claimed:
So, it seems, we have a conspiracy of Emacs users working to deprive the
wider user community of a useful tool. Daniel Stone, the developer who
committed this change, denies
this charge:
(It's worth noting that the Fedora Weekly Webcomic blames
a different conspiracy for this change).
In truth, it's clear that a number of reasonably capable users have, at
times, lost work as a result of hitting this key sequence by mistake.
Enough of those users complained that the X.org developers looked at the
issue, and, according to Matthew Garrett,
"Everyone involved agreed that not having a keystroke that caused
immediate data loss was a sensible idea." So, while many of the
world's ills can legitimately be blamed on Emacs users, that would not
appear to be the case this time around.
A reversal of this decision is unlikely. But the development community
would still like to accommodate users who feel the need for the full length
of rope. Said users can reverse the default in their xorg.conf file now,
of course. The openSUSE approach has been to require that the sequence be
hit twice before bringing the world to an end, but it's not clear that
other distributors will follow suit. There has been discussion of moving
the action to a key sequence which is harder to hit by accident. There may
eventually be a per-user configuration option to enable this behavior as
well, though that will require some X server changes first.
Meanwhile, Ubuntu developers have cut
off a classic piece of Unix rope by boldly disabling the
"rm -rf /" command. It seems that the rm
command has a --preserve-root option which prevents the removal
of the root directory. In Ubuntu, this option was not enabled by default,
leading to the bug filed by a concerned user. The distribution's
developers agreed that the ability to remove the root directory was not a
particularly useful feature, and, additionally, that issuing an
"rm -rf /" command was easier than one might expect -
poorly-written scripts are evidently a common source of that kind of
mistake. So, in October, 2008, they made
--preserve-root the default for the Intrepid and Hardy releases.
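The check itself is simple enough. As a rough illustration of the kind of test --preserve-root performs (GNU rm is, of course, written in C and implements this differently), a Python wrapper might do something like:

    # Rough illustration only; not coreutils' actual implementation.
    import sys

    def refuse_root(path):
        """Refuse a recursive removal of the literal root directory. A
        naive literal check like this one is exactly why variants such
        as "/." or "/*" still get through."""
        if path == "/":
            sys.exit("refusing to remove '/' recursively; "
                     "override this failsafe only if you really mean it")

    if __name__ == "__main__":
        for target in sys.argv[1:]:
            refuse_root(target)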
Some months later, we have started to see complaints like this:
and this:
Those who are concerned about this change have more to worry about: it
would appear that Fedora has followed suit. Even so, the rope has not been
shortened by any great length; those wishing to hang themselves can use any
of a number of alternatives, including:
rm -rf /.
rm -rf ~
rm -rf *
and so on. And, of course, the --no-preserve-root option remains
available for those who can't think of any other way to destroy their
systems.
But is this contrary to the Unix philosophy? If so, one should certainly
complain about the much more obnoxious
.bashrc entries that Fedora has been inflicting on the root account for years:
alias rm='rm -i'
That is the sort of change that trains users to blindly agree to
anything the system asks; your editor (who immediately removes such things)
feels that overall user safety is not improved by asking "really do this?"
questions all the time.
The truth of the matter, though, is that Linux has moved beyond the "hardy
pioneers on the dangerous frontier" stage. The simple ability to hang
oneself is of limited value even to pioneers; it is positively detrimental to
those who come after. It is not surprising that developers and
distributors are trying to disarm some of the most surprising and least
useful booby traps in the system. That process is likely to continue. But
this is still Linux, so those of us who feel the desire will always be able
to break out the full length of rope; we'll just have to remove the warning
label first.
Openmoko hits the wall
Recently, Openmoko CEO Sean
Moss-Pultz announced at OpenExpo in Bern that the company was reducing
staff and postponing the development of the GTA03, its first
consumer-oriented phone, in favor of an undefined Project B. Available as a
YouTube video, the
announcement was a confirmation of recent rumors that the company was in
trouble. In fact, many concluded that the announcement meant the end of the
company, or at least the beginning of the end.
Either conclusion is premature, but the announcement does highlight the
problems Openmoko faces as a business, as well as its uncertain
future. These problems are evident not only in the company's history, but
in Moss-Pultz's announcement and the company's web site as well.
Openmoko began in 2006 as a project within First International Computer
(FIC), a Taiwanese computer manufacturer. Soon spun off into a separate
company, Openmoko became the center of a small but active community, due largely to its intention
of using only free software and free hardware. Its popularity was helped by
the fact that, prior to the announcement of Android in November 2007, it
was the first effort to introduce free software into the mobile phone
market.
The company and community began work on Openmoko Linux and the hardware to
run it on. As a development community, Openmoko has had some success, with
GNU/Linux, FreeBSD, and L4 kernels ported to its devices, as well as
versions of the Google Android operating system and a number of utilities
and games.
However, as a commercial manufacturer, Openmoko has struggled continuously
to coordinate its software and hardware in all its products, up to and
including the GTA03. As Moss-Pultz explained
in February 2007, "each hardware revision takes at least one month of
time. Each month without stable hardware means serious delays for
software. One time we received the wrong memory from our vendors and we
failed to catch this before production. Another time some key components
ran out of supply."
Despite such difficulties, in July 2007, the company produced the Neo 1973,
a development phone, following it with the Neo FreeRunner in June 2008.
According to
Moss-Pultz in his announcement last week, the Neo sold 3,000 units, and the
FreeRunner 10,000 units. These are modest numbers that, more than anything
else, indicate just how small a player Openmoko is.
Openmoko's progress has not been helped by the countless complaints about,
and problems with, the FreeRunner, all of which also affect the development of
its successor, the GTA03. For one thing, the phone does not support 3G standards for
telecommunication hardware. In his announcement, Moss-Pultz explains this
lack as being due partly to the difficulty of implementing 3G without using
proprietary software and hardware, and partly due to the fact that doing so
would increase the cost by at least two-thirds. But, although these are
sound reasons, without 3G support, Openmoko's products are inevitably going
to be seen by customers as inferior to other mobile devices.
Moreover, if you look through the Openmoko community mailing list over
the last few months, very few aspects of the FreeRunner have escaped being
mentioned in bug reports.
Many of these bugs have been collected on the Neo
FreeRunner Hardware Issues page on the community wiki. Active bugs
include poor audio quality, the inability to boot without a charger, the
corruption of the SD card's partition table when using the suspend
function, incompatibility with SIM cards, problems with the GPS feature,
unreliable reporting of the battery charge, and short battery life —
and this is far from a complete list. Workarounds exist for some of these
problems, but the disheartening cumulative effect is suggested by the
desperate-sounding plea near the top of the page: "Please DON'T PANIC
when reading this page. Please give Openmoko employees time to investigate
these issues and to develop a solution." Even making allowances for
the fact that the FreeRunner is not intended for general consumers, such
problems give it the appearance of having been released before it was
ready.
With such a history, nobody should be surprised that the company has
recently seen an exodus
of many of its employees. To what extent these departures were voluntary or
layoffs is uncertain.
But, either way, they increase the difficulties for the company. Harald
Welte, the former Lead System Architect at OpenMoko, wrote in his
blog, "There used to be really great engineers at Openmoko some time
ago, but at least a number of good, senior folks are no longer working
there at this point in time, or are working on a much smaller scope for
Openmoko Inc." In addition, Welte suggested that, by not making any
public statements about the departure of key staff, Openmoko is
contributing to the rumors and uncertainties that already surround
it. Increasingly, the impression of a struggling company is becoming
impossible to avoid.
The business environment and decisions
In the video, Moss-Pultz talks candidly about
the challenges that Openmoko faces and the mistakes it has made.
Some of the challenges are ones that no company can do much about. For
example,
Moss-Pultz began by explaining that, while small companies or individuals
can disrupt software markets, the expense of developing new computer chips
and the difficulty of finding a place to manufacture them means that
existing companies have a practical monopoly on hardware.
Later in the video, he revealed that, although Openmoko had enough monthly
sales to break even by the end of 2008, the recession has caused a serious
decline in sales in 2009. Having not anticipated this downturn, the company
is left with a large inventory of unused hardware components, a
depreciating investment that can only be recouped by sales. Meanwhile,
extra inventory may incur storage costs if the company is like many
high-tech startups and lacks its own warehouses.
At the same time, Moss-Pultz acknowledged that the company has made
tactical errors. He suggested that the company has been slow to realize
that "you can't compile hardware" — by which he
apparently meant that fixing errors in hardware is much more expensive and
time-consuming than debugging software. In addition, Moss-Pultz said that
the company had tried to develop too many markets at the same time. It has
also attempted to manage direct world sales by itself, rather than going
through an established distributor, an effort that has caused it endless
time and effort in dealing with customs duties and varying regulations.
More specifically, Moss-Pultz pointed to two direct mistakes. First,
Openmoko could have sold more than 3000 Neos if it had not been overly
conservative in ordering components (a situation that might lead observers
to wonder whether the overstocking of the FreeRunner was over-compensation
for this earlier error). Second, in wishing to honor its commitments, the
company spent months directing what Moss-Pultz estimates as 90% of its
resources to an unspecified single contract. This situation was a
particular drain on resources because it occurred after Openmoko became a
separate company and could no longer draw upon the resources of FIC.
All these events, both external and internal, have taken place against a
background of both too little and too much publicity, according to
Moss-Pultz. On the one hand, outside of the free software community,
Openmoko remains little known among mobile manufacturers and
distributors. This admission suggests that the company has been doing
little or no advertising in its market niche. On the other hand, within the
free software community, Openmoko was widely hailed as "the iPhone-killer,"
even after the company tried to explain that it was not trying to compete
against Apple's popular device. Such a view created inflated expectations
that a company of less than sixty employees could have no hope of matching,
even if it made no mis-steps. It may also have pressured Openmoko
executives into making hasty decisions, although Moss-Pultz did not mention
such a possibility.
The web site evidence
Listening to Moss-Pultz, what seems clear is that, for all the surrounding
buzz, Openmoko has suffered largely from an inexperienced team that was
learning as it staggered towards market. This impression is confirmed by
the company web site, which is
surprisingly sparse and unprofessional for a tech company shipping
products.
Even by the sometimes eccentric standards of free software, the site seems
strangely incomplete. The site's front page has no explanation of what the
company does. Even the About page contains
only the translation of a Chinese poem and a few flowery generalities, and
no mention whatsoever of the management team.
Nor can you buy directly from the company. Instead, clicking on the Store
link in the menu takes you to the distribution page, where you find that
Openmoko has only six distributors in the United States and Canada, fifteen
in Europe, and one in Asia. Many of these distributors are obviously
minor. That brings up another problem for Openmoko: As countless other
startups have found, you cannot get major distributors to carry your
products unless you have a track record, but you can hardly hope to get a
track record unless the major distributors carry your products.
The point is, the company web site creates an impression of a company that
is not ready to do business. By comparison, the Openmoko community page is far
more detailed, which suggests that Openmoko executives are far more
comfortable in a community of developers than in a board-room. In this
light, the company's problems and mistakes seem completely understandable.
Next, Plan B
The necessity for Moss-Pultz's announcement was spelled out in an email
to the Openmoko community by vice-president of marketing Steve Mosher. The
company can only make money through completing development on either the
GTA03 or the mysterious Project B, but cannot afford to complete both just
now. Given that completing the GTA03 would cost three times as much, the
sensible choice is to focus first on Project B.
This logic seems clear enough. Yet it would be clearer still if anyone gave
an indication of what Project B actually was. All Moss-Pultz said on the
subject in his video is that "I always have a backup plan, no matter
what I do" and that this focus is a "short-term
adjustment." Observers have suggested that Project B is a "non-mobile
/ non-smartphone" or, alternatively, that it will involve using Android
to reduce development costs. But the truth is that no one has any concrete
information.
No matter what Project B turns out to be, whether it can turn Openmoko
around remains to be seen. Welte commented
that, "Over time, I have started to have severe doubts whether
Openmoko Inc. is really the most productive and/or best environment to do
this kind of development. Priorities and directions changed a lot."
But at least with this announcement, he added, "I no longer have to
hope that Openmoko Inc. gets their act together to actually get an (to my
standards) acceptable product out into the market." Possibly, Welte
can be dismissed as a disgruntled former employee, but, by the time that a
company makes the sorts of cuts that Openmoko has made, the chances of
reversing the slide into bankruptcy seem small.
Certainly, the attempts by Openmoko executives to reassure everyone have
not convinced most observers. Many, such as Nilay Patel at engadget, wondered
whether the news indicates a lack of a market for free hardware devices in the
mobile space.
However, such speculations are based on too little evidence. At least two-thirds
of all startups fail within ten years, and the rate is generally
assumed to be even higher in technology companies. Openmoko is only a
single company in the mobile space, and, if you look at Openmoko's past
record, nothing indicates that the company's problems are due specifically
to its business plan. A simpler explanation is that the problems are due
simply to inexperience and poor decisions. In the end, Openmoko's future
depends far more on its executives' ability to learn from past mistakes
than on their choice of ideals.
Security
Attacks on package managers
It is common for users to use package managers to update their system without considering the security implications. Unfortunately, the security of the package manager matters a great deal because it runs as root and a poor implementation might lead to the installation of insecure or malicious packages. Last year, Justin Samuel of the University of Arizona and Justin Cappos of the University of Washington did extensive research on the vulnerabilities of the most common package managers for Linux. In the February 2009 issue of the USENIX magazine ;login:, they published an overview of their findings [PDF]. Although none of the attack methods and vulnerabilities they talk about are particularly new or surprising, the issues are serious enough to merit some attention.
Essentially, there is just one underlying method behind all vulnerabilities in package managers: an attacker, rather than a legitimate repository of the distribution, responds to a client when the package manager downloads files. The simplest way to do this is, of course, to become a public mirror for a distribution's repositories. When an attacker runs a mirror, he can reply to client requests with malicious content. Thus one question every user has to ask is: do you trust the mirrors you use? There was a lot of discussion about the security of mirrors last year and some responses by Linux distributions that were mentioned in the study.
The researchers looked at APT (used by Debian and Ubuntu), APT-RPM (used by PCLinuxOS), Pacman (used by Arch Linux), Portage (used by Gentoo), Slaktool (used by Slackware), Stork (a research project of the University of Arizona), URPMI (used by Mandriva), YaST (used by openSUSE and SUSE Linux Enterprise) and YUM (used by Fedora, Red Hat Enterprise Linux and CentOS). They didn't look at BSD systems for this research, although they are currently investigating the Portsnap tool for securely distributing the FreeBSD ports tree.
Cryptographic signatures
A vulnerability in a package manager can mean two things: either it installs a package the user doesn't want to install, such as an older, vulnerable version of the package or a dependency that isn't needed but hosts a Trojan horse; or it doesn't install a package the user wants to install, such as a security update. To make these events impossible, package managers can use cryptographic signatures. The differences in package managers largely come down to what is actually signed: the root metadata, the metadata that describes the packages, or the packages themselves.
The root metadata is the first file that a package manager downloads, which describes the contents and the layout of the repository, usually with the names and hashes of the files that have detailed information. If the root metadata is signed, the package manager can check if someone is tricking it into downloading a fake root metadata file. By verifying the integrity of the root metadata and then verifying the secure hashes of each downloaded file, the package manager is able to check the integrity of the information. This model is used by APT, APT-RPM and YaST.
Another approach is placing signatures on the files that directly contain the metadata of the packages, as Portage does. A third way is to not have signatures on the metadata, but on each individual package. Package managers like YUM and URPMI download a package and then check the signature before installing it. But even if the package manager uses one of these three types of cryptographic signatures, this doesn't mean it is safe.
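To make the root-metadata approach concrete, here is a minimal sketch, in Python, of how a client might verify a signed root metadata file and then check the hash of everything else it downloads. It is purely illustrative: the repository URL, the file names, the JSON layout, and the use of a detached GPG signature are assumptions made for the example, not how any of the package managers above actually work.

    # Illustrative only: file layout, signature scheme, and URLs are assumptions.
    import hashlib
    import json
    import subprocess
    import urllib.request

    REPO = "https://repo.example.org"          # hypothetical repository

    def fetch(path):
        """Download a repository file and return its bytes."""
        with urllib.request.urlopen(f"{REPO}/{path}") as resp:
            return resp.read()

    def verify_root_metadata():
        """Check the detached GPG signature on the root metadata, then
        return the parsed mapping of file names to expected SHA-256 hashes."""
        root = fetch("root.json")              # hypothetical file name
        sig = fetch("root.json.sig")
        with open("/tmp/root.json", "wb") as f:
            f.write(root)
        with open("/tmp/root.json.sig", "wb") as f:
            f.write(sig)
        # gpg exits non-zero if the signature does not match a trusted key.
        subprocess.run(["gpg", "--verify", "/tmp/root.json.sig",
                        "/tmp/root.json"], check=True)
        return json.loads(root)

    def fetch_verified(path, expected_sha256):
        """Download a file and refuse it unless its hash matches the
        signed root metadata."""
        data = fetch(path)
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            raise RuntimeError(f"hash mismatch for {path}, refusing to use it")
        return data

    meta = verify_root_metadata()
    package_index = fetch_verified("packages.json", meta["packages.json"])

The important property is the chain of trust: only the root metadata needs a signature, and every other file is accepted only if its hash appears in that signed file.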
The vulnerabilities
The possible vulnerabilities of package managers fall into three main categories: replay and freeze attacks, metadata manipulation attacks and denial-of-service attacks. A replay attack comes down to the following: when a package manager requests signed metadata, a malicious party responds with an old signed file. This is possible without the need to compromise the signing key, because once a file is signed, it is always trusted by clients. This works even after vulnerabilities are discovered in a package that was once considered safe: the attacker just has to respond with old metadata that lists package versions the attacker knows how to exploit. A freeze attack works in a similar way: an attacker keeps giving the client the same version of the metadata, essentially "freezing" the metadata at one point in time to prevent updates to vulnerable packages.
With a replay attack, the attacker is still rather limited in what he can do: he can only respond with packages from a single point in time. This means that if two packages were vulnerable at different times, the attacker cannot make both vulnerable versions available to the client with a replay attack. If the package manager does not use signed metadata, an attacker doesn't have to bother with a replay attack: the system is at risk because the attacker can just make up his own metadata. In one attack, he can make as many vulnerable packages available to the client as he wants, and thus he can increase significantly the chances of a client installing and using a vulnerable package.
If an attacker has the chance to manipulate metadata, he can also lie to the clients about what a package requires. If a package has a vulnerability the attacker knows how to exploit, he can provide metadata that says every package depends on this package. This ensures that the client installs it when installing any other package. Only a user with intimate knowledge of the packages and their dependencies will spot this.
The last category of vulnerabilities is endless data attacks, which are essentially a form of denial of service. This attack is dead simple: the malicious party responds to a client request (metadata or a package) with an endless stream of data. Possible results are filling up the partition or exhausting memory.
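Protecting against an endless data attack mostly means never trusting the other end to stop sending: read in bounded chunks and give up once a hard limit is exceeded. A minimal sketch, assuming an arbitrary 100MB ceiling:

    # Illustrative size-capped download; the 100MB limit is an arbitrary
    # assumption, not a value taken from any real package manager.
    import urllib.request

    MAX_BYTES = 100 * 1024 * 1024

    def bounded_fetch(url, limit=MAX_BYTES):
        """Read at most 'limit' bytes from 'url'; abort if the server
        keeps sending data beyond that."""
        chunks = []
        received = 0
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                received += len(chunk)
                if received > limit:
                    raise RuntimeError(f"{url} exceeded {limit} bytes; "
                                       "possible endless data attack")
                chunks.append(chunk)
        return b"".join(chunks)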
What are the distributions doing about it?
Samuel and Cappos give a good overview of the security of the different package managers and distributions. Their paper outlines the situation from last year, so it's good to look at what has been done in the meantime. At one end of the spectrum, Slaktool on Slackware, Pacman on Arch Linux and Ports in the BSD world don't make any pretense of being secure. Justin Samuel said that he has the impression that the Slackware community is not currently focused on securing their package manager, but Arch Linux has an open bug related to adding support for signed packages and there has been community discussion following the publication of their research from last year.
Then there are the package managers that don't use signatures on root metadata, such as YUM, URPMI and Portage. These are all vulnerable to metadata manipulation attacks and hence not very secure. Since the study YUM developers have added the ability to use signed root metadata, but Fedora 10 is not using it yet and the mirrors don't contain root metadata signatures. The YUM developers are working on some of the other security issues, but it's not yet clear whether the changes will be part of Fedora 11.
Gentoo's Portage developers have also begun to address these vulnerabilities and are in the planning stage for adding a signed root metadata file to Portage. Gentoo Linux Enhancement Proposal 58 discusses the addition of what Gentoo calls a MetaManifest. The proposal is listed as having Draft status.
During the research of Samuel and Cappos, all package managers were vulnerable to replay and freeze attacks, but package managers that signed either the root metadata or package metadata don't require many changes to address this issue. This includes YaST and APT. Debian's APT developers have begun planning the necessary changes. A first attempt at a solution to replay attacks was added to experimental, and will probably be implemented in an upcoming point release of Lenny.
The endless data attacks also affected all package managers, and none has solved this at the moment. Samuel and Cappos have implemented protections against all of the attacks mentioned for Stork, their own package management system.
The authors warn that using a secure package manager doesn't mean that the distribution is using these security measures. One notable example is PCLinuxOS, which uses APT-RPM but doesn't make use of its support for root metadata signatures. And although the YUM developers have responded quickly by adding support for signed root metadata, distributions using YUM do not yet sign this metadata.
The verdict: use an enterprise Linux distribution
The best way to stay secure is to choose a distribution that takes these vulnerabilities seriously. The findings of the study show, unsurprisingly, that using an enterprise Linux distribution is a serious security advantage: SUSE Linux Enterprise and Red Hat Enterprise Linux are not vulnerable to these attacks, not because they have specifically protected against them, but because of their use of SSL (which blocks man-in-the-middle attacks if correctly implemented) and because they do not use public mirrors.
The authors also praise openSUSE for offering almost the same level of protection as the enterprise distributions. According to Justin Samuel, openSUSE is the most secure community distribution because it is only vulnerable to man-in-the-middle attacks, not to malicious mirrors. YaST supports expiry of metadata in openSUSE 11.1, but the metadata in the official update repository currently has no expiry time set. According to Ludwig Nussel from the openSUSE security team, this will be fixed soon.
The authors have an accompanying web site with more background about package manager security and access to more papers.
New vulnerabilities
bugzilla: cross-site request forgery
| Package(s): | bugzilla | CVE #(s): | CVE-2009-1213 | ||||||||||||
| Created: | April 7, 2009 | Updated: | June 4, 2010 | ||||||||||||
| Description: | From the CVE entry: Cross-site request forgery (CSRF) vulnerability in attachment.cgi in Bugzilla 3.2 before 3.2.3, 3.3 before 3.3.4, and earlier versions allows remote attackers to hijack the authentication of arbitrary users for requests that use attachment editing. | ||||||||||||||
clamav: denial of service
| Package(s): | clamav | CVE #(s): | |||||
| Created: | April 8, 2009 | Updated: | April 8, 2009 | ||||
| Description: | From the Ubuntu advisory: It was discovered that ClamAV did not properly verify its input when processing TAR archives. A remote attacker could send a specially crafted TAR file and cause a denial of service via infinite loop. | ||||||
device-mapper-multipath: incorrect permissions
| Package(s): | device-mapper-multipath | CVE #(s): | CVE-2009-0115 | ||||||||||||||||||||||||
| Created: | April 8, 2009 | Updated: | June 2, 2010 | ||||||||||||||||||||||||
| Description: | From the Red Hat advisory: It was discovered that the multipathd daemon set incorrect permissions on the socket used to communicate with command line clients. An unprivileged, local user could use this flaw to send commands to multipathd, resulting in access disruptions to storage devices accessible via multiple paths and, possibly, file system corruption on these devices. | ||||||||||||||||||||||||||
horde3: multiple vulnerabilities
| Package(s): | horde3 | CVE #(s): | CVE-2009-0932 CVE-2008-3330 CVE-2008-5917 | ||||||||||||||||
| Created: | April 8, 2009 | Updated: | April 1, 2010 | ||||||||||||||||
| Description: | From the Debian advisory:
Gunnar Wrobel discovered a directory traversal vulnerability, which allows attackers to include and execute arbitrary local files via the driver parameter in Horde_Image. CVE-2009-0932 It was discovered that an attacker could perform a cross-site scripting attack via the contact name, which allows attackers to inject arbitrary html code. This requires that the attacker has access to create contacts. CVE-2008-3330 It was discovered that the horde XSS filter is prone to a cross-site scripting attack, which allows attackers to inject arbitrary html code. This is only exploitable when Internet Explorer is used. CVE-2008-5917 | ||||||||||||||||||
kernel: denial of service
| Package(s): | kernel | CVE #(s): | CVE-2009-1046 | ||||||||||||||||||||||||
| Created: | April 3, 2009 | Updated: | August 20, 2009 | ||||||||||||||||||||||||
| Description: | The console selection feature in the Linux kernel, when the UTF-8 console is used, allows physically proximate attackers to cause a denial of service (memory corruption) by selecting a small number of 3-byte UTF-8 characters, which triggers an off-by-two memory error. It is not clear if this can be exploited at all. |
kernel: multiple vulnerabilities
| Package(s): | kernel, linux, linux-source-2.6.22 | CVE #(s): | CVE-2008-4307 CVE-2008-6107 CVE-2009-0605 CVE-2009-0834 CVE-2009-0835 CVE-2009-0859 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| Created: | April 7, 2009 | Updated: | February 3, 2010 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| Description: | From the Ubuntu advisory:
NFS did not correctly handle races between fcntl and interrupts. A local attacker on an NFS mount could consume unlimited kernel memory, leading to a denial of service. Ubuntu 8.10 was not affected. (CVE-2008-4307) Sparc syscalls did not correctly check mmap regions. A local attacker could cause a system panic, leading to a denial of service. Ubuntu 8.10 was not affected. (CVE-2008-6107) The page fault handler could consume stack memory. A local attacker could exploit this to crash the system or gain root privileges with a Kprobe registered. Only Ubuntu 8.10 was affected. (CVE-2009-0605) The syscall interface did not correctly validate parameters when crossing the 64-bit/32-bit boundary. A local attacker could bypass certain syscall restricts via crafted syscalls. (CVE-2009-0834, CVE-2009-0835) The shared memory subsystem did not correctly handle certain shmctl calls when CONFIG_SHMEM was disabled. Ubuntu kernels were not vulnerable, since CONFIG_SHMEM is enabled by default. (CVE-2009-0859) | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
lcms, java: denial of service
| Package(s): | lcms java | CVE #(s): | CVE-2009-0793 CVE-2009-0794 | ||||||||||||||||||||||||||||||||||||||||||||||||||||
| Created: | April 8, 2009 | Updated: | January 12, 2011 | ||||||||||||||||||||||||||||||||||||||||||||||||||||
| Description: | From the Red Hat advisory: A null pointer dereference flaw was found in LittleCMS. An application using color profiles could crash while converting a specially-crafted image file. | ||||||||||||||||||||||||||||||||||||||||||||||||||||||
mapserver: multiple vulnerabilities
| Package(s): | mapserver | CVE #(s): | CVE-2009-0839 CVE-2009-0840 CVE-2009-0841 CVE-2009-0842 CVE-2009-0843 CVE-2009-1176 CVE-2009-1177 | ||||||||||||||||||||
| Created: | April 7, 2009 | Updated: | October 23, 2009 | ||||||||||||||||||||
| Description: | From the Red Hat bugzilla:
Stack-based buffer overflow in mapserv.c in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2, when the server has a map with a long IMAGEPATH or NAME attribute, allows remote attackers to execute arbitrary code via a crafted id parameter in a query action. CVE-2009-0839 Heap-based buffer underflow in the readPostBody function in cgiutil.c in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2 allows remote attackers to have an unknown impact via a negative value in the Content-Length HTTP header. CVE-2009-0840 Directory traversal vulnerability in mapserv.c in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2, when running on Windows with Cygwin, allows remote attackers to create arbitrary files via a .. (dot dot) in the id parameter. CVE-2009-0841 mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2 allows remote attackers to read arbitrary invalid .map files via a full pathname in the map parameter, which triggers the display of partial file contents within an error message, as demonstrated by a /tmp/sekrut.map symlink. CVE-2009-0842 The msLoadQuery function in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2 allows remote attackers to determine the existence of arbitrary files via a full pathname in the queryfile parameter, which triggers different error messages depending on whether this pathname exists. CVE-2009-0843 mapserv.c in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2 does not ensure that the string holding the id parameter ends in a '\0' character, which allows remote attackers to conduct buffer-overflow attacks or have unspecified other impact via a long id parameter in a query action. CVE-2009-1176 Multiple stack-based buffer overflows in maptemplate.c in mapserv in MapServer 4.x before 4.10.4 and 5.x before 5.2.2 have unknown impact and remote attack vectors. CVE-2009-1177 | ||||||||||||||||||||||
moodle: arbitrary file access
| Package(s): | moodle | CVE #(s): | CVE-2009-1171 | ||||||||||||||||||||||||
| Created: | April 2, 2009 | Updated: | June 25, 2009 | ||||||||||||||||||||||||
| Description: | moodle can allow access to arbitrary files. From the National vulnerability database: The TeX filter in Moodle 1.6 before 1.6.9+, 1.7 before 1.7.7+, 1.8 before 1.8.9, and 1.9 before 1.9.5 allows user-assisted attackers to read arbitrary files via an input command in a "$$" sequence, which causes LaTeX to include the contents of the file. | ||||||||||||||||||||||||||
openfire: multiple vulnerabilities
| Package(s): | openfire | CVE #(s): | CVE-2009-0496 CVE-2009-0497 CVE-2008-6508 CVE-2008-6509 CVE-2008-6510 CVE-2008-6511 | ||||
| Created: | April 3, 2009 | Updated: | April 8, 2009 | ||||
| Description: | From the Gentoo advisory:
Two vulnerabilities have been reported by Federico Muttis, from CORE IMPACT's Exploit Writing Team: * Multiple missing or incomplete input validations in several .jsps (CVE-2009-0496). * Incorrect input validation of the "log" parameter in log.jsp (CVE-2009-0497). Multiple vulnerabilities have been reported by Andreas Kurtz: * Erroneous built-in exceptions to input validation in login.jsp (CVE-2008-6508). * Unsanitized user input to the "type" parameter in sipark-log-summary.jsp used in SQL statement. (CVE-2008-6509) * A Cross-Site-Scripting vulnerability due to unsanitized input to the "url" parameter. (CVE-2008-6510, CVE-2008-6511) | ||||||
openssl: several vulnerabilities
| Package(s): | openssl | CVE #(s): | CVE-2009-0789 CVE-2009-0591 | ||||||||||||||||
| Created: | April 8, 2009 | Updated: | July 27, 2011 | ||||||||||||||||
| Description: | From the CVE entries:
OpenSSL before 0.9.8k on WIN64 and certain other platforms does not properly handle a malformed ASN.1 structure, which allows remote attackers to cause a denial of service (invalid memory access and application crash) by placing this structure in the public key of a certificate, as demonstrated by an RSA public key. CVE-2009-0789 The CMS_verify function in OpenSSL 0.9.8h through 0.9.8j, when CMS is enabled, does not properly handle errors associated with malformed signed attributes, which allows remote attackers to repudiate a signature that originally appeared to be valid but was actually invalid. CVE-2009-0591 | ||||||||||||||||||
php: cross site scripting vulnerability
| Package(s): | php | CVE #(s): | CVE-2008-5814 | ||||||||||||||||||||||||||||||||||||||||
| Created: | April 6, 2009 | Updated: | February 23, 2010 | ||||||||||||||||||||||||||||||||||||||||
| Description: | php has a cross site scripting vulnerability, here is the National Vulnerability Database entry description: Cross-site scripting (XSS) vulnerability in PHP, possibly 5.2.7 and earlier, when display_errors is enabled, allows remote attackers to inject arbitrary web script or HTML via unspecified vectors. NOTE: because of the lack of details, it is unclear whether this is related to CVE-2006-0208. | ||||||||||||||||||||||||||||||||||||||||||
tunapie: several vulnerabilities
| Package(s): | tunapie | CVE #(s): | CVE-2009-1253 CVE-2009-1254 | ||||
| Created: | April 8, 2009 | Updated: | April 8, 2009 | ||||
| Description: | From the Debian advisory:
Kees Cook discovered that insecure handling of temporary files may lead to local denial of service through symlink attacks. CVE-2009-1253 Mike Coleman discovered that insufficient escaping of stream URLs may lead to the execution of arbitrary commands if a user is tricked into opening a malformed stream URL. CVE-2009-1254 | ||||||
xpdf: arbitrary code execution
| Package(s): | xpdf | CVE #(s): | CVE-2009-1144 | ||||
| Created: | April 7, 2009 | Updated: | April 8, 2009 | ||||
| Description: | From the Gentoo advisory: Erik Wallin reported that Gentoo's Xpdf attempts to read the "xpdfrc" file from the current working directory if it cannot find a ".xpdfrc" file in the user's home directory. This is caused by a missing definition of the SYSTEM_XPDFRC macro when compiling a repackaged version of Xpdf. | ||||||
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current 2.6 development kernel is 2.6.30-rc1, released by Linus on April 8. "So the two week merge window has closed, and just as well - because we had a lot of changes. As usual. Certainly I had no urges to keep the window open to get those last remaining few megabytes of patches." Significant changes in 2.6.30 will include the integrity management architecture, the TOMOYO Linux security module, the preadv() and pwritev() system calls, object storage device support, the FS-Cache local filesystem caching layer, several new tracing features, the Nilfs filesystem, a number of other filesystem changes, and a huge number of new drivers. See the long-format changelog for all the details.
The current stable 2.6 kernel is 2.6.29.1, released
on April 2. "There's many bugfixes all over the tree, but this should
specifically fix the networking issues people had w/ 2.6.29. As usual,
you're encouraged to upgrade.
"
Kernel development news
Quotes of the week
Thou shalt remember to use git status or there shall be catcalls and much embarrasment shall come to pass.
2.6.30 merge window, part 2
There have been some 3400 non-merge changesets incorporated into the mainline since last week's update, for a total of some 9600 changes merged for 2.6.30 overall. At this point, the 2.6.30 merge window is complete. User-visible changes merged since last week include:
- The preadv() and pwritev() system calls have been
added. They have been long in coming; LWN first covered these system
calls in 2005. The expected user-space interface will be:
    ssize_t preadv(int d, const struct iovec *iov, int iovcnt, off_t offset);
    ssize_t pwritev(int d, const struct iovec *iov, int iovcnt, off_t offset);

Due to the portability challenges involved, though, the actual kernel interface (seen only by the C library) is somewhat different. (A brief user-space usage sketch appears after this list.)
- The loop block driver supports a new ioctl()
(LOOP_SET_CAPACITY) which can be used to change the size of
the device on the fly.
- The eventfd() system call takes a new flag
(EFD_SEMAPHORE) which causes it to implement simple
counting-semaphore behavior. See the
changelog entry for a description of how this works.
- The ext4 filesystem is now more careful about forcing data out to disk in
situations where small files have been truncated or renamed. This
behavior increases robustness in the face of crashes, but it can also
have a performance cost. There is a new mount option
(auto_da_alloc) which can be used to disable this behavior.
Also new for ext4 is a set of control knobs found under
/sys/fs/ext4.
- The ext3 filesystem, too, is more careful to flush data to disk when
running in the data=writeback mode.
- The default mode for ext3 has been changed from data=ordered
to data=writeback. The latter performs quite a bit better in
2.6.30, but also carries an information disclosure risk if the system
crashes. Distributors can change the default mode when they configure
their kernels; some may well choose to retain the older
data=ordered default.
- The btrfs filesystem has also been changed to be careful about
flushing data to disk after truncate or rename operations.
- The Nilfs log-structured
filesystem has been merged.
- The MD RAID layer now has support for block-layer integrity
checking. MD can also change chunk_size and layout in a reshape
operation - a capability which makes it possible to turn a RAID5 array
into RAID6 while it is running.
- The exofs (formerly osdfs) filesystem, providing support for object storage
devices, has been merged.
- FS-Cache (formerly cachefs) has been merged. This subsystem (first covered here in 2004)
provides a local caching layer for network filesystems; it has finally
overcome the concerns
expressed by some developers and made it into the mainline.
- The distributed storage
subsystem and pohmelfs network filesystem have been merged.
Interestingly, this code went in via the -staging tree.
- The ATA subsystem has gained support for the TRIM command.
- There are two new tuning knobs under /proc/sys/vm
(nr_pdflush_threads_min and nr_pdflush_threads_max);
they place limits on the number of running pdflush
threads in the system.
- Multiple message queue namespaces are now supported.
- The PA-RISC architecture has gained support for ftrace and
latencytop.
- The ARM architecture now has high memory support, for all of you out
there with 2GB ARM-based systems.
- The Xtensa architecture now supports systems without a memory
management unit.
- New device drivers:
- Block: Marvell MMC/SD/SDIO host drivers.
- Graphics: Samsung S3C framebuffers.
- Miscellaneous: National Semiconductor LM95241 sensor chips,
Linear Technology LTC4215 Hot Swap controller
I2C monitoring interfaces,
PPC4xx IBM DDR2 memory controllers,
AMD8111 HyperTransport I/O hubs,
AMD8131 HyperTransport PCI-X Tunnel chips,
TI TWL4030/TWL5030/TPS695x0 PMIC voltage
regulators,
DragonRise game controllers,
National Semiconductor DAC124S085 SPI DAC
devices,
Rohm BD2802 RGB LED controllers,
TXx9 SoC NAND flash memory controllers, and
ASUS ATK0110 ACPI hardware monitoring
interfaces.
- Networking: Neterion X3100 Series 10GbE PCIe server adapters.
- Processors and systems: Tensilica S6000 processors and
S6105 IP camera reference design kits, and
Merisc AVR32-based boards.
- Sound: HTC Magician audio devices.
- Video: i.MX1/i.MXL CMOS sensor interfaces,
Conexant cx231xx USB video capture devices, and
Legend Silicon LGS8913/LGS8GL5/LGS8GXX DMB-TH
demodulators.
- Staging drivers (those not considered ready for regular mainline inclusion): stlc4550 and stlc4560 wireless chipsets, Brontes PCI frame grabbers, ATEN 2011 USB to serial adapters, Phison PS5000 IDE adapters, Plan 9 style capability pseudo-devices, Intel Management Engine Interfaces, Line6 PODxt Pro audio devices, USB Quatech ESU-100 8 port serial devices, Ralink RT3070 wireless network adapters, and a vast array of COMEDI data acquisition drivers.
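Returning to the preadv()/pwritev() item above, here is a minimal user-space sketch of how the new calls are expected to be used. It assumes a C library recent enough to provide the wrappers, and the file name is purely an example.

/* Minimal sketch of a preadv() call; "example.dat" is hypothetical. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

int main(void)
{
	char header[16], body[64];
	struct iovec iov[2] = {
		{ .iov_base = header, .iov_len = sizeof(header) },
		{ .iov_base = body,   .iov_len = sizeof(body)   },
	};
	int fd = open("example.dat", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Scatter-read 80 bytes starting at offset 512 without changing
	 * the file position. */
	ssize_t n = preadv(fd, iov, 2, 512);
	if (n < 0)
		perror("preadv");
	else
		printf("read %zd bytes\n", n);

	close(fd);
	return 0;
}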
Changes visible to kernel developers include:
- There is a new memory debug tool controlled by the PAGE_POISONING
configuration variable. Turning this feature on causes a pattern to
be written to all freed pages and checked at allocation time. The
result is "a large slowdown," but also the potential to catch a number
of use-after-free errors.
- The new function:
    int pci_enable_msi_block(struct pci_dev *dev, int count);

allows a driver to enable a block of MSI interrupts. (A brief driver-side sketch appears after this list.)
- As part of the FS-Cache work, the "slow work" thread pool mechanism
has been merged. Some have expressed the hope that it would become
the One True Kernel Thread Pool, but there seems to be little progress
in that direction. See Documentation/slow-work.txt for more
information.
- There is a pair of new printing functions:
    int vbin_printf(u32 *bin_buf, size_t size, const char *fmt, ...);
    int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf);

The difference here is that vbin_printf() places the binary value of its arguments into bin_buf. The process can be reversed with bstr_printf(), which formats a string from the given binary buffer. The main use for these functions would appear to be with Ftrace; they allow the encoding of values to be deferred until a given trace string is read by user space.
- Also added is printk_once(), which only prints its message
the first time it is executed.
- The "kmemtrace" tracing facility has been merged. Kmemtrace provides
data on how the core slab allocators function. See Documentation/vm/kmemtrace.txt for
details.
- A number of ftrace changes have been merged. There is a workqueue tracer which tracks the operations of workqueue threads. The blktrace block subsystem tracer can now be used via ftrace. The new "event" tracer allows a user to turn on specific tracepoints within the kernel; tracepoints have been added for various scheduler and interrupt events. "Raw" events (with binary-formatted data) are available now. The new "syscall" tracer is for tracing system calls.
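As referenced in the pci_enable_msi_block() item above, here is a hedged sketch of how a driver's setup code might use the new call, based on the prototype quoted in that item; the function name and the choice of four vectors are invented for the example, and the fallback path is deliberately simplified.

/* Illustrative only: try to get a block of four MSI vectors, falling
 * back to a single MSI (or legacy INTx) if that fails. */
#include <linux/pci.h>

static int example_setup_irqs(struct pci_dev *pdev)
{
	if (pci_enable_msi_block(pdev, 4) == 0)
		return 4;	/* got the whole block */

	if (pci_enable_msi(pdev) == 0)
		return 1;	/* a single MSI vector */

	return 0;		/* fall back to pdev->irq (legacy INTx) */
}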
The merge window is now closed, and the stabilization process can begin. Past experience suggests that something close to 3000 more changes will find their way into the mainline before the 2.6.30 release, which can be expected to happen sometime in June.
Unioning file systems: Implementations, part 2
In the first article in this series about unioning file systems, I reviewed the terminology and major design issues of unioning file systems. In the second article, I described three implementations of union mounts: Plan 9, BSD, and Linux. In this article, I will examine two unioning file systems for Linux: unionfs and aufs.

While union mounts and union file systems have the same goals, they are fundamentally different "under the hood." Union mounts are a first-class operating system object, implemented right smack in the middle of the VFS code; they usually require some minor modifications to the underlying file systems. Union file systems, instead, are implemented in the space between the VFS and the underlying file system, with few or no changes outside the union file system code itself. With a union file system, the VFS thinks it's talking to a regular file system, and the file system thinks it's talking to the VFS, but in reality both are actually talking to the union file system code. As we'll see, each approach has advantages and disadvantages.
Unionfs
Unionfs is the best-known and longest-lived implementation of a unioning file system for Linux. Unionfs development began at SUNY Stony Brook in 2003, as part of the FiST stackable file system project. Both projects are led by Erez Zadok, a professor at Stony Brook as well as an active contributor to the Linux kernel. Many developers have contributed to unionfs over the years; for a complete list, see the list of past students on the unionfs web page - or read the copyright notices in the unionfs source code.

Unionfs comes in two major versions, version 1.x and version 2.x. Version 1 was the original implementation, started in 2003. Version 2 is a rewrite intended to fix some of the problems with version 1; it is the version under active development. A design document for version 2 is available at http://www.filesystems.org/unionfs-odf.txt. Not all the features described in this document are implemented (at least not in the publicly available git tree); for example, whiteouts are still stored as directory entries with special names, which pollutes the namespace and makes stacking of a unionfs file system over another unionfs file system impossible.
Unionfs basic architecture
The unionfs code is a shim between the VFS and underlying file systems (the branches). Unionfs registers itself as a file system with the VFS and communicates with it using the standard VFS-file system interface. Unionfs supplies various file system operation sets (such as super block operations, which specify how to set up the file system at mount, allocate new inodes, sync out changes to disk, and tear down its data structures on unmount). At the data structure level, unionfs file systems have their own superblock, mount, inode, dentry, and file structures that link to those of the underlying file systems. Each unionfs file system object includes an array of pointers to the related objects from the underlying branches. For example, the unionfs dentry private data (kept in the d_fsdata field) looks
like:
/* unionfs dentry data in memory */
struct unionfs_dentry_info {
/*
* The semaphore is used to lock the dentry as soon as we get into a
* unionfs function from the VFS. Our lock ordering is that children
* go before their parents.
*/
struct mutex lock;
int bstart;
int bend;
int bopaque;
int bcount;
atomic_t generation;
struct path *lower_paths;
};
The lower_paths member is a pointer to an array of path
structures (which include a pointer to both the dentry and
the mnt structure) for each directory with the same path in
the lower file systems. For example, if you had three branches, and
two of the branches had a directory named "/foo/bar/",
then, on lookup of that directory, unionfs will allocate (1) a
dentry structure, (2) a unionfs_dentry_info
structure with a three-member lower_paths array, and (3) two
dentry structures for the directories. Two members of
the lower_paths array will be filled with pointers to
these dentries and their respective mnt structures. The
array itself is dynamically allocated, grown, and shrunk according to
the number of branches. The number of branches (and therefore the
size of the array) is limited by a compile-time
constant, UNIONFS_MAX_BRANCHES, which defaults to 128 -
about 126 more than commonly necessary, and more than enough for every
reasonable application of union file systems. The rest of the unionfs
data structures - super blocks, dentries, etc. - look very similar to
the structure described above.
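To make the bookkeeping concrete, here is a small, purely illustrative loop over the lower_paths array. It assumes, as the structure above suggests, that bstart and bend bound the branch indices at which the object exists and that slots for branches without the object hold NULL dentries; it is not code from unionfs itself.

/* Illustrative only: visit each lower-level dentry backing a unionfs
 * dentry, skipping branches on which the object does not exist. */
static void for_each_lower_dentry(struct unionfs_dentry_info *info)
{
	int i;

	for (i = info->bstart; i <= info->bend; i++) {
		struct path *p = &info->lower_paths[i];

		if (!p->dentry)
			continue;	/* no object on this branch */

		/* ... operate on p->dentry and p->mnt ... */
	}
}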
The VFS calls the unionfs inode, dentry, etc. routines directly, which
then call back into the VFS to perform operations on the corresponding
data structures of the lower level branches. Take the example of
writing to a file: the VFS calls the write() function in
the inode's file operations vector. The inode is a unionfs inode, so
it calls unionfs_write(), which
finds the lower-level inode and checks whether it is hosted on a
read-only branch. (Unionfs copies up a file on the first write to the
data or metadata, not on the first open() with write
permission.) If the file is hosted on a read-only branch, unionfs
finds a writable branch and creates a new file on that branch (and any
directories in the path that don't already exist on the selected
branch). It then copies up the various associated attributes - file
modification and access times, owner, mode, extended attributes,
etc. - and the file data itself. Finally, it calls the
low-level write() file operation from the newly allocated
inode and returns the result back to the VFS.
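The sequence just described can be summarized in a bit of pseudocode. This is not the actual unionfs write path; every helper name is invented, and locking and error handling are omitted. It is only meant to show the order of operations.

/* Pseudocode sketch of the copy-up-on-write logic described above;
 * all helpers named here are hypothetical. */
static ssize_t unionfs_write_sketch(struct file *file, const char __user *buf,
				    size_t count, loff_t *pos)
{
	struct file *lower = lower_file_of(file);		/* hypothetical */

	if (branch_is_read_only(lower)) {			/* hypothetical */
		int wb = find_writable_branch(file);		/* hypothetical */

		/* Create any missing parent directories on branch wb,
		 * then copy up attributes, xattrs, and the file data. */
		create_parents_on_branch(file, wb);		/* hypothetical */
		copy_up_file(file, wb);				/* hypothetical */
		lower = lower_file_of(file);			/* now writable */
	}

	/* Hand the write to the newly selected lower file. */
	return vfs_write(lower, buf, count, pos);
}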
Unionfs supports multiple writable branches. A file deletion (unlink) operation is propagated through all writable branches, deleting (decrementing the link count of) all files with the same pathname. If unionfs encounters a read-only branch, it creates a whiteout entry in the branch above it. Whiteout entries are named ".wh.<filename>"; a directory is marked opaque with an entry named ".wh.__dir_opaque".
Unionfs provides some level of cache coherency by revalidating dentries before operating on them. This works reasonably well as long as all accesses to the underlying file systems go through the unionfs mount. Direct changes to the underlying file systems are possible, but unionfs cannot correctly handle this in all cases, especially when the directory structure changes.
Unionfs is under active development. According to the version 2 design document, whiteouts will be moved to a small external file system. An inode remapping file in the external file system will allow persistent, stable inode numbers to be returned, making NFS exports of unionfs file systems behave correctly.
The status of unionfs as a candidate for merging into the mainline Linux kernel is mixed. On the one hand, Andrew Morton merged unionfs into the -mm tree in January 2007, on the theory that unionfs may not be the ideal solution, but it is one solution to a real problem. Merging it into -mm may also prompt developers who don't like the design to work on other unioning designs. However, unionfs has strong NACKs from Al Viro and Christoph Hellwig, among others, and Linus is reluctant to overrule subsystem maintainers.
The main objections to unionfs include its heavy duplication of data structures such as inodes, the difficulty of propagating operations from one branch to another, a few apparently insoluble race conditions, and the overall code size and complexity. These objections also apply to a greater or lesser degree to other stackable file systems, such as ecryptfs. The consensus at the 2009 Linux file systems workshop was that stackable file systems are conceptually elegant, but difficult or impossible to implement in a maintainable manner with the current VFS structure. My own experience writing a stacked file system (an in-kernel chunkfs prototype) leads me to agree with these criticisms.
Stackable file systems may be on the way out. Dustin Kirkland proposed a new design for encrypted file systems that would not be based on stackable file systems. Instead, it would create generic library functions in the VFS to provide features that would also be useful for other file systems. We identified several specific instances where code could be shared between btrfs, NFS, and the proposed ecryptfs design. Clearly, if stackable file systems are no longer a part of Linux, the future of a unioning file system built on stacking is in doubt.
aufs
Aufs, short for "Another UnionFS", was initially implemented as a fork of the unionfs code, but was rewritten from scratch in 2006. The lead developer is Junjiro R. Okajima, with some contributions from other developers. The main aufs web site is at http://aufs.sourceforge.net/.

The architecture of aufs is very similar to unionfs. The basic building block is the array of lower-level file system structures hanging off of the top-level aufs object. Whiteouts are named similarly to those in unionfs, but they are hard links to a single whiteout inode in the local directory. (When the maximum link count for the whiteout inode is reached, a new whiteout inode is allocated.)
Aufs is the most featureful of the unioning file systems. It supports multiple writable branch selection policies. The most useful is probably the "allocate from branch with the most free space" policy. Aufs supports stable, persistent inode numbers via an inode translation table kept on an external file system. Hard links across branches work. In general, if there is more than one way to do it, aufs not only implements them all but also gives you a run-time configuration option to select which way you would like to do it.
Given the incredible flexibility and feature set of aufs, why isn't it more popular? A quick browse through the source code gives a clue. Aufs consists of about 20,000 lines of dense, unreadable, uncommented code, as opposed to around 10,000 for unionfs, 3,000 for union mounts, and 60,000 for all of the VFS. The aufs code is generally something that one does not want to look at.
The evolution of the aufs source base tends towards increasing complexity; for example, when removing a directory full of whiteouts takes an unreasonably long time, the solution is to create a kernel thread that removes the whiteouts in the background, instead of trying to find a more efficient way to handle whiteouts. Aufs slices, dices, and makes julienne fries, but it does so in ways that are difficult to maintain and which pollute the namespace. More is not better in this case; the general trend is that the fewer the lines of code (and features) in a unioning file system, the better the feedback from other file system developers.
Junjiro Okajima recently submitted a somewhat stripped-down version of aufs for mainline inclusion.
While aufs is used by a number of practical projects (such as the Knoppix Live CD), aufs shows no sign of getting closer to being merged into mainline Linux.
The future of unioning file systems development
Disclaimer: I am actively working on union mounts, so my summary will be biased in their favor.

Union file systems have the advantage of keeping most of the unioning code segregated off into its own corner - modularity is good. But it's hard to implement efficient race-free file system operations without the active cooperation of the VFS. My personal opinion is that union mounts will be the dominant unioning file system solution. Union mounts have always been more popular with the VFS maintainers, and during the VFS session at the recent file systems workshop, Jan Blunck and I were able to satisfactorily answer all of Al Viro's questions about corner cases in union mounts.
Part of what makes union mounts attractive is that we have focused on specific use cases and dumped the features that have a low reward-to-maintenance-cost ratio. We said "no" to NFS export of unioned file systems and therefore did not have to implement stable inode numbers. While NFS export would be nice, it's not a key design requirement for the top use cases, and implementing it would require a stackable file system-style double inode structure, with the attendant complexity of propagating file system operations up and down between the union mount inode and the underlying file system inode. We won't handle online modification of branches other than the topmost writable branch, or modification of file systems that don't go through the union mount code, so we don't have to deal with complex cache-coherency issues. To enforce this policy, Al Viro suggested a per-superblock "no, you REALLY can't ever write to this file system" flag, since currently read/write permissions are on a per-mount basis.
The st_dev and st_ino fields in stat will
change after a write to a file (technically, an open with write
permission), but most programs use this information, along
with ctime/mtime to decide whether a file has changed -
which is exactly what has just happened, so the application should
behave as expected. Files from different underlying devices in the
same directory may confuse userland programs that expect to be able to
rename within a directory - e.g., at least some versions of "make
menuconfig" barf in this situation. However, this problem already
exists with bind mounts, which can result in entries with different
backing devices in the same directory. Rewriting the few programs
that don't handle this correctly is necessary to handle already
existing Linux features.
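For illustration, the device/inode/timestamp test that such programs typically perform looks roughly like the following sketch; the helper name is made up, but the fields compared are the ones discussed above.

/* Sketch of the usual "has this file changed?" test: compare device
 * and inode numbers plus the modification time against values saved
 * from an earlier stat() call. */
#include <stdbool.h>
#include <sys/stat.h>

static bool file_has_changed(const char *path, const struct stat *saved)
{
	struct stat now;

	if (stat(path, &now) != 0)
		return true;		/* treat errors as "changed" */

	return now.st_dev   != saved->st_dev  ||
	       now.st_ino   != saved->st_ino  ||
	       now.st_mtime != saved->st_mtime;
}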
Changes to the underlying read-only file system must be done offline - when it is not mounted as part of the union. We have at least two schemes for propagating those changes up to the writable branch, which may have marked directories opaque that we now want to see through again. One is to run a userland program over the writable file system to mark everything transparent again. Another is to use the mtime/ctime information on directories to see if the underlying directory has changed since we last copied up its entries. This can be done incrementally at run-time.
Overall, the solution with the most buy-in from kernel developers is
union mounts. If we can solve the readdir() problem -
and we think we can - then it will be on track for merging in a
reasonable time frame.
Linux Storage and Filesystem workshop, day 1
The annual Linux kernel summit may gain the most attention, but the size of the kernel community makes it hard to get deeply into subsystem-specific topics at that event. So, increasingly, kernel developers gather for more focused events where some real work can be done. One of those gatherings is the Linux Storage and Filesystem workshop; the 2009 workshop began on April 6. Here is your editor's summary of the discussions which took place on the first day.

Things began with a quick recap of the action items from the previous year. Some of these had been fairly well resolved over that time; these include power management, support for object storage devices, fibre channel over Ethernet, barriers on by default in ext4, the fallocate() system call, and enabling relatime by default. The record for some other objectives is not quite so good; low-level error handling is still not what it could be, "too much work" has been done with I/O bandwidth controllers while nothing has made it upstream, the union filesystem problem has not been solved, etc. As a whole, a lot has been done, but a lot remains to do.
Device discovery
Joel Becker and Kay Sievers led a session on device discovery. On a contemporary system, device numbers are not stable across reboots, and neither are device names. So anything in the system which must work with block devices and filesystems must somehow find the relevant device first. Currently, that is being done by scanning through all of the devices on the system. That works reasonably well on a laptop, but it is a real problem on systems with huge numbers of block devices. There are stories of large systems taking hours to boot, with the bulk of that time being spent scanning (repeatedly - once for every mount request) through known devices.
What comes out of the discussion, of course, is that user space needs a better way to locate devices. A given program may be searching for a specific filesystem label, UUID, or something else; a good search API would support all of these modes and more. What would be best would be to build some sort of database where each new device is added at discovery time. As additional information becomes available (when a filesystem label is found, for example), it is added to the database. Then, when a specific search is done, the information has already been gathered and a scan of the system's devices is no longer necessary.
In the simplest form, this database can be the various directories full of symbolic links that udev creates now. These directories solve much of the problem, but they can never be a complete solution for one reason: some types of devices - iSCSI targets, for example - do not really exist for the system until user space has connected to them. Multipath devices also throw a spanner into the works. For this reason, Ted Ts'o asserted that some sort of programmatic API will always be needed.
Not a lot of progress was made toward specifying a solution; the main concern, seemingly, was coming to a common understanding of the problem. What's likely to happen is that the libblkid library will be extended to provide the needed functionality. Next year, we'll see if that has been done.
Asynchronous and direct I/O
Zach Brown's stated purpose in this session was to "just rant for 45 minutes" about the poor state of asynchronous I/O (AIO) support in Linux. After ten years, he says, we still have an inadequate system which has never been fixed. The problems with Linux AIO are well documented: only a few operations are truly asynchronous, the internal API is terrible, it does not properly support the POSIX AIO API, etc. There, Zach says, are people wanting to do a lot more with AIO than is currently supported by Linux.
That said, various alternatives have been proposed over time but nobody ever tests them.
The conversation then shifted for a bit; Jeff Moyer took a turn to complain about the related topic of direct I/O. It works poorly for applications, he says, its semantics are different for different filesystems, the internal I/O paths for direct I/O are completely different from those used for buffered I/O, and it is full of races and corner cases. Not a pretty picture.
One of the biggest complications with direct I/O is the need for the system to support simultaneous direct and buffered I/O on the same file. Prohibiting that combination would simplify the problem considerably, but that is a hard thing to do. In particular, it would tend to break backups, which often want to read (in buffered mode) a file which is open for direct I/O. There was some talk of adding a new O_REALLYDIRECT mode which would lock out buffered operations, but it's not clear that the advantages would make this change worthwhile.
Another thing that would help with direct I/O would be to remove the alignment restrictions on I/O buffers. That's a hard change to make, though; many disk controllers can only perform DMA to properly-aligned buffers. So allowing unaligned buffers would force the kernel to copy data internally, which rather defeats the purpose of direct I/O. There is one use case, though, where direct I/O might still make sense: some direct I/O users really only want to avoid filling the system page cache with their data. Using the fadvise() system call is arguably a better way of achieving that goal, but application developers are said to distrust it.
All told, it seems from the discussion that there is not a whole lot to be done to improve direct I/O on Linux.
Returning to the AIO problem, the developers discussed Zach's proposed acall() API, which shifts blocking operations into special-purpose kernel threads. The use of threads in this manner promises a better AIO implementation than Linux has ever had in the past. But there is a cost: some core scheduler changes need to be made to support acall(). Among other things, there are some complexities related to transferring credentials between threads, propagating signals from AIO threads back to the original process, etc. The end result is that scheduler performance may well suffer slightly. The scheduler developers tend to be sensitive to even very small performance penalties, so there may well be pushback when acall() is proposed for mainline inclusion.
The addition of acall() would also add a certain maintenance burden. Whenever a kernel developer makes a change to the task structure, that developer would have to think about whether the change is relevant to acall() and whether it would need to be transferred to or from worker threads.
The conclusion was that acall() looks promising, and that the developers in the room thought that it could work. They also agreed, though, that a number of the relevant people were not in the room, so the question of whether acall() is appropriate for the kernel as a whole could not be answered.
RAID unification
The kernel currently contains two software RAID implementations, found in the MD and device mapper (DM) subsystems. Additionally, the Btrfs filesystem is gaining RAID capabilities of its own, a process which is expected to continue in the future. It is generally agreed that having three (or more) versions of RAID in the kernel is not an optimal situation. What a proper solution will look like, though, is not all that clear.
The session on RAID unification started with this question: who thinks that block subsystem development should be happening in the device mapper layer? A single hand was raised. In general, it seems, the developers in the room had a relatively low opinion of the device mapper RAID code. It should be said, of course, that there were no DM developers present.
What it comes down to is that the next generation of filesystems wants to include multiple device support. Plans for Btrfs include eventual RAID 6 support, but Btrfs developer Chris Mason has no interest in writing that code. It would be much nicer to use a generic RAID layer provided by the kernel. There are challenges, though. For example, a RAID-aware filesystem really wants to use different stripe sizes for data and metadata. Standard RAID, which knows little about the filesystems built on it, does not provide any such feature.
So what would a filesystem RAID API look like? Christoph Hellwig is working on this problem, but he's not ready to deal with the filesystem problem yet. Instead, he's going to start by figuring out how to unify the MD and DM RAID code. Some of this work may involve creating a set of tables in the block layer for mapping specific regions of a virtual device onto real regions in a lower-level device. The block layer already does that - it's how partitions work - but incorporating RAID would complicate things considerably. But, once that's done, we'll be a lot closer to having a general-purpose RAID layer which can be used by multiple callers.
The talk wandered into the area of error handling for a while. In particular, the tools Linux provides to administrators to deal with bad blocks are still not what they could be. There was talk about providing a consistent interface for reporting bad blocks - including tools for mapping those blocks back to the files that contain them - as well as performing passive scanning for bad blocks.
The action items that came out of this discussion include the rework of in-kernel RAID by Christoph. After that, the process of trying to define filesystem-specific interfaces will begin.
Rename, fsync, and ponies
Prior to Ted Ts'o's session on fsync() and rename(), some joker filled the room with coloring-book pages depicting ponies. These pages reflected the sentiment that Ted has often expressed: application developers are asking too much of the filesystem, so they might as well request a pony while they're at it.
Ted apologized to the room for his part in the implementation of the data=ordered mode for ext3. This mode was added as a way to improve the security of the filesystem, but it had the side effect of flushing many changes to the filesystem within a five-second window. That allowed application developers to "get lazy" and stop worrying about whether their data had actually hit the disk at the right times. Now those developers are resisting the idea that they should begin to worry again.
This problem has a longer history than many people realize. The XFS
filesystem first hit it back around 2001. But, Ted says, most application
developers didn't understand why they were getting corrupted files after a
crash. Rather than fix their applications, they just switched filesystems
- to ext3. Things worked for some time until Ubuntu users started testing
the alpha "Jaunty" release, which uses ext4 by default
makes ext4 available as an installation option. At that point,
they started finding zero-length files after crashes, and they blamed
ext4.
But, Ted says, the real problem is the missing fsync() calls. There are a number of reasons why they are not there, including developer laziness, the problem that fsync() on ext3 has become very expensive, the difficulty involved in preserving access control lists and other extended attributes when creating new files, and concerns about the battery-life costs of forcing the disk to spin up. Ted had more sympathy for some of these reasons than others, but, he says, "the application developers outnumber us," so something will have to be done to meet their concerns.
Valerie Aurora broke in to point out that application developers have been put into a position where they cannot do the right thing. A call to fsync() can stall the system for quite a while on ext3. Users don't like that either; witness the fuss caused by excessive use of fsync() by the Firefox browser. So it's not just that application developers are lazy; there are real disincentives to the use of fsync(). Ted agreed, but he also claimed that a lot of application developers are refusing to help fix the problem.
In the short term, the ext4 filesystem has gained a number of workarounds to help prevent the worst surprises. If a newly-written file is renamed on top of another, existing file, its data will be flushed to disk with the next commit. Similar things happen with files which have been truncated and rewritten. There is a performance cost to these changes, but they do make a significant part of the problem go away.
For the longer term, Ted asked: should the above-described fixes become a part of the filesystem policy for Linux? In other words, should application developers be assured that they'll be able to write a file, rename it on top of another file, omit fsync(), and not encounter zero-length files after a crash? The answer turns out to be "yes," but first Ted presented his other long-term ideas.
One of those is to improve the performance of the fsync() system call. The ext4 workarounds have also been added to ext3 when it runs in the data=writeback mode. Additionally, some block-layer fixes have been incorporated into 2.6.30. With those fixes in place, it is possible to run in data=writeback mode, avoid the zero-length file problem, and also avoid the fsync() performance problem. So, Ted asked, should data=writeback be made the default for ext3?
This idea was received with a fair amount of discomfort. The data=writeback mode brings back problems that were fixed by data=ordered; after a crash, a file which was being written could turn up with completely unrelated data in it. It could be somebody else's sensitive data. Even if it's boring data, the problem looks an awful lot like file corruption to many users. It seems like a step backward and a change which is hard to justify for a filesystem which is headed toward maintenance mode. So it would be surprising to see this change made.
[After writing the above, your editor noticed that Linus had just merged a change to make data=writeback the default for ext3 in 2.6.30. Your editor, it seems, is easily surprised.]
Finally, the idea of the fbarrier() system call was raised. Essentially, fbarrier() would ensure that any data written to a file prior to the call would be flushed to disk before any metadata changes made after the call. It could be implemented with fsync(); for ext3 data=ordered mode, it would do nothing at all. Ted did not try hard to sell this system call, saying that it was mainly there to address the laptop power consumption concern. Ric Wheeler claimed that it would be a waste of time; by the time people are actually using it, we'll all have solid-state drives in our laptops and the power concern will be gone. In general, enthusiasm for fbarrier() was low.
So the discussion turned back to the idea of generalizing and guaranteeing the ext4 workarounds. Chris Mason asked when there might be a time that somebody would not want to rename files safely; he did not get an answer. There was concern that these workarounds could not be allowed to hurt the performance of well-written applications. But the general sentiment was that these workarounds should become policy that all filesystems should implement.
pNFS
There was a session on supporting parallel NFS (pNFS). It was mostly a detailed, technical discussion on what sort of API is needed to allow clustered filesystems to tell pNFS about how files are distributed across servers. Your editor will confess that his eyes glazed over after a while, and his notes are relatively incoherent. Suffice to say that, eventually, OCFS2 and GFS will be able to communicate with pNFS servers and that all the people who really care about how that works will understand it.
Miscellaneous topics
The final session of the day related to "miscellaneous VFS topics"; the first had to do with eCryptfs. This filesystem provides encryption for individual files; it is currently implemented as a stacking filesystem using an ordinary filesystem to provide the real storage. The stacking nature of eCryptfs has long been a problem; now some Ubuntu developers are working to change it.
In particular, what they would like to do is to move the encryption handling directly into the VFS layer. Somehow users will supply a key to the kernel, which will then transparently handle the encryption and decryption of data. To that end, some sort of transformation layer will be provided to process the data between the page cache and the underlying block device.
One question that came up was: what happens when the user does not have a valid key? Should the VFS just provide encrypted data in that case? Al Viro raised the question of what happens when one process opens the file with a key while another one opens it without a key. At that point there will be a mixture of encrypted and clear-text pages in the cache, a situation which seems sure to lead to confusion. So it seems that the VFS will simply refuse to provide access to files if the necessary key is not provided.
There are various problems to be solved in the creation of the transformation layer - things like not letting processes modify a page while it is being encrypted or decrypted. Chris Mason noted that he faces a similar problem when generating checksums for pages in Btrfs. These are problems which can be addressed, though. But it was clear that this kind of transformation is likely to be built into the VFS in the future. Stacking filesystems just do not work well with the Linux VFS as it exists now.
Next up was David Brown, who works in the scientific high-performance computing field. David has an interesting problem. He runs massive systems with large storage arrays spread out across many systems. Whenever some process calls stat() on a file stored in that array, the entire cluster essentially has to come to a stop. Locks have to be acquired, cached pages have to be flushed out, etc., just to ensure that specific metadata (the file size in particular) is available and correct. So, if a scientist logs in and types "ls" in a large directory, the result can be 30 minutes in coming and little work gets done in the mean time. Not ideal.
What David would like is a "stat() light" call which wouldn't cause all of this trouble. It should return the metadata to the best of its knowledge, but it would not flush caches or take cluster-wide locks to obtain this information. If that means that the size is not entirely accurate, so be it. In the subsequent discussion, the idea was modified a little bit. "Slightly inaccurate" results would not be returned; instead, the size would simply be zeroed out. It was felt that returning no information at all was better than returning something which may have no real basis in reality.
Beyond that, there would likely be a mask associated with the system call. Initially it was suggested that the mask would be returned; it would have bits set to indicate which fields in the return stat structure are valid. But it was also suggested that the mask should be an input parameter instead; the call would then do whatever was needed to provide the fields requested by the caller. Using the mask as an input parameter would avoid the need for duplicate calls in the case where the necessary information is not provided the first time around.
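No concrete interface was agreed on, but purely as an illustration of the input-mask idea discussed above, such a call might be shaped something like the sketch below; the name, flags, and structure are entirely hypothetical and exist nowhere in the kernel.

/* Entirely hypothetical sketch of a "lightweight stat" with an input
 * mask; nothing like this interface actually exists. */
#define STATL_WANT_SIZE   0x0001	/* caller needs an accurate size  */
#define STATL_WANT_MTIME  0x0002	/* caller needs an accurate mtime */

struct stat_light {
	unsigned int valid;		/* which fields below are meaningful */
	unsigned long long size;
	long mtime;
	/* ... other stat fields ... */
};

/*
 * "want" names the fields the caller insists on; the filesystem may
 * take cluster-wide locks only for those, zeroing any field it cannot
 * provide cheaply and clearing its bit in result->valid.
 */
int stat_light(const char *path, unsigned int want, struct stat_light *result);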
The actual form of the system call is likely to be determined when somebody follows Christoph Hellwig's advice to "send a bloody patch."
The final topic of the day was union mounts. Valerie Aurora, who led this session, recently wrote an article about union filesystems and the associated problems for LWN. The focus of this session was the readdir() system call in particular. POSIX requires that readdir() provide a position within a directory which can be used by the application at any future time to return to the same spot and resume reading directory entries. This requirement is hard for any contemporary filesystem to meet. It becomes almost impossible for union filesystems, which, by definition, are presenting a combination of at least two other filesystems.
The solution that Valerie was proposing was to simply recreate directories in the top (writable) layer of the union. The new directories would point to files in the appropriate places within the union and would have whiteouts applied. That would eliminate the need to mix together directory entries from multiple layers later on, and the readdir() problem would collapse back to the single-filesystem implementation. At least, that holds true for as long as none of the lower-level filesystems in the union change. Valerie proposes that these filesystems be forced to be read-only, with an unmount required before they could be changed.
The good news is that this is how BSD union mounts have worked for a long time.
The bad news is that there's one associated problem: inode number stability. NFS servers are expected to provide stable inode numbers to clients even across reboots. But copying a file entry up to the top level of a union will change its inode number, confusing NFS clients. One possible solution to this problem is to simply decree that union mounts cannot be exported via NFS. It's not clear that there is a plausible use case for this kind of export in any case. The other solution is to just let the inode number change. That could lead to different NFS clients having open file descriptors to different versions of the file, but so be it. The consensus seemed to lean toward the latter solution.
And that is where the workshop concluded. Your editor will be attending most of the second and final day (minus a brief absence for a cameo appearance at the Embedded Linux Conference); a report from that day will be posted shortly thereafter.
Linux Storage and Filesystem Workshop, day 2
The second and final day of the Linux Storage and Filesystem Workshop was held in San Francisco, California on April 7. Conflicting commitments kept your editor from attending the entire event, but he was able to participate in sessions on solid-state device support, storage topology information, and more.
Supporting SSDs
The solid-state device topic was the most active discussion of the morning. SSDs clearly stand to change the storage landscape, but it often seems that nobody has yet figured out just how things will change or what the kernel should do to make the best use of these devices. Some things are becoming clearer, though. The kernel will be well positioned to support the current generation SSDs. Supporting future products, though, is going to be a challenge.
Matthew Wilcox, who led the discussion, started by noting that Intel SSDs
are able to handle a large number of operations in parallel. The
parallelism is so good, in fact, that there is really little or no
advantage in delaying operations. I/O requests should be submitted
immediately; the block I/O subsystem shouldn't even attempt to merge
adjacent requests. This message was diluted a bit later on, but the core
message is clear: the kernel should, when driving an SSD, focus on getting
out of the way and processing operations as quickly as possible.
It was asked: how do these drives work internally? This would be nice to know; the better informed the kernel developers are, the better job they can do of driving the devices. It seems, though, that the firmware in these devices - the part that, for now, makes Intel devices work better than most of the alternatives - is laden with Valuable Intellectual Property, and not much information will be forthcoming. Solid-state devices will be black boxes for the foreseeable future.
In any case, current-generation Intel SSDs are not the only type of device that the kernel will have to work with. Drives will differ greatly in the coming years. What the kernel really needs to know is a few basic parameters: what kind of request alignment works best, what request sizes are fastest, etc. It would be nice if the drives could export this information to the operating system. There is a mechanism by which this can be done, but current drives are not making much information available.
One clear rule holds, though: bigger requests are better. They might perform better in the drive itself, but, with high-quality SSDs, the real bottleneck is simply the number of requests which can be generated and processed in a given period of time. Bundling things into larger requests will tend to increase the overall bandwidth.
A related rule has to do with changes in usage patterns. It would appear that the Intel drives, at least, observe the requests issued by the computer and adapt their operation to improve performance. In particular, they may look at the typical alignment of requests. As a result, it is important to let the drive know if the usage pattern is about to change - when the drive is repartitioned and given a new filesystem, for example. The way to do this, evidently, is to issue an ATA "secure erase" command.
From there, the conversation moved to discard (or "trim") requests, which are used by the host to tell the drive that the contents of specific blocks are no longer needed. Judicious use of trim requests can help the drive in its garbage collection work, improving both performance and the overall life span of the hardware. But what constitutes "judicious use"? Doing a trim when a new filesystem is made is one obvious candidate. When the kernel initializes a swap file, it trims the entire file at the outset since it cannot contain anything of use. There is no controversy here (though it's amusing to note that mkfs does not, yet, issue trim commands).
But what about when the drive is repartitioned? It was suggested that the portion of the drive which has been moved from one partition to another could be trimmed. But that raises an immediate problem: if the partition table has been corrupted and the "repartitioning" is really just an attempt to restore the drive to a working state, trimming that data would be a fatal error. The same is true of using trim in the fsck command, which is another idea which has been suggested. In the end, it is not clear that using trim in either case is a safe thing to do.
The other obvious place for a trim command is when a file is deleted; after all, its data clearly is no longer needed. But some people have questioned whether that is a good time too. Data recovery is one issue; sometimes people want to be able to get back the contents of an erroneously-deleted file. But there is also a potential performance issue: on ATA drives, trim commands cannot be issued as tagged commands. So, when a trim is performed, all normal operations must be brought to a halt. If that happens too often, the throughput of the drive can suffer. This problem could be mitigated by saving up trim operations and issuing them all together every few minutes. But it's not clear that the real performance impact is enough to justify this effort. So some benchmarking work will be needed to try to quantify the problem.
An alternative which was suggested was to not use trim at all. Instead, a similar result could be had by simply reusing the same logical block numbers over and over. A simple-minded implementation would always just allocate the lowest-numbered free block when space is needed, thus compressing the data toward the front end of the drive. There are a couple of problems with this approach, though, starting with the fact that a lot of cheaper SSDs have poor wear-leveling implementations. Reusing low-numbered blocks repeatedly will wear those drives out prematurely. The other problem is that allocating blocks this way would tend to fragment files. The cost of fragmentation is far less than with rotating storage, but there is still value in keeping files contiguous. In particular, it enables larger I/O operations, and, thus, better performance.
There was a side discussion on how the kernel might be able to distinguish "crap" drives from those with real wear-leveling built in. There's actually some talk of trying to create value-neutral parameters which a drive could use to export this information, but there doesn't seem to be much hope that the vendors will ever get it right. No drive vendor wants its hardware to self-identify as a lower-quality product. One suggestion is that the kernel could interpret support for the trim command as an indicator that it's dealing with one of the better drives. That led to the revelation that the much-vaunted Intel drives do not, currently, support trim. That will change in future versions, though.
A related topic is a desire to let applications issue their own trim operations on portions of files. A database manager could use this feature to tell the system that it will no longer be interested in the current contents of a set of file blocks. This is essentially a version of the long-discussed punch() system call, with the exception that the blocks would remain allocated to the file. De-allocating the blocks would be correct at one level, but it would tend to fragment the file over time, force journal transactions, and make O_DIRECT operations block while new space is allocated. Database developers would like to avoid all of those consequences. So this variant of punch() (perhaps actually a variant of fallocate()) would discard the data, but keep the blocks in place.
From there, the discussion went to the seemingly unrelated topic of "thin provisioning." This is an offering from certain large storage array vendors; they will sell an array which claims to be much larger than the amount of storage actually installed. When the available space gets low, the customer can buy more drives from the vendor. Meanwhile, from the point of view of the system, the (apparently) large array has never changed.
Thin provisioning providers can use the trim command as well; it lets them know that the indicated space is unused and can be allocated elsewhere. But that leads to an interesting problem if trim is used to discard the contents of some blocks in the middle of the file. If the application later writes to those blocks - which are, theoretically, still in place - the system could discover that the device is out of space and fail the request. That, in turn, could lead to chaos.
The truth of the matter is that thin provisioning has this problem regardless of the use of the trim command. Space "allocated" with fallocate() could turn out to be equally illusory. And if space runs out when the filesystem is trying to write metadata, the filesystem code is likely to panic, remount the filesystem read-only, and, perhaps, bring down the system. So thin provisioning should be seen as broken currently. What's needed to fix it is a way for the operating system to tell the storage device that it intends to use specific blocks; this is an idea which will be taken back to the relevant standards committees.
Finally, there was some discussion of the CFQ I/O scheduler, which has a lot of intelligence which is not needed for SSDs. There's a way to bypass CFQ for some SSD operations, but CFQ still adds an approximately 3% performance penalty compared to the no-op I/O scheduler. That kind of cost is bearable now, but it's not going to work for future drives. There is real interest in being able to perform 100,000 operations per second - or more - on an SSD. That kind of I/O rate does not leave much room for system overhead. So, at some point, we're going to see a real effort to streamline the block I/O paths to ensure that Linux can continue to get the best out of solid-state devices.
Storage topology
Martin Petersen introduced the storage topology issue by talking about the coming 4K-sector drives. The sad fact is that, for all the talk of SSDs, rotating storage will be with us for a while yet. And the vendors of disk drives intend to shift to 4-kilobyte sectors by 2011. That leads to a number of interesting support problems, most of which were covered in this LWN article in March. In the end, the kernel is going to have to know a lot more about I/O sizes and alignment requirements to be able to run future drives.
To that end, Martin has prepared a set of patches which export this information to the system. The result is a set of directories under /sys/block/<drive>/topology which provide the sector size, needed alignment, optimal I/O size, and more. There's also a "consistency flag" which tells the user whether any of the other information actually matches reality. In some situations (a RAID mirror made up of drives with differing characteristics, for example), it is not possible to provide real information, so the kernel has to make something up.
There was some wincing over this use of sysfs, but the need for this kind of information is clear. So these patches will probably be merged into the 2.6.31 kernel.
readdirplus()
There was also a session on the proposed readdirplus() system call. This call would function much like readdir() (or, more likely, like getdents()), but it would provide file metadata along with the names. That, in turn, would avoid the need for a separate stat() call and, hopefully, speed things up considerably in some situations.
Most of the discussion had to do with how this new system call would be implemented. There is a real desire to avoid the creation of independent readdir() and readdirplus() implementations in each filesystem. So there needs to be a way to unify the internal implementation of the two system calls. Most likely that would be done by using only the readdirplus() function if a filesystem provides one; this callback would have a "no stat information needed" flag for the case when normal readdir() is being called.
The creation of this system call looks like an opportunity to leave some old mistakes behind. So, for example, it will not support seeking within a directory. There will also probably be a new dirent structure with 64-bit fields for most parameters. Beyond that, though, the shape of this new system call remains somewhat cloudy. Somebody clearly needs to post a patch.
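The shape of the call is indeed still undecided, but purely as an illustration of the general idea, a readdirplus()-style interface might look something like the sketch below; the structure layout and prototype are hypothetical and were not agreed on at the workshop.

/* Hypothetical illustration only: a directory entry that carries its
 * stat data along with the name, avoiding a separate stat() per entry. */
#include <sys/stat.h>

struct dirent_plus {
	unsigned long long d_ino;	/* 64-bit fields, as discussed */
	unsigned char      d_type;
	struct stat        d_stat;	/* metadata for this entry */
	char               d_name[256];
};

/*
 * Fill "buf" with up to "count" bytes of dirent_plus records from the
 * open directory "fd"; returns the number of bytes placed in the buffer.
 */
int readdirplus(int fd, struct dirent_plus *buf, unsigned int count);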
Conclusion
And there ends the workshop - at least, the part that your editor was able to attend. There were a number of storage-related sessions which, beyond doubt, covered interesting topics, but it was not possible to be in both rooms at the same time (though, with luck, your editor will soon receive another attendee's notes from those sessions). The consensus among the attendees was that it was a highly successful and worthwhile event; the effects should be seen to ripple through the kernel tree over the next year.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Emdebian Grip 1.0: the universal embedded operating system
In the shadow of the long-awaited release of Debian 5.0 "Lenny", another announcement by the related Emdebian project appeared: Emdebian Grip, a small Debian-compatible Emdebian installation. The Emdebian project provides more fine-grained control over package selection, size, dependencies and content to enable creation of small and efficient Debian packages for use on resource-limited embedded targets. Emdebian is a project in progress, but it already provides toolchains and two distributions.
One of these distributions, Emdebian Grip, maintains as much compatibility as possible with Debian: in essence, Emdebian Grip unpacks the .deb archives from Debian, but removes unneeded files such as manpages, info documents, documentation and unwanted translation files, then repacks the archive. So the binaries, maintainer scripts and dependencies of the original Debian package are untouched, but the overall size and the installation size of the package is reduced.
Emdebian Grip is primarily intended as a native build environment for building custom packages on an Emdebian installation. It's essentially a Debian distribution builder: the emgrip command (in the emdebian-grip package) processes a .deb from any of seven Debian architectures in the Debian archive, previously built by maintainers or buildd machines, and generates an Emdebian Grip package for this Debian architecture. Emdebian Grip 1.0 supports arm, armel, i386, amd64, powerpc, mips, mipsel and source, but the arm architecture will not be supported in Emdebian Grip 2.0, as this architecture has been deprecated after Debian 5.0 in favor of the new ARM EABI port, armel. When building a custom package for Emdebian Grip, you probably have to add a Debian mirror to the apt source to be able to install some -dev and -doc packages. Once the package is built, it can be converted to Grip with emgrip.
The fact that Emdebian Grip was able to support seven architectures in the first release is impressive, but it's just a consequence of the architecture-neutral generation process. Even more impressive is that Emdebian Grip is principally developed by one person: Neil Williams. The original idea for Grip came from Nick Bane and Wookey, during the Emdebian session at Extremadura in September 2008. Other members of the Emdebian project have also contributed ideas and added to the design requirements, but development is mainly done by Williams. So Emdebian Grip evolved from the first rough idea to the first stable release on seven architectures in six months, with just one developer for most of the code. Williams rightly calls this "a testament to the power of architecture-neutrality and of binary compatibility with standard Debian."
Installations of Emdebian Grip 1.0 can be done with standard Debian tools like debootstrap, debian-installer and even debian-live. The project recommends using the Debian Lenny installer in Automatic Installation mode. After setting up the network, the installer prompts for the preconfiguration file. When www.emdebian.org is entered, the Debian base system is migrated to the Emdebian Grip distribution during the installation process. The "Select and install software" section shows some added Grip tasks: "Grip XFCE desktop" (which installs a trimmed-down list of XFCE packages) and "Minimal Grip XFCE desktop". For comparison: the XFCE task in Debian brings in 354 new packages, needs 214MB of archives and uses 607MB of additional disk space. In contrast, the Grip XFCE task brings in 293 new packages, needs 82.5MB of archives and uses 255MB of additional disk space. The Minimal Grip XFCE task brings in 197 new packages, needs 57.3MB of archives and uses 171MB of additional disk space.
As the Emdebian Grip packages are not recompiled, they are completely binary-compatible with Debian, so one can even mix Emdebian and Debian packages. Or one can migrate an existing Debian system to Emdebian Grip simply by adding an apt source to /etc/apt/sources.list: for Lenny this is deb http://www.emdebian.org/grip lenny main. After the next apt-get upgrade the system is converted to Emdebian Grip. The user can still pin individual packages to Debian versions.
Last December, Williams made the first release of Emdebian Grip unstable available. When he converted the Debian Lenny installation on his Acer Aspire One to Emdebian Grip, 600 packages were updated (converted) and nearly 300MB of disk space was freed. He went on to proclaim Emdebian Grip as "Debian, only 25% smaller". Installation size is one of the main reasons why people would want to install Emdebian Grip.
Prominent software packages in this release are the Xfce 4.4.2 desktop environment, X.Org 7.3 (which autoconfigures itself with most hardware), Iceweasel (Firefox) 3.0.6, Linux kernel version 2.6.26, Python 2.5.2 and 2.4.6, Perl 5.10.0 and more than 1,000 other packages. Under the hood, it uses coreutils and glibc. Xfce is the default desktop environment. As Emdebian is meant to run on embedded devices, not all Debian packages are added to Emdebian, only the ones that make sense. For example, most -dev, -doc and -dbg packages are missing. The full GNOME or KDE suites are also probably not going to be available in Grip, although smaller parts can find their way into it.
How does Emdebian Grip squeeze its packages?
So how does Emdebian make its packages smaller? Removing manpages, info documents and documentation is simple, but what about localization? On a Debian system, /usr/share/locale consumes 250MB. The way Debian implements localization is not well suited to embedded systems: Debian ships one binary package including all translations for all locales. In contrast, Emdebian uses the TDeb system: one TDeb for each locale, for each source package. Emdebian Grip provides methods to install only the localization data needed by the packages actually installed and the locales actually configured. At this moment, there is still one catch with this system: a program that uses non-gettext translations might lose them when "gripped"; examples are OpenOffice.org, Mozilla, Qt translations and Java properties files. According to Williams, non-gettext translations are unlikely to get full support in Emdebian Grip 2.0, but things should be easier in Grip 3.0, based on Debian 7.
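The selection logic itself is simple; below is a toy sketch of that "only what is needed" calculation. The package names, locales and the "<source>-tdeb-<locale>" naming scheme are invented for the example.

```python
# Toy sketch of the TDeb selection: one translation package per locale per
# source package means only the needed combinations get installed.
def tdebs_needed(installed_sources, configured_locales):
    return sorted("%s-tdeb-%s" % (src, loc)
                  for src in installed_sources
                  for loc in configured_locales)

# Two packages and two locales need only four translation packages,
# rather than every translation of every package in the archive.
print(tdebs_needed({"gnumeric", "xfce4-panel"}, {"de", "nl"}))
```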
A related difference between Debian and Grip is the size of the cached package data: there is a noticeable delay when Debian loads the package data for the first time before an installation, and most of that delay is because Debian's Packages.gz file is so large. Williams explains what he has done in Emdebian Grip to solve this problem: "Grip not only reduces the number of packages listed in the Packages.gz file, but also enforces a limit on the length of individual long descriptions for each package, producing a much smaller Packages.gz file which makes for faster installations and is more suitable for devices where the available space after initial installation could be smaller than the size of the Packages.gz file from Debian."
Grip, Crush and the future
Together with Emdebian Grip 1.0, another Emdebian variant appeared: Emdebian Crush 1.0. This one goes a stage further: it makes an even smaller Debian version by cross-building packages to modify dependencies and reduce overall package sizes. For example, Perl is removed, and packages that require Perl are removed or reimplemented. These modified dependencies give large gains in installation size. In contrast to Grip, building, installing and maintaining a system running Crush 1.0 is a lot of work and requires detailed knowledge of Debian. Moreover, Crush is not a build environment: the emdebian-tools package used to build packages for Crush cannot run on Crush itself, because Crush does not include Perl. This means Emdebian Crush packages must be built on a Debian system. A minimal installation of Emdebian Crush 1.0 without X needs about 24 MB.
The future of Emdebian Grip certainly looks interesting. One aim is to prepare an almost complete filesystem without regard to the architecture of the final install; according to Williams, this will extend architecture-neutrality from package generation to package installation. By doing all the work in advance, the installed filesystem does not need to include the downloaded .deb packages, allowing systems to be installed within much tighter tolerances for available file space after installation. Williams describes the process as follows: "Multistrap allows a complete system to be designed and prepared on a fast amd64 computer using armel packages from more than one repository and including all packages needed by that particular install, e.g. both lenny and lenny-proposed updates, along with security and volatile if desired. This allows the final install to be completed without network access and without needing to install any additional packages. The slower armel embedded device then merely needs to have the almost completed filesystem unpacked and then allow a single command to complete the configuration of all installed packages."
As an extension to this, another idea Williams wants to explore further is something he calls "incremental installation": teaching apt that, if there are 100 packages to download and install, it should identify the ten that can be installed without needing any other dependencies, then download, install and clean up after those before moving on to the next ten or twenty that need nothing further from the rest of the stack. This way, the hundreds of megabytes of temporary space that apt commonly needs for a large upgrade can be eliminated, since the space is constantly reused instead of being allocated in one huge lump at the start and not freed until the very end. This would bring Debian one step closer to its goal of being a universal operating system.
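A sketch of the batching idea is shown below; the download, install and cleanup steps are placeholders, and a real apt implementation would also have to respect conflicts, pre-dependencies and configuration ordering.

```python
# Install packages in small batches whose dependencies are already
# satisfied, freeing the download area between batches instead of holding
# every .deb until the end of the upgrade.
def incremental_install(deps, batch_size=10):
    installed = set()
    remaining = set(deps)
    while remaining:
        ready = [pkg for pkg in remaining if deps[pkg] <= installed]
        if not ready:
            raise RuntimeError("dependency cycle among: %s" % sorted(remaining))
        batch = sorted(ready)[:batch_size]
        download(batch)      # fetch only this batch into the temporary area
        install(batch)
        cleanup(batch)       # free the archive space before the next batch
        installed.update(batch)
        remaining.difference_update(batch)

def download(pkgs): print("downloading", pkgs)
def install(pkgs): print("installing", pkgs)
def cleanup(pkgs): print("removing archives for", pkgs)

# A tiny made-up dependency graph: package -> set of dependencies.
incremental_install({
    "libc6": set(),
    "coreutils": {"libc6"},
    "xfce4-panel": {"libc6", "coreutils"},
}, batch_size=2)
```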
New Releases
Ubuntu Netbook Remix 9.04 Beta released
The beta release of the "Ubuntu Netbook Remix" - a version of the upcoming "Jaunty" release optimized for small systems - is now available. "This is the first release of UNR to be fully integrated into the Ubuntu family, fully up to date with the latest applications and hardware support."
Tin Hat 20090404 is released
Tin Hat 20090404 has been announced. Tin Hat is a fully featured Linux Desktop based on Hardened Gentoo which runs purely in RAM. It aims to be very secure, stable and fast. "This release addresses important updates from upstream Hardened Gentoo, including updates to hardened-sources-2.6.28-r7 and glibc-2.8_p20080602-r1. Approximately 130 other... packages were also updated. Password hashing was switched form MD5 to SHA512."
Omnia XP: 1.0 Release for testing (SourceForge)
Omnia XP 1.0 has been released for testing. Omnia hails from Brazil and includes support for English. It is a remastering of Debian Lenny 5.0 with support for 12 architectures and a graphical environment similar to MS Windows XP. This release features Broffice 2.4, iceweasel (Firefox) 3.06 and Emesene (MSN).
Mandriva Linux 2009 Spring RC2 release
The second Release Candidate is available for testing. "The RC2 release of Mandriva Linux 2009 Spring (code name Estephe) is now available. This RC2 version provides some updates on major desktop components of the distribution, including KDE 4.2.2, GNOME 2.26, X.org server 1.6, kernel 2.6.29." Release notes and errata are available here.
Distribution News
Debian GNU/Linux
New architectures
The Debian Project has added two new architectures to the Debian archive: kfreebsd-i386 AKA GNU/kFreeBSD i386 and kfreebsd-amd64 AKA GNU/kFreeBSD amd64. "The two new architectures (well, better named OS i think, as they use a different kernel) are available in unstable and experimental. We do start out empty, importing only what is needed to get a buildd running. For this reason you will not be able to directly use it immediately."
Preparing for GTK 3.0 and GNOME 3
The Debian development team is making preparations for GTK+ and GNOME 3.0. "although for various reasons (mostly ongoing transitions) we are quite late in packaging GNOME 2.26 in Debian, we should also look at the future. GTK+ 3.0 is planned around march 2010, and GNOME 3.0 a little while later. With them comes the final deprecation of many GNOME 2.X interfaces. It took a very long time (8 years!) to get rid of GTK+ 1.2 and the process is in its final stage now. I'd like to avoid this horrible mess for GTK+ 2.X and for the GNOME libraries that will stop being maintained upstream after the 3.0 release."
Fedora
Fedora Board Recap 2009-03-31
This recap of the March 31 meeting of the Fedora Advisory Board includes Discussion of Intrusion Announcement, Next steps in WIF (What is Fedora?) process, and Work of Fedora QA Team.
Fedora Classroom - April 2009
The April 2009 sessions of Fedora Classroom have been completed. There were five IRC sessions: Setting up a Virtual Routing Environment using Fedora and User Mode Linux - Balaji Gurudass; Introduction to busybox and QEMU on Fedora - Balaji Gurudass; Introduction to Netlink Sockets, What are they? - Balaji Gurudass; Building RPM packages - Christoph Wickert; and Fedora Networking Basics - Kevin Fenzi. IRC logs are available.
Distribution Newsletters
Arch Linux Newsletter
The Arch Linux Newsletter for April 2009 is out. "As always, we have an interview with an Arch Linux developer. This month our interview is with the maintainer of the kernel package for Arch Linux. Also, this month features an interview with the Arch Linux Games Team; the ones behind the Arch Linux games repository. Additionally, community member Chris Brannon cannot go without notice. His work with building an Arch Linux installation media for the blind is a much appreciated effort that many will enjoy." Also several other topics.
DistroWatch Weekly, Issue 297
The DistroWatch Weekly for April 6, 2009 is out. "One of the must-haves in the toolkit of any serious free software enthusiast is a decent partitioning tool. This week we take a look at the newly released Parted Magic 4.0, a live CD for managing hard drives. In the news, Intel hands control of Moblin, a distribution for netbooks and mobile devices over to the Linux Foundation, rumours about a possible purchase of Sun Microsystems by IBM spur speculations about the future of OpenSolaris, Debian announces support for kFreeBSD i386 and amd64 port, and Mark Shuttleworth talks about the upcoming release of Ubuntu 9.04. Also in the news, first hints about a possible major and more adventurous update of the GNOME desktop, version 3.0. Finally, we are pleased to announce that the recipient of the DistroWatch.com March 2009 donation is smxi, a project developing a variety of useful scripts for Debian and Debian-based distributions."
Fedora Weekly News #170
The Fedora Weekly News for the week ending April 5, 2009 is out. "In this week's issue, we're proud to include the Fedora Weekly webcomic by Nicu Buculei, who has been producing this regularly for some time. We think you will enjoy Nicu's art and humor. Other selected content includes: * Detailed coverage in the announcements and infrastructure sections on the August 2008 Fedora security intrusion, and updates on the upcoming FUDCon Berlin. * News from the Fedora Planet includes updates on the fourth grade math project for Sugar/OLPC, reviews of Songbird and Flock, amongst other birds of a feather." Plus Developments, Translations, April Fools, and more.
The Mint Newsletter - issue 80
The Mint Newsletter for April 2009 covers the approval of Mint 6 Felicia KDE CE and Fluxbox CE as stable releases, March monthly stats, and much more.
OpenSUSE Weekly News/66
This issue of the OpenSUSE Weekly News covers Google Summer of Code, Graphical Mode for YaST/Partitioning, KDE 4.2.2 is out, Mono 2.4 and MonoDevelop 2.0 released, The real antidote for Conficker, and much more.
Ubuntu Weekly Newsletter #136
The Ubuntu Weekly Newsletter for the week ending April 4, 2009 is out. "In this issue we cover: Ubuntu Netbook Remix 9.04 Beta released, Newly Approved LoCo Teams, Package Training Sessions, Hug Day: April 9th, Ubuntu Brainstorm: Call for Idea Reviewers, New MOTU, The Fourth Horseman, New Ubuntu Mirror: Colombia, Ubuntu Florida and Pennsylvania Jaunty Release Parties, Launchpad 2.2.3 released, Launchpad: Official Bug Tags, Checkbox 0.7.1 released, Ubuntu Podcast: Qimo, Ubuntu Podcast #24: Mark Shuttleworth Interview, Ubuntu-UK Podcast: The Return, Ubuntu Server Team Minutes: March 31st, and much, much more!"
Distribution reviews
First look: Fedora 11 beta shows promise (ars technica)
Ryan Paul takes a look at Fedora 11 beta. "Fedora 11, which is codenamed Leonidas, is scheduled for final release at the end of May. It will include several new features and noteworthy improvements, such as RPM 4.7, which will reduce the memory consumption of complex package activity, tighter integration of PackageKit, faster boot time with a target goal of 20 seconds, and reduced power consumption thanks to a major tuning effort." (Thanks to Rahul Sundaram).
Page editor: Rebecca Sobol
Development
SyncML: an introduction, its potential, its problems
Users of PIM (Personal Information Manager) software, such as Evolution, Kontact, or Chandler, tend to accumulate more information than just email. Typically, this is data such as notes, tasks, calendars and contacts, collectively known as "personal information". Keeping all of this information synchronized between the desktop, mobile devices and the web is difficult, but the SyncML standard may be able to help.
SyncML (Synchronization Markup Language) is a standard for synchronizing information. SyncML allows different kinds of devices (cell phones, portable music players, desktops, etc.) to synchronize various contact and scheduling information so that each device is kept up to date. It can also synchronize with a web service so that a forgotten phone number, for example, could be retrieved at an internet café.
SyncML is currently maintained by the OMA (Open Mobile Alliance) under the official name OMA-DS (Data Synchronization). The specification is available free of charge.
Usually there is a SyncML server, which provides a central master copy of a user's personal information. All of the user's SyncML clients will then synchronize against this central server. This central server communicates using the platform-independent SyncML standard. This avoids the combinatorial explosion of every device having to know how to talk to every other device in a device-specific custom format.
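The payoff of a single neutral representation is easy to see in miniature. The sketch below uses invented device formats and field names, and is only meant to show why N adapters beat N-squared converters.

```python
# Every device type needs only one adapter to and from the common
# representation used with the server, not a converter for every other
# kind of device.
class PhoneAdapter:
    def to_common(self, line):                    # "name;number" on the phone
        name, tel = line.split(";")
        return {"name": name, "tel": tel}

    def from_common(self, item):
        return "%s;%s" % (item["name"], item["tel"])

class DesktopAdapter:
    def to_common(self, record):                  # dict-based desktop PIM record
        return {"name": record["full_name"], "tel": record["phone"]}

    def from_common(self, item):
        return {"full_name": item["name"], "phone": item["tel"]}

# With N device types there are N adapters; direct device-to-device syncing
# would need on the order of N*(N-1) custom converters.
phone, desktop = PhoneAdapter(), DesktopAdapter()
common = phone.to_common("Alice Example;555-0100")
print(desktop.from_common(common))
```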
For selecting a SyncML server, the options are either to use a pre-existing public SyncML server, which is typically less effort, or to set up a SyncML server for private use. Two examples of popular public SyncML servers are ScheduleWorld and the MyFunambol beta. ScheduleWorld is based on Funambol, but it forked from the GPL-licensed version quite a while ago, with no code being contributed back. Since then, its author has invested a lot of work into improving it and, in the process, has rewritten the contact and calendar support from scratch. ScheduleWorld deserves praise for those improvements, but because the code changes are kept private it has to be considered a closed-source, non-distributable server.
Alternatively, for users who prefer to set up their own local SyncML server, there are open-source servers such as Funambol; see the Wikipedia article for a more complete list. Funambol does not seem to have been packaged by many distributions, and it is surprisingly large: 170 MB for a distribution-independent .bin file, which includes its own installation of Tomcat. Deploying Funambol as part of an existing Tomcat installation is not recommended and is unsupported.
There are also open-source SyncML connectors available for connecting Linux PIMs to SyncML servers. Examples include SyncEvolution and its user-friendly Genesis-Sync panel applet. For users wanting a quick HOWTO, this message on the Evolution mailing list outlines how to synchronize Evolution on Ubuntu with a Nokia N-series or E-series phone, using a public SyncML server.
The benefits of SyncML are that it has the potential to do for personal information what IMAP does for email; that is, make it live "in the cloud," and be remotely accessible, modifiable, and synchronized between a wide variety of devices. The idea of being able to view and modify personal information anywhere on multiple devices or on the web, and have it all synchronized together, with an update made on any one device replicated to all the others, is quite compelling.
A further strength of SyncML is that many cell phones, including all Nokia N-series and E-series phones, most Sony Ericsson phones, and most Motorola phones, have built-in SyncML clients - which means there is no need to install extra software on these phones to synchronize personal information. Add-on software is available for most other phones, including BlackBerry and Windows Mobile - see the ScheduleWorld wiki for example configuration information.
A final strength is that all of the SyncML clients can have a local cache of information, so that even when the Internet is not available users can still access and update their data. Two-way syncing then ensures that those updates will be propagated at the next synchronization.
Some of the weaknesses of SyncML are partially traceable back to its early adopters - namely, a consortium of various cell phone manufacturers. The protocol itself is data-format agnostic, so for all but the simplest uses, like verbatim copying, the client and server need to agree on a common format for items. Due to this history, that format is often vCard 2.1 and vCalendar 1.0; the more capable iCalendar 2.0 format used by all desktop PIMs is often unsupported or only partially supported.
A good SyncML server takes the capabilities and quirks of its clients into account when exchanging data with them. For example, a photo associated with a contact can be preserved when receiving an update of that contact from a client which cannot store photos. Less capable servers drop some information themselves because their internal data model is more limited than that of the clients they exchange data with. Client implementations can be poor as well, sometimes crashing when sent something unexpected.
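The kind of careful merge described above can be sketched in a few lines; the field names here are illustrative and not taken from any SyncML data format.

```python
# When a client that cannot store photos sends back a contact without one,
# keep the server's stored photo instead of overwriting the record wholesale.
CLIENT_CANNOT_STORE = {"PHOTO"}    # limitation advertised by this client

def merge_update(server_item, client_item, unsupported=CLIENT_CANNOT_STORE):
    merged = dict(client_item)
    for field in unsupported:
        if field in server_item and field not in client_item:
            merged[field] = server_item[field]   # preserve what the client dropped
    return merged

server = {"FN": "Alice Example", "TEL": "555-0100", "PHOTO": b"...jpeg bytes..."}
update = {"FN": "Alice Example", "TEL": "555-0199"}   # phone edited the number
print(merge_update(server, update))
```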
Further weaknesses are that most of the syncing interfaces seem to assume there is exactly one contacts folder, one calendar, one notes folder, and one tasks folder. This one-folder-only assumption makes sense on a cell phone, but it does not hold for a desktop PIM. For example, it is quite common to have multiple tasks folders and multiple calendars - so this assumption means that not everything is being replicated, which reduces the power of synchronization.
In addition, support for SyncML is not yet built into PIMs in the same way that IMAP comes "as standard" in email clients. However, even having it integrated into a single PIM would be very useful for people who want the same data seamlessly synchronized between a laptop and a desktop, or a work machine and a home machine, and who run the same PIM at both ends. Most desirable, though, would be complete synchronization between different PIMs.
A final weakness is that most SyncML user interfaces require the user to manually initiate the two-way sync with the server. It would be easier for the user if there were a set-and-forget option, where the SyncML client would sync only when it needed to: when the server push-notified the client of pending changes it had not yet received, or when the client uploaded, in the background, changes made by the user on that device as they were made.
In summary, it is currently possible to synchronize personal information between a mobile phone and PIM software using open-source software and SyncML, and it works quite well, albeit with some limitations. However, SyncML, or a successor to it, has the potential to be much more powerful; it could be the next logical step beyond IMAP by providing seamless, automatic synchronization of all personal information between multiple PIM clients. That would enable users to easily access and update all of their personal information wherever they were, irrespective of whether they had Internet access. There is certainly hope for further developments in this area.
The author gratefully thanks Patrick Ohly, the author of SyncEvolution, for his invaluable assistance in writing this article.
System Applications
Database Software
Firebird 2.1.2 released
Version 2.1.2 of the Firebird DBMS is available; see the release notes for more information.
MySQL Community Server 5.1.33 has been released
Version 5.1.33 of MySQL Community Server has been announced. "MySQL 5.1.33 is recommended for use on production systems. Users running AIX 5.2 should be aware that this platform will be EOL'd from 30th April 2009, therefore 5.1.33 is likely to be the penultimate 5.1 release for AIX 5.2."
PostgreSQL Weekly News
The April 5, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
sqlparse 0.1.0 released
Version 0.1.0 of sqlparse has been announced. "sqlparse is a non-validating SQL parser module for Python. The module provides functions for splitting, formatting and parsing SQL statements. This is the first public release of this module. It's still buggy, but it works at least for the most common tasks when dealing with SQL statements."
Filesystem Utilities
initramfs-tools stable release 0.93.2 released
Stable release 0.93.2 of initramfs-tools has been released. "initramfs-tools is a generic initramfs generation tool. latest stable adds fb fixes and a behaviour change for update-initramfs -u to work against newest initramfs instead of following maybe outdated /initrd.img symlink."
Interoperability
Two new Samba releases
Two new releases of Samba have been announced: Samba 3.2.10 Maintenance Release includes bug fixes and Samba 3.3.3 also includes bug fixes.
Networking Tools
iptables 1.4.3.2 announced
Version 1.4.3.2 of iptables has been announced. "The netfilter coreteam presents: iptables version 1.4.3.2 the iptables release for the 2.6.29 kernel. This version includes accumulated bugfixes for the previous release from Jan Engelhardt and Peter Volkov."
Web Site Development
nginx 0.6.36 released
Version 0.6.36 of the nginx HTTP server and mail proxy server has been announced. See the CHANGES document for release details.
OpenExpert: 0.4.3 Release (SourceForge)
Version 0.4.3 of OpenExpert has been announced. "OpenExpert. Web based and Easy to Use Expert System. This release includes the addition of a Client ID (or name) when printing the results of an interview. Also a number of small bug fixes are included in the release. It is recommended that all OpenExpert users upgrade to this version."
Miscellaneous
rsyslog: 3.20.5 released (SourceForge)
Stable version 3.20.5 of rsyslog has been announced. "A syslogd supporting on-demand disk buffering, TCP, writing to databases, configurable output formats, high-precision timestamps, filtering on any syslog message part, on-the-wire message compression, and the ability to convert text files to syslog. rsyslog 3.20.5, a member of the v3-stable branch, has been released today. This is a bug-fixing released that also comes with slightly enhanced documentation. Most importantly, a bug in RainerScript number conversion and two potential segfaults have been fixed."
Desktop Applications
Audio Applications
bs2b foobar2000 plugin: 3.0.0 released. (SourceForge)
Version 3.0.0 of bs2b foobar2000 plugin has been announced. "The Bauer stereophonic-to-binaural DSP (bs2b) library and plugins is designed to improve headphone listening of stereo audio records. Recommended for headphone prolonged listening to disable superstereo fatigue without essential distortions. bs2b foobar2000 plugin 3.0.0 released. * libbs2b 3.0.0 with much more settings are used."
Gnac: 0.2.0 is out (SourceForge)
Version 0.2.0 of Gnac has been announced. "Gnac (GNome Audio Converter) is an easy to use audio conversion program. It is designed to be useful but pain-free for the end user. It provides easy audio files conversion between all GStreamer supported audio formats. The Gnac Team is proud to announce a new version of Gnac! This version add many new features and fixes."
libfishsound 0.9.2 released
Version 0.9.2 of libfishsound has been announced; it includes security and bug fixes. "libfishsound provides a simple programming interface for decoding and encoding audio data using Xiph.org codecs (FLAC, Speex and Vorbis)."
New linux audio plugins announced
A number of audio effect plugins for Linux and JACK are available at http://www.linuxdsp.co.uk.
Practical Music Search: 0.40.5 released (SourceForge)
Version 0.40.5 of Practical Music Search has been announced. "Practical Music Search is a ncurses-based MPD (Music Player Daemon) client with a broad set of features and configuration options. The client adds much functionality to MPD and is aimed primarily towards power users. This is a relatively new piece of software, but it's getting fairly stable."
Data Visualization
rrdtool 1.3.7 released
Version 1.3.7 of rrdtool, a plotting package for time-series data, has been announced. "Users of versions 1.3.0 and 1.3.1 should upgrade since these releases contain a serious data corruption bug triggerd by running rrdtool update with multiple values in one go."
scikits.timeseries 0.91.0 released
Version 0.91.0 of scikits.timeseries, the initial release, has been announced. "The scikits.timeseries module provides classes and functions for manipulating, reporting, and plotting time series of various frequencies. The focus is on convenient data access and manipulation while leveraging the existing mathematical functionality in numpy and scipy."
ZGRViewer: 0.8.2 released (SourceForge)
Version 0.8.2 of ZGRViewer has been announced. "ZVTM is a Zoomable (2.5D) User Interface toolkit implemented in Java, designed to ease the task of creating complex visual editors in which large amounts of objects have to be displayed, or which contain complex geometrical shapes that need to be animate. ZGRViewer is a 2.5D graph visualizer implemented in Java and based upon the Zoomable Visual Transformation Machine. ZGRViewer can now be run both as a standalone application or as an applet."
Desktop Environments
A GNOME 3.0 plan
Vincent Untz has posted a lengthy proposal for a plan that would see a GNOME 3.0 release happening around the same time as the GNOME 2.30 release - about one year from now, in other words. The core of 3.0 would be the GNOME Shell and GNOME Zeitgeist projects, but there is more to it than that. "There's one obvious question related to those potential changes: what will happen to the old way of doing things? For example, will we still make the GNOME Panel available if, for some reason, people are not immediately happy with GNOME Shell? There's no obvious answer to this, and this will have to be discussed."
Planning for GNOME 3.0 (GnomeDesktop)
GnomeDesktop looks at GNOME 3.0 plans. "Because of lack of excitement. Because of lack of vision. Slowly, a plan started to emerge. It evolved, changed, was trimmed a bit, made more solid. We started discussing with a few more people, got more feedback. And then, at GUADEC, the Release Team proposed an initial plan to the community that would lead the project to GNOME 3.0. Quite some time passed; actually, too much time passed because too many people were busy with other things. But it's never too late to do the right thing, so let's really be serious about GNOME 3.0 now!"
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Glade 3.6.1 (bug fixes and translation work)
- GNOME Nettool 2.26.1 (bug fixes and translation work)
- GParted 0.4.4 (new features and bug fix)
- GtkImageView 1.6.4 (unspecified)
- Hitori 0.2.2 (bug fixes, code cleanup and translation work)
- moserial 2.26.0 (new features)
- Nemiver 0.6.6 (new features and bug fixes)
- PyGtkImageView 1.2.0 (unspecified)
- Vala 0.7.0 (build changes)
KDE Software Announcements
The following new KDE software has been announced this week:
- 2ManDVD 0.7.5 (bug fixes and translation work)
- 2ManDVD 0.7.6 (new features, bug fixes and translation work)
- 2ManDVD - German language files 0.7.5 (translation work)
- Bilbo Blogger 0.9 (unspecified)
- cb2Bib 1.2.2 (new feature and bug fix)
- Image Commander 1.0 (unspecified)
- Image Commander 1.1 (new features and bug fixes)
- K Menu Gnome 0.9.3 (new features and code cleanup)
- KPorts 0.8.1 (new features and bug fixes)
- KTorrent 3.2.1 (bug fixes)
- PeaZip 2.6.beta (new features)
- QtiPlot 0.9.7.6 (new features and bug fixes)
- Qtractor 0.4.1 (new features and bug fixes)
- Qaduzer 0.5.1 (unspecified)
- VariCAD 2009 1.03 (new features)
- Zhu3D 4.2.0 (bug fixes and code cleanup)
KDE 4.2.2 released
Version 4.2.2 of KDE has been announced. "The KDE Community today announced the immediate availability of "Cano", (a.k.a KDE 4.2.2), another bugfix and maintenance update for the latest generation of the most advanced and powerful free desktop. Cano is a monthly update to KDE 4.2." See the change log for more information.
Xorg Software Announcements
The following new Xorg software has been announced this week:
- libdrm 2.4.6 (new features, bug fixes and code cleanup)
- libX11 1.2.1 (new features, bug fixes and code cleanup)
- xf86-video-ati 6.12.2 (new features, bug fixes and code cleanup)
- xf86-video-nv 2.1.13 (new features, bug fixes and documentation work)
- xinput 1.4.1 (bug fixes)
- xpyb 1.1 (new features and code cleanup)
Electronics
gEDA/gaf 1.5.2-20090328 released
Unstable/development snapshot 1.5.2-20090328 of gEDA/gaf, a collection of electronic design tools, has been announced. "NOTE: This unstable snapshot should _not_ be packaged into distributions. This request is being reviewed and might change, stay tuned..."
Financial Applications
SQL-Ledger 2.8.24 released
Version 2.8.24 of SQL-Ledger, a web-based double entry accounting/ERP system, has been announced. Changes include: "1. Version 2.8.24 2. added reminders; keep track of level 3. added customernumber variable for generating document control numbers 4. additional option to calculate check digits according to modulo 10 and 11"
Graphics
PyOpenGL Release 3.0.0
Version 3.0.0 of PyOpenGL has been announced. "PyOpenGL is the traditional OpenGL binding for the Python language (Python 2.x series). This release is the first major release of the package in more than 4 years. PyOpenGL 3.x is a complete rewrite of the PyOpenGL project which attempts to retain compatibility with the original PyOpenGL 2.x API while providing support for many more data-formats and extensions than were wrapped by the previous code-base."
Mapping Software
eWorld: 0.8.1 released (SourceForge)
Version 0.8.1 of eWorld has been announced. "eWorld is a framework to import mapping data from providers, such as OpenStreetMap.org (OSM), visualize it, edit and enrich it with events or annotational attributes and pass it to traffic simulators, such as SUMO or VanetMobiSim. we are proud to announce the release of eWorld 0.8.1. This release contains one major new feature as well as fixes for many of the bugs reported by all of you. The new feature allows visualizing statistical data directly on the corresponding network map by altering street colors and widths. We are very curious to find out what you think of it."
Medical Applications
camba: 2.3.0 released (SourceForge)
Version 2.3.0 of camba has been announced. "CamBA is a Linux package for statistical analysis, by script/GUI, of neuroimaging data (fMRI/sMRI), developed at the Brain Mapping Unit, University of Cambridge. Non-parametric permutation-based statistics. Input images: 4D NiFTI files, output: HTML/PNG. After nearly one year, the latest version of CamBA, version 2.3.0 is released. For the majority of users this is simply a fine-tuning release of CamBA. However, for those more adventurous there are new programs, such as RETROICOR that are implemented for experiment/testing."
Multimedia
Elisa Media Center 0.5.35 released
Version 0.5.35 of Elisa Media Center has been announced. "New features include the ability to manually re-organize TV shows, movies and unclassified videos, and consistent font sizes throughout the UI. This release is a "heavy" release, meaning a windows installer is available for download on our website and ubuntu packages (for hardy and intrepid) in our PPA."
Music Applications
PianoBooster 0.6.2 released
Version 0.6.2 of PianoBooster has been announced. "The most interesting and innovative thing in this release are timing markers which drawn in real-time as you play on the piano keyboard. They appear as white crosses that are drawn over each note and they show if you are playing ahead or behind the beat."
TuxGuitar-1.1 has been released (SourceForge)
Version 1.1 of TuxGuitar has been announced. "TuxGuitar is a multitrack guitar tablature editor and player written in Java-SWT, It can open GuitarPro, PowerTab and TablEdit files. This release does not contain many visible changes. It's actually a code structure rewrite, changes that are need to face the challenge of 2.0".
Virtual MIDI Piano Keyboard 0.2.4
Version 0.2.4 of Virtual MIDI Piano Keyboard has been announced. "This is a maintenance release, mainly for cleanup and a few new features. Thanks to Serdar Soytetir for the Turkish translation. Virtual MIDI Piano Keyboard is a MIDI event generator and receiver."
Office Applications
pyspread 0.0.11 released
Version 0.0.11 of pyspread has been announced; it adds new features and bug fixes. "Pyspread is a cross-platform spreadsheet application that is based on and written in the programming language Python. Pyspread provides an arbitrary size, three-dimensional grid for spreadsheet calculations. Each grid cell accepts a Python expression. Therefore, no spreadsheet specific language has to be learned. Python modules are usable from the spreadsheet table without external scripts."
Office Suites
KOffice 2.0 rc 1 released (KDEDot)
KDEDot has announced the release of KOffice 2.0 rc 1. "Today, the KOffice team has released the first, and hopefully the only, release candidate for KOffice 2.0, bringing more than three years of work to a temporary conclusion. Compared to Beta 7, this release candidate brings a multitude of bug fixes and not a single new feature, as it should be!"
Science
PXL: 2.1.2 released (SourceForge)
Version 2.1.2 of PXL has been announced. "The Physics eXtension Library (PXL) is a C++ toolkit for fourvector analysis and hypothesis evolution in high energy physics data analysis. New tag. Includes major restructuring from 2.0 to 2.1, and some fixes compared to versions 2.1.0 and 2.1.1."
Miscellaneous
iFolder Project releases version 3.7.2
Novell has announced version 3.7.2 of iFolder. "The iFolder project, a Novell-sponsored open source initiative that simplifies synchronizing files across multiple systems and enables users to securely access and share files with other users, today announced its first open source release since 2007. Available immediately, users and developers can download iFolder 3.7.2 client and server packages and source code. The latest release adds several features, including support for new platforms, additional security options, improved handling of file conflicts, and capabilities for merging files. In addition to an updated project Website, Novell has put in place a community development plan to ensure that iFolder becomes and remains a vital open source project."
Languages and Tools
Caml
Caml Weekly News
The April 7, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Java
IcedTea7 1.9.1 released
Version 1.9.1 of IcedTea7, a harness for building source code from OpenJDK6, has been announced. "We are pleased to announce a new minor release of IcedTea[7], containing a number of security updates".
Python
Numpy 1.3.0 released
Version 1.3.0 of Numpy has been announced. "This minor includes numerous bug fixes, official python 2.6 support, and several new features such as generalized ufuncs."
Pyro 3.9 released
Version 3.9 of Pyro has been announced. "Pyro is a an advanced and powerful Distributed Object Technology system written entirely in Python, that is designed to be very easy to use. Highlights of this release are: - improved compatibility with Jython, - fixed a deadlock bug in the name server proxy, - fixed mobile code problem with dependent modules, - manual improvements - script tool improvements"
Python 2.6.2 candidate 1 released
Version 2.6.2 candidate 1 of Python has been announced. "This release contains dozens of bug fixes since Python 2.6.1. Please see the NEWS file for a detailed list of changes. Barring unforeseen problems, Python 2.6.2 final will be released within a few days."
Python-URL! - weekly Python news and links
The April 3, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Python-URL! - weekly Python news and links
The April 8, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Tcl/Tk
Tcl-URL! - weekly Tcl news and links
The April 1, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
Tcl-URL! - weekly Tcl news and links
The April 8, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
IDEs
eric4 4.3.2 announced
Version 4.3.2 of eric4 is out with bug fixes. "Eric4 is a Python IDE written using PyQt4 and QScintilla2. It has integrated project management capabilities, it gives you an unlimited number of editors, an integrated Python shell, an integrated debugger, integrated interfaces to Subversion and CVS, an integrated refactoring browser, integrated unittest and much more."
YARI: v0.7.2 released (SourceForge)
Version 0.7.2 of YARI has been announced. "YARI is a comprehensive tool suite to debug, spy, spider, inspect and navigate Eclipse based application GUIs (Workbench or RCP). YARI got an upgraded version of the expression evaluator for ISources constants. Paintings of the workbench widgets can now be undone using the "undo paint" command."
Test Suites
STAF: V3.3.3 and STAX V3.3.6 are now available (SourceForge)
Version 3.3.3 of STAF and version 3.3.6 of STAX have been announced. "The Software Testing Automation Framework (STAF) is a framework designed to improve the level of reuse and automation in test cases and test environments. The goal of STAF is to provide a complete end-to-end automation solution for testers. These new releases contain new features, bug fixes, and documentation updates."
Version Control
GIT 1.6.2.2 released
Version 1.6.2.2 of the GIT distributed version control system has been announced. Changes include: "Mostly documentation updates with a few bugfixes."
Miscellaneous
GNU patch: upcoming stable release
An alpha release of GNU patch 2.5.9 is available. "The code should be feature complete for the next stable release with only a few minor bugfixes left in the queue. This is your chance to report more bugs that still need to be addressed. Please expect the next stable release to happen in about a month's time."
SimPy 2.0.1 released
Version 2.0.1 of SimPy has been announced; it includes bug fixes. "SimPy is a process-based discrete-event simulation language based on standard Python and released under the GNU LGPL. It provides the modeller with components of a simulation model. These include processes, for active components like customers, messages, and vehicles, and resources, for passive components that form limited capacity congestion points like servers,"
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Android and Open Source (ABN)
Here is a serious criticism of the Android project posted to the Android Blogging Network. "In a successful hybrid open/closed project, the open source releases drive the closed source forks. Witness Apache httpd (open) and IBM httpd (closed). They have the same codebase, and - shockingly - its not the one in IBMs secret dungeons that generates new releases. IBM does do their own set of releases, but they work just like everyone else - by taking changes from the open tree and integrating their closed code. Google works with Android in exactly the opposite way - development is done mainly in secret, and occasionally someone takes the time to audit it and dump a huge, unmanageable set of changes into the open source tree."
Trade Shows and Conferences
HIMSS Day 1: Medsphere
Fred Trotter presents a report from HIMSS day 1. "This is the first article I am writing from HIMSS09. I am here on a press pass provided by LinuxMedNews. I am focusing on FOSS here at HIMSS. I am, by tribal law, required to make a certain amount of Star Wars analogies when blogging and I recently categorized HIMSS as the empire with regards to health it. Of course the FOSS movement in health IT would be the rebel alliance in my analogy."
PIM Hackers Boost Akonadi Into The Future (KDEDot)
KDE.News covers a recent Akonadi developer meeting. "This weekend the A-Team (Akonadi, not Amarok) gathered in KDAB's office in the heart of Berlin to push the Akonadi PIM storage database to the next level. On Friday afternoon, after everybody arrived, the meeting started with a series of presentations to get everybody on the same page with respect to progress in various parts of Akonadi."
Companies
HP Interviews Android for Netbook Position (TechNewsWorld)
TechNewsWorld reports that HP is considering the use of the Android platform for its Netbooks. "HP has confirmed it is considering Google's Android operating system for use in upcoming netbook computers. However, the company has not set a time line for deciding whether to offer Android exclusively or as one of several OS options for its products, if at all, according to Marlene Somsak, an HP spokesperson. "We are studying Android. We want to assess its capabilities," she told LinuxInsider. If HP decides in favor of using Android, it could well become the first major PC vendor to use Google's OS, currently deployed in smartphones."
IBM Lets Sun Set (Linux Journal)
Linux Journal's Justin Ryan looks into the latest news involving IBM's plans to acquire Sun Microsystems. "Reports surfaced late this evening that computing giant IBM which has been in talks for some time to buy Sun Microsystems has pulled its $7 billion offer to buy the struggling company. According to reports, IBM withdrew the offer after Sun's Board of Directors made "onerous" requests following IBM's decision to lower its offer for the firm. IBM initially offered $9.55 per share, but dropped that offer to $9.40 less than a $1.00 premium on Sun's current stock price due in part, it says, to the discovery that far more senior employees than originally expected are covered by "change of control" contracts."
Red Hat Opts for Pragmatism Over Glitz (NY Times)
Ashlee Vance takes a look at Red Hat and the "consumer rat race". "Behold Jim Whitehurst's iron-willed pragmatism. Mr. Whitehurst, the chief executive at Red Hat, sees the Linux revolution taking place with mobile devices. In 2009, Linux will land on more cellphones and mainstream computers than ever before — thanks to the efforts of companies like Google with Android, Canonical with Ubuntu and Intel with Moblin. As this onslaught of devices threatens to make Linux a household name, Red Hat, the most prominent Linux brand on the planet, will keep its iconic logo locked away in the data center, running on servers."
Samsung's Android Phone Plans (Forbes)
Forbes reports that Samsung has plans to release several Android devices this year. "Despite a fanatical amount of interest from the tech media and early adopters, Samsung has mostly kept quiet about its plans to develop phones using Google's mobile platform, Android. But at the CTIA Wireless trade show, an executive shared with Forbes some details about the company's Android strategy."
Linux Adoption
Morgan Stanley Growing Its Use of Linux (Wall Street and Technology)
Wall Street & Technology looks at the use of Linux by the financial firm Morgan Stanley. "Anthony Golia, executive director of enterprise computing at Morgan Stanley, told attendees at the High Performance Linux on Wall Street show this morning that his firm has been using Linux in a big way since 2001. "We use it because it performs well on inexpensive, commodity hardware," Golia said. "That continues to be true and that continues to be a reason we use it." Golia said Morgan Stanley likes the open source Linux operating system, for one thing, because whenever a bug emerges, "there's a large and diverse group of minds looking at how to fix it.""
Light and Cheap, Netbooks Are Poised to Reshape PC Industry (New York Times)
The New York Times takes a look at the netbook revolution. "Netbook makers have turned to Linux, an open-source operating system that costs $3 instead of the $25 that Microsoft typically charges for Windows XP. They are also exploring the possibility of using the Android operating system from Google, originally designed for cellphones. (Companies like Acer, Dell and Hewlett-Packard already sell some Atom-based netbooks with Linux.)" (Thanks to Mark Tall)
Legal
TomTom Settlement Aftermath: Get the FAT Out (Groklaw)
Groklaw recommends a FAT-free diet to avoid Microsoft patent liability issues. "The Linux Foundation's Jim Zemlin got the same message from the TomTom story that I did: just get rid of Microsoft's FAT filesystem: "The technology at the heart of this settlement is the FAT filesystem. As acknowledged by Microsoft in the press release, this file system is easily replaced with multiple technology alternatives. The Linux Foundation is here to assist interested parties in the technical coordination of removing the FAT filesystem from products that make use of it today." OK. Sounds like a plan. There clearly is no "new" Microsoft, and they have evidenced now a lack of interest in any real interoperability with FOSS."
Interviews
Hard Times May Boost Linux in Financial Services (HPCwire)
HPCwire talks with Inna Kuznetsova, director of IBM's Linux Strategy at the High Performance Linux on Wall Street conference in New York. "Kuznetsova: Linux has unique attributes that help to improve savings. You cannot only reduce the costs with often lower rates but also eliminate CALs to avoid upgrade penalties. Paying for a subscription instead of a license provides for a higher degree of flexibility should the customer decide to reduce resources, as often happens during an economic downturn. Standardizing on Linux reduces the number of skilled resources needed to manage multiple environments -- and at the same time, a customer can select the best hardware platform for a particular workload. Also, during mergers, Linux, because it runs on the broadest set of hardware platforms, often becomes the "common denominator," providing for a streamlined integration."
Shuttleworth: Windows 7 an Opportunity for Linux (internetnews.com)
internetnews.com interviews Ubuntu's Mark Shuttleworth. "Microsoft might be betting big on Windows 7, the next version of its flagship operating system, but to Ubuntu Linux founder Mark Shuttleworth, the upcoming release is really an opportunity for Linux to shine. Granted, Linux on the desktop has not made as much of a dent against Windows as it has in the datacenter. But Shuttleworth, who is also CEO of Ubuntu's commercial backer Canonical, figures the desktop itself and the applications that people are using are changing in ways that make the coming desktop battle different than it has ever been before."
Resources
20 of the Best Free Linux Books (LiNUXLiNKS.com)
LiNUXLiNKS.com has posted a list of their favorite free Linux books. "Individuals wanting to learn about the Linux operating system have a large selection of books to choose from. There are many thousands of informative Linux books which are in-print and available to download or buy at reasonable cost. However, as many users are attracted to Linux for the very reason that it is available under a freely distributable license, some will also want this to extend to the documentation they read. The focus of this article is to select some of the finest Linux books which are available to download for free."
Provide Robust Clustered Storage with Linux and GFS (EnterpriseNetworkingPlanet)
Charlie Schluting explains GFS configuration in an EnterpriseNetworkingPlanet article. "Load balancing is difficult; often we need to share file systems via NFS or other mechanisms to provide a central location for the data. While you may be protected against a Web server node failure, you are still sharing fate with the central storage node. Using GFS, the free clustered file system in Linux, you can create a truly robust cluster that does not depend on other servers. In this article, we show you how to properly configure GFS."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
EFF: Broad Coalition Urges Obama to Diversify IP Appointments
The EFF is encouraging President Obama to diversify intellectual property policy appointments beyond those with strictly commercial interests. "The Electronic Frontier Foundation (EFF) has joined a broad coalition of public interest groups and trade associations calling for President Obama to diversify future appointments to intellectual property policy positions and create new offices devoted to promoting innovation and free expression."
OpenMoko GTA03 cancelled
Those who have been waiting for the next-generation OpenMoko phone (called "GTA03" or "3D7K") will be disappointed; OpenMoko has come to the conclusion that it will not be able to successfully complete the project. Instead, the company will work, for now, on making the FreeRunner phone actually work and on a mysterious "Project B". For more information, see this post by OpenMoko marketing chief Steve Mosher. "As it was defined, it is dead. So how do we get to a new GTA03? Two requirements: continue to improve GTA02; deliver on project B. What is GTA03 and when do we get there? There are a number of independent efforts out there that are pitching me ideas for GTA03. I talked to sean a bit about this and I'd like to try to open up more of the design process and the marketing process to the community." See also: the slides from Sean Moss-Pultz's ESC talk [PDF].
Commercial announcements
Fixstars launches NVIDIA CUDA GPU software services
Fixstars has announced the launch of a NVIDIA CUDA GPU software services program. "Fixstars, as a multi-core solution company, is expanding its current application development, optimization service, and products to global customers exploiting the significant advantage from parallel computing with NVIDIA GPUs."
New Books
Agile Web Development with Rails--New from Pragmatic Bookshelf
Pragmatic Bookshelf has published the book Agile Web Development with Rails by Sam Ruby, Dave Thomas and David Heinemeier Hansson.
Cloud Application Architectures - New from O'Reilly
O'Reilly has published the book Cloud Application Architectures by George Reese.
Masterminds of Programming - New from O'Reilly
O'Reilly has published the book Masterminds of Programming by Federico Biancuzzi and Shane Warden.
Resources
Surveys
PyCon 2009 Survey waiting for you
The PyCon 2009 survey is open through April to gather feedback from PyCon attendees.
Education and Certification
Python Classes in Chicago, May 11-15
David Beazley has announced two Chicago Python classes. "Just a friendly reminder that this is the last week to save $100 on the registration for the two Python courses I'm running in May in downtown Chicago: Introduction to Python, May 11-13, 2009; Python Concurrency Workshop, May 14-15, 2009"
Calls for Presentations
LinuxCon CFP submission deadline reminder - April 15th
A call for papers has gone out for LinuxCon; submissions are due by April 15. "LinuxCon is taking place September 21-23, 2009 in Portland, OR and is co-located with the Linux Plumbers Conference. LinuxCon will provide an unmatched collaboration and education space covering all matters Linux, and including everyone in the Linux community including developers, end users, sys admins, community and more."
Call for participation: LIWOLI09
A call for participation has gone out for Liwoli 2009, the hacklab for art and open source. "Liwoli 2009 is a three day long Hacklab and an open invitation to all who would like to participate in an active process of learning, producing and sharing around the areas of Free/Libre Open Source Software and Art. FLOSS developers, artists and programmers such as the collective GOTO10 or activists from HAIP (Hack Act Interact Progress) and many others form the basis for the event and share their knowledge in the form of workshops, hacklabs, presentations, installations and performances." The event takes place in Linz, Austria on April 23-25.
Upcoming Events
EUSecWest 2009
EUSecWest 2009 has been announced. "The third annual EUSecWest applied technical security conference - where the eminent figures in the international security industry will get together share best practices and technology - will be held in downtown London at the Sound Club in Leicester Square on May 27/28, 2009. The most significant new discoveries about computer network hack attacks and defenses, commercial security solutions, and pragmatic real world security experience will be presented in a series of informative tutorials."
LayerOne 2009 - registration open
Registration is open for LayerOne 2009. "Anaheim, CA - The LayerOne computer security conference is pleased to announce that we have released our first round of speakers in addition to opening pre-registration for the general public. LayerOne is currently in its 6th year of operation and this year is shaping up to be one of our best events to date. This year's LayerOne event will be held over Memorial Day weekend, May 23-24 2009, at the newly renovated Anaheim Marriott."
NLUUG spring conference 2009 registration opened
Registration is open for the NLUUG spring conference 2009, the event takes place on May 7 in Ede, The Netherlands. "A petabyte of storage weighs about as much as a small car, but a large physics experiment can fill that up in less than a week. The modern rate of data production and amount of data storage --and crucially also data search and retrieval-- have pushed the limits of computer storage and the traditional file system further and further back. The NLUUG Spring Conference 2009 focuses on storage and the means to organise it: file systems, physical storage, connections and search."
O'Reilly Open Source Convention Reveals Program and Opens Registration
Registration has opened for the 11th OSCON, scheduled for July 20-24, 2009 in the new location of the San Jose McEnery Convention Center in San Jose, California. The early registration period is open until June 2.
PyCon Italy 2009: early bird deadline is April 13
Early-bird registration closes soon for PyCon Tre. "The early-bird registration deadline for PyCon Tre (PyCon Italy 2009) is April 13th, just a few days from now. The conference will take place in Florence from May 8th till May 10th 2009, and features guests like Guido Van Rossum, Alex Martelli, Raymond Hettinger, Fredrik Lundh and Antonio Cangiano."
Call for Venue for YAPC::Europe::2010 (use Perl)
use Perl has published a Call for Venue for YAPC::Europe::2010. BooK writes: "While YAPC::Europe 2009 preparations are well underway in Lisbon, it is time for the YAPC::Europe Foundation (YEF) to look for suitable sites for the 2010 conference. Any dedicated group interested in hosting YAPC::Europe 2010 should send a brief statement of intent to venue@yapceurope.org. This should be followed by a complete application. The deadline for applications is June 30, 2009."
Events: April 16, 2009 to June 15, 2009
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| April 16-17 | Nordic Perl Workshop 2009 | Oslo, Norway |
| April 16-19 | Linux Audio Conference 2009 | Parma, Italy |
| April 16-18 | Linuxwochen Austria - Wien | Wien, Austria |
| April 20-24 | samba eXPerience 2009 | Göttingen, Germany |
| April 20-23 | MySQL Conference and Expo | Santa Clara, CA, USA |
| April 20-24 | Perl Bootcamp at the Big Nerd Ranch | Atlanta, GA, USA |
| April 20-24 | Cloud Slam '09 | Online |
| April 22-25 | ACCU 2009 | Oxford, United Kingdom |
| April 23-26 | Liwoli 2009 | Linz, Austria |
| April 23 | Linuxwochen Austria - Linz | Linz, Austria |
| April 23-24 | European Licensing and Legal Workshop for Free Software | Amsterdam, The Netherlands |
| April 25-May 1 | Ruby & Ruby on Rails Bootcamp | Atlanta, Georgia, USA |
| April 25-26 | LinuxFest Northwest 2009 10th Anniversary | Bellingham, Washington, USA |
| April 25 | Linuxwochen Austria - Graz | Graz, Austria |
| April 25 | Festival Latinoamericano instalación de Software libre | All Latin America |
| April 25 | Grazer Linux Tage 2009 | Graz, Austria |
| April 27 | OSDM 2009 | Bangkok, Thailand |
| May 4-8 | JavaScript/Ajax Bootcamp at the Big Nerd Ranch | Atlanta, Georgia, USA |
| May 4-7 | RailsConf 2009 | Las Vegas, NV, USA |
| May 4-6 | EuroDjangoCon 2009 | Prague, Czech Republic |
| May 4-6 | SYSTOR 2009 - The Israeli Experimental Systems Conference | Haifa, Israel |
| May 5 | Linuxwochen Austria - Salzburg | Salzburg, Austria |
| May 6-9 | Libre Graphics Meeting 2009 | Montreal, Quebec, Canada |
| May 6-8 | Embedded Linux training | Maynard, USA |
| May 7 | NLUUG spring conference | Ede, The Netherlands |
| May 8-10 | PyCon Italy 2009 | Florence, Italy |
| May 8-9 | Linuxwochen Austria - Eisenstadt | Eisenstadt, Austria |
| May 8-9 | Erlanger Firebird Conference 2009 | Erlangen-Nürnberg, Germany |
| May 11 | The Free! Summit | San Mateo, CA, USA |
| May 13-15 | FOSSLC Summercamp 2009 | Ottawa, Ontario, Canada |
| May 15-16 | CONFidence 2009 | Krakow, Poland |
| May 15 | Firebird Developers Day - Brazil | Piracicaba, Brazil |
| May 16-17 | YAPC::Russia 2009 | Moscow, Russia |
| May 18-19 | Cloud Summit 2009 | Las Vegas, NV, USA |
| May 19-22 | PGCon PostgreSQL Conference | Ottawa, Canada |
| May 19 | Workshop on Software Engineering for Secure Systems | Vancouver, Canada |
| May 19-22 | php|tek 2009 | Chicago, IL, USA |
| May 19-21 | Where 2.0 Conference | San Jose, CA, USA |
| May 19-22 | SEaCURE.it | Villasimius, Italy |
| May 21 | 7th WhyFLOSS Conference Madrid 09 | Madrid, Spain |
| May 22-23 | eLiberatica - The Benefits of Open Source and Free Technologies | Bucharest, Romania |
| May 23-24 | LayerOne Security Conference | Anaheim, CA, USA |
| May 25-29 | Ubuntu Developers Summit - Karmic Koala | Barcelona, Spain |
| May 27-28 | EUSecWest 2009 | London, UK |
| May 28 | Canberra LUG Monthly meeting - May 2009 | Canberra, Australia |
| May 29-31 | Mozilla Maemo Mer Danish Weekend | Copenhagen, Denmark |
| May 31-June 3 | Techno Security 2009 | Myrtle Beach, SC, USA |
| June 1-5 | Python Bootcamp with Dave Beazley | Atlanta, GA, USA |
| June 2-4 | SOA in Healthcare Conference | Chicago, IL, USA |
| June 3-5 | LinuxDays 2009 | Geneva, Switzerland |
| June 3-4 | Nordic Meet on Nagios 2009 | Stockholm, Sweden |
| June 6 | PgDay Junín 2009 | Buenos Aires, Argentina |
| June 8-12 | Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, GA, USA |
| June 10-11 | FreedomHEC Taipei | Taipei, Taiwan |
| June 11-12 | ShakaCon Security Conference | Honolulu, HI, USA |
| June 12-13 | III Conferenza Italiana sul Software Libero | Bologna, Italy |
| June 12-14 | Writing Open Source: The Conference | Owen Sound, Canada |
| June 13 | SouthEast LinuxFest | Clemson, SC, USA |
| June 14-19 | 2009 USENIX Annual Technical Conference | San Diego, USA |
If your event does not appear here, please tell us about it.
Audio and Video programs
Ubuntu Podcast - Mark Shuttleworth
Ubuntu Podcast features a video interview with Canonical's Mark Shuttleworth. "Mark Shuttleworth joins us for a video podcast to discuss the upcoming 9.04 release, Ubuntu history, Linux on the desktop, impacts of cloud computing, Ayatana, the community and Ubuntu, Ubuntu and Canonical, Google Summer of Code, Ubunet, and much more!"
Page editor: Forrest Cook
