Instead of the talk he was planning to give on the first day of
presentations at GUADEC, GNOME release
manager Vincent Untz needed to shift gears to announce the biggest news of
the conference: GNOME 3.0 would be delayed. The reason for the delay is simple: the code and documentation "weren't quite ready", so
rather than have an "OK-ish" release, the team decided to push
it back. There will still be a release in September, however, as the team is sticking to its six-month release schedule, but it will be another in the 2.x series: 2.32.
Untz noted that GNOME 3 has a compelling story: "It's the first time
since I started with GNOME that people are so excited about GNOME".
GNOME 3 was first announced at GUADEC in 2008 with a target of making the
2.30 release (i.e. March 2010) be the first GNOME 3 release. That target was subsequently pushed back to September 2010, and now to March 2011. The team has
"quality as [its] first priority" and, after meeting with
various teams in the first few days of GUADEC, it became clear that more
time would be required for a solid release. "We want people to love
GNOME and we want people to love GNOME 3", he said.
In addition to the 2.32 release, there will also be a GNOME 3 beta release
in September. "We want people to start using it, start playing with
it, application developers to port to GNOME 3". By March, "we
want something that's really good", Untz said.
But that means developers and maintainers of GNOME modules will need to
work on two branches for the next two months: one for 2.32 and one for
3.0. Dual-branch development was the source of most of the concerns expressed by audience members about the
announcement. Some were worried that it would cause too much extra work,
but Untz and the release team—who joined him on the
stage—seemed to think it was manageable.
The release team will be taking steps to ease the transition, starting with
getting GtkApplication backported to GTK+ 2, which will help modules that have moved to GTK+ 3 keep building for GTK+ 2. There will also be a flag (--enable-gtkN)
available in the configuration process that will allow applications to be
built for either GTK+.
Some maintainers will want to deliver new features
in September and for those, the 2.32 and 3.0beta releases will be the
same. Others may want to focus on 3.0 and will just release an update to
2.30 with bug fixes and the like. Separate from his talk, Untz said that each maintainer will decide how to handle it and the release team will
assist: "we do not think it will be that much of a burden".
He also noted that GNOME's development model, by design, is focused on "evolution rather than revolution", and that 2.32 would fit that model well:
In addition to new features in the existing modules, we'll integrate
gnome-color-manager and rygel. That means that GNOME 2.32 will be our
first release to have integrated color management, as well as support
for DLNA (UPnP AV) for a rich multimedia experience at home, with all
the connected devices that people now have (TV, speakers, phones, video
game consoles, etc.)
Certainly GNOME Shell is the most high-profile module that is still in need
of work, but there are others as well. There are still a lot
of applications that need to migrate to GTK+ 3 and GSettings, for example. Updating the documentation for GNOME 3 still has a ways to go, the new accessibility stack needs performance improvements that "will make the switch to the new stack much smoother for users", the GTK+ engine for the art team's new theme is not quite ready, and so on.
Interestingly, most of those issues, when taken separately, are not blockers per se. But the addition of all of them would have led to a
quality that is not up to our standards.
There are some other things the release team will be pushing, he said in
his talk, including encouraging the maintainers to "target achievable
goals". In particular, there will be something like a feature
freeze for GNOME Shell functionality so that its developers finish what they
started and don't go off "making crazy plans". In addition,
the release team will be trying to get the community to implement the UI
mockups that the design team has created.
There has been some criticism of the GNOME 3 plans because of the lack of
support for "applets", but Untz sees it differently. Most of the applets
in use today are either things that will be incorporated into GNOME Shell
or are some kind of monitoring tool (for system performance or email for
example). There are also a few applets that are "dedicated to a
task", like the Tomboy notes applet or the Hamster time-tracking
applet, he said.
We don't believe that the system we have today with applets is the right
approach: the size of applets is limiting the user interface, and the
current number of existing applets probably show that applets are not
an attractive way for developers to extend the desktop. In addition, a
few applets are actually slowly moving to full applications (tomboy and
hamster are good examples). This is why we won't add support for applets
as we know them today.
There are plans to look at a system to replace applets some time after the
GNOME 3 release, but it is not the team's highest priority at the moment.
There are alternatives, though:
It's also worth mentioning here that the traditional GNOME 2 UI (with
the GNOME Panel and applets) will still be available with GNOME 3, so
people who have a need for a very specific applet will still be able to
use the GNOME 2 UI for some time.
Overall, the announcement was met with general approval from the audience.
The project wants to make a big splash with GNOME 3 and "OK-ish" is not the
way to do that. There seems to be a clear focus on the things that need to
be completed before there can be a solid release, so one gets the sense
that we won't see further delays. For users who can't wait, there are
plans to make the beta more easily available for various distributions,
which should allow more testing and a better final release.
The Free Software Foundation (FSF) has recently turned its attention to
accessibility. The organization appointed
Chris Hofstader as director of access technology in May, published a statement
addressing accessibility, and started an accessibility
list to discuss accessibility work. Now the organization is faced with
the question of how to bridge the gap between free software and
accessibility without an interim solution.
Despite decades of hard work, the unfortunate reality is that free
software does not meet the needs of all users. In particular, free software
is still well behind proprietary software in providing tools for developers
and users who require assistive technologies. Some technology, like the Orca screen reader, has come along well, but users who depend on speech recognition software find themselves
without a reliable alternative. This isn't new information, but a recent
conversation on the GNU Accessibility list raised an interesting question:
What should the FSF tell users who rely on
assistive technologies that do not exist as free software?
One might expect that the answer would be to rely on proprietary software when necessary, until free software bridges the gap. Where
the FSF is concerned, however, that doesn't appear to be an option. The
discussion began with a post from Hofstader
asking for volunteers to work on assistive technologies (AT),
documentation, testing, and so on. This prompted a response from Eric
S. Johansson, who said he'd "raise his hand" but with some caveats:
I believe strongly that the tools first approach you and others have
spoken of misses the needs of the upper extremity disabled. their primary
need is income. You can't have freedom of choice if you can't make
money. For example, today, if I want to make money, I must use
NaturallySpeaking. There is no choice and the speech recognition projects
available today or the near future are not sufficient to replace
NaturallySpeaking (I.e. they couldn't write this e-mail and they take way
too much time to set up).
I would propose organizing the project to first satisfy the economic
needs of the disabled community, so they can make money, they can be
independent and as a result, be able to make choices about software
The issue at hand is whether it's acceptable to create bridging tools that would be free software but would initially depend on proprietary tools like Dragon NaturallySpeaking. Johansson's request does not seem entirely
unreasonable. Free software speech recognition is not currently an option, and for some users the only way to use a computer effectively is with speech
recognition software. It's not a question of opting to use proprietary
software out of convenience, but necessity.
That, however, doesn't seem to be acceptable to the FSF. The issue,
according to Hofstader, is that endorsing a temporary solution that
includes proprietary software would "either postpone or entirely
scrap the development of a libre engine that we can endorse." To
protect users' freedom, Hofstader says that FSF/GNU cannot "take
anything away" by using a temporary proprietary solution. According
to Richard Stallman, it requires taking a long view rather than being concerned
about the short-term inability of users to work with their computers:
In the long term, no software task inherently requires non-free software.
In the short term, there are proprietary programs that do things that free
software currently cannot do. There is no dispute about this fact. The
question is what conclusions to draw from it.
To draw the conclusion that we should grant legitimacy to those
proprietary programs tends to lead to more use and more development of
proprietary programs. It may seem convenient in the short term, but in the
long term it perpetuates the problem. It does this both directly and
indirectly: directly by encouraging the use of specific non-free programs,
and indirectly by pouring water on the fire of our movement to eliminate proprietary software.
Thus we must steel ourselves to refuse the sort of short-term
"compassion" that makes injustice and dependence worse. Work carried out
under GNU auspices must be consistent with our principles.
One wonders how a user's dependency on non-free software, when driven by
a physical inability to use conventional input methods, could be worse. If
denied the ability to work in conjunction with the FSF on the most pressing
concerns, the alternative seems to be that accessibility development work
will be carried on elsewhere. Johansson predicts
that in the absence of a GNU-led project, a non-free alternative is still
likely to emerge:
You failed on hurd because it didn't get done early enough to garner a
significant mind share. I'm predicting, if you follow this path, you will
fail because a hybrid or even a totally non-free approach will be developed
first and lock-in user mind share. the end result will be users will be
locked into less free software and there'll be no way for you to displace it.
This is reality. People are hurting and need help now. Not 15 years
from now. Now! let's apply steady pressure and free them up a bit at a time
and get them sold on the important freedoms the free software foundation
represents. At the very end, you ride to the rescue with a good recognizer
and they will be a complete solution in the shortest possible time. We will
have a working solution in the shortest possible time minimizing the pain
and suffering of disabled users. Seriously man, there are few better ways I
can think of to spend a life.
Instead, Johansson urges the FSF to accept a "compassionate exception"
that would allow interim solutions. However, the FSF seems unwilling
to consider such a measure. As a result, Johansson seems
to have abandoned the discussion and GNU Accessibility list as a whole.
The good news is that the FSF isn't the only organization working on
accessibility in general or voice recognition in particular. The GNOME Project has been
particularly active working on accessibility, though it has been affected by Oracle layoffs
recently. GNOME's Orca has
made tremendous strides as a screen reader, and work is going on to allow Orca to be used with Qt applications, so it can serve either desktop.
Those who wish to help with efforts to develop a free speech recognition
program should see the VoxForge
project, which seeks to collect transcribed speech for use in developing free and open source speech recognition engines. There's also the Simon Listens
project to create an open source speech recognition program.
A hard-line, all-or-nothing approach is not going to appeal to or help users who depend on assistive technologies, whatever license those technologies carry. Given the response from the FSF on this issue, it would appear that it is not going to be the right organization to lead the charge for accessibility. The insistence on licensing purity while disregarding
the immediate needs of the target audience for the accessibility initiative
does not bode well for the FSF's leadership in this area.
There are many good reasons to promote a free software project, but perhaps
the biggest is to attract more users and contributors; it's difficult to
do anything with an
application that you don't know about. But many projects fail to
effectively get the word out about their work, which means that it's
less likely a community will spring up around it. At both SCALE
8x and GUADEC 2010, I have had the
opportunity to talk about ways that projects can improve their promotional
activities and present an organized, interesting look to the rest of the
free software world. Hopefully, a summary of the ideas presented will be
helpful to the wider community.
One of the key benefits of free software is the cross-pollination it
allows. Not only can users look at features in "competing" applications
and suggest that they get incorporated, but developers can dig into the
code to gain inspiration on how to implement or improve a particular
feature. Assuming compatible licenses, projects can even adopt code
directly from each other. But it goes further than that. Projects can
learn from each
other's struggles and successes in areas like governance, release
management, revision control, licensing, and so on. In order to cross-pollinate, though, projects need to be aware of each other.
It's also important for a project to attract the people it needs to
thrive. That includes users and their feedback, but also developers,
translators, artists, advocates, designers, packagers, etc. Not all
projects need all of those roles filled, but many do.
Another benefit of promoting a project is that it is something of a morale
builder for the existing project members. Seeing a blurb or an article
about a recent release, or some interesting news about the project can put
a smile on the face of contributors. While that may seem trite, keeping
project teams cohesive is a very important part of the puzzle, especially
for smaller projects.
Here at LWN, we get lots of email from projects, generally announcing a new
release or some other project status update. One of the most frustrating
things is the number of those that bear very little information
beyond the name of the project and a release number. There is often little
or no description of what the project is or does, nor very much detail
about why the release is being done. It often looks as if it were written
only for project members or others who are likely to already know something
about it, but those are exactly the wrong targets.
A release announcement—or any other kind—is an opportunity to catch
the eye of people who don't know anything about the project, but if
it is hard to figure out what a project is, it's unlikely very many will
try. It is even worse when there is no URL in the message, or the link
leads to a web page that has the same problem: no project description on
the first page. Clicking multiple times to try to figure out something
useful about a project is a further barrier that even fewer will surmount.
One of the most important things a project can do is to create a short
project description, just a sentence or two, that gives a good summary of
what the project or application does. It should be targeted at people
with little domain-specific knowledge and try to place the project in
context for users. The idea is that it should be understandable to anyone
who is even remotely interested in the project. Even non-technical
relatives or significant others should at least be able to get a glimpse of
what problem the program is trying to solve just by reading the description.
While accurate and concise, the following leaves something to be desired
for someone unfamiliar with the subject matter:
"GauBlur is a Python script that applies gaussian blur to
images that are produced by dcraw. Radius values in each dimension can be specified and the IIR
algorithm is used." A better version might look
something like this: "GauBlur is a Python script that applies a
blurring technique to raw photos. It implements a 'gaussian' blur with
variable amounts of blurring, which may be useful to reduce image noise,
remove details, or smooth image data."
Clearly the (hypothetical) developers of GauBlur could do better than my attempt, but the idea should be clear: try to give enough information for experts to be
interested enough to dig deeper, while not burying less-familiar folks in
too much detail. That is likely to lose those who you are trying to attract.
The description should be used in every announcement that the project
makes. In addition, a release announcement should clearly indicate why the
release is being made—and why someone unaffiliated with the project should
care about the release. Is it a major or minor release? Does it fix bugs
or add new functionality? Are there security fixes included?
For major releases, or those with significant features or impact, adding a
paragraph or two that describes the new stuff in a quotable form is
helpful. Various publications may be interested in quoting from the
announcement, so giving them a chunk of interesting text will be
beneficial. In fact, some publications are really only interested in
quoting from press releases and announcements, so try to make that easy.
It is also helpful to put the announcement somewhere on the
web page in a separately linkable item, as opposed to an entry at the top
of a news feed list on the home page. It may be some time before the link
is followed—often as a result of a search ending up at the
publication's coverage—so ensuring that the link goes to the original
announcement is important.
The home page of a project is the primary means of communicating to new
users and contributors and, as such, it should be geared toward outreach.
Make it very clear on the first page what the project is about (using the project description); don't bury that information. It is very frustrating
for anyone who visits the web site of an unknown project to have to click
around to various pages to figure out what it actually is. Make it
clear who the target "market" for the application or project is right up
front so that people can quickly make a decision about their level of interest.
While not directly related, I have recently noticed multiple
free software conference web pages that don't have all of the following on
the first page: dates, location, and theme/topic of the conference. Many
project web pages suffer from similar problems. Don't force folks newly
introduced to your project to play "find the link" on the page; put the
important information up front.
It is important to focus on information for the web pages, rather
than glitz. There is nothing wrong with adding decorative elements to a
page as long as they don't detract from or, worse, eliminate important information.
A more detailed description of the project, beyond the short version
described above, is also useful. A few paragraphs could be placed on the home page,
but a "For more information" link from the short description to an "About" page is reasonable as
well. While it is good to provide the more in-depth introduction, it is
still important that it be focused on those who don't know about the
project, and may have only limited knowledge of the subject area.
A news "feed" on the home page is a common and useful way to keep folks up
to date on what's happening with the project. As noted above, making each
item individually linkable is helpful. But, much like release announcements
(which may make up the bulk of the news anyway), each item should try to
make people who aren't following the project understand why the news is
interesting. Try to avoid some kind of "alphabet soup" of acronyms and/or
lots of project or domain-specific terms. People who know about the
project, already use it, or contribute to it aren't likely to be put off by
an overly simplified news announcement, but folks who don't know the
project will find a list of incomprehensible news items to be offputting.
For attracting contributors, something especially helpful is an updated
list of areas that need work along with contact information for someone who
can give more information. That list can bring in people who might not
normally think of themselves as contributors. Maybe the project needs a
logo or some translations done and someone with the relevant skills may
notice it, and jump in. It may be helpful to have a personal email address
as the contact for the task as some may be uncomfortable just posting to a
mailing list or showing up in an IRC channel.
Having different pages on the site to attract different groups
of contributors may help as well. Users typically want information on
download locations, screen shots, and documentation, while developers want
pointers to the code, toolchain information, and development mailing
lists. Translators, artists, UI designers, and so on will also have needs
specific to their areas.
Many projects are using IRC to coordinate and collaborate these days, which
is fine, but IRC is a bad medium for archival purposes because it is
difficult to search. It can also be hard to pull out a particular
conversation thread from multiple simultaneous IRC discussions. For those
reasons, mailing lists still have their
place. For larger issues, where design or discussion of features occurs,
it will be easier to follow and find if it is done on a mailing list.
Web forums can serve the same purpose as
mailing lists, but may be seen by some as a barrier. Mailing lists tend to
integrate well with tools that are already used by developers. Less
technical project members may find forums easier, though. If forums are to
be used, it is important to ensure that they can be indexed by web spiders so that searches
will find the information.
Separate mailing lists for different concerns (like user problems
vs. developer discussions) may be appropriate. It may be somewhat scary
for users to post a bug report to a list that mostly discusses the gnarly
internal details of the program, and the level of discourse (and courtesy) may differ
significantly between different kinds of lists. If meetings are being held
in IRC, posting meeting summaries and logs to the mailing list (and web
site if possible) can be very helpful for those who cannot attend. All of
this helps folks interested in writing about the project as well.
Getting your project in front of writers, editors, and bloggers is a great
way to spread the word about it. Release and other announcements
should be sent to various publications, but blindly sending them to any and
all publications is unlikely to be very successful. Instead, look for
publications (both print and online) that cover areas related to your project.
A little bit of research can go a long way toward getting your message in
front of the right people. Doing some reading of the content of the site
and noting its technical depth will give you a good idea of whether the site is
likely to write about your project. When in doubt, and even when you
aren't, look for contact information for the publication and check with the
editor to see if they might be interested. Perhaps many won't respond, but
those who do will likely be good landing places for your promotional efforts.
Cultivating authors and editors can be a good way to get various things
about your project posted, which will in turn make other publications take
note. The free software web news outlets generally keep a close eye on
what the others are doing, so a release announcement with a well-written
description of the changes that gets posted to one site could easily lead
to an article or blog posting on another. But, in order to get the
announcement posted, you have to have something that will likely be of
interest to the readers of that outlet.
Projects should also consider posting their announcements to the relevant
mailing lists of an overarching project (like KDE, GNOME, Python, Perl,
MySQL, PostgreSQL, etc.). Many of those umbrella projects will have an
"announce" list where postings from related projects are welcome.
So, what's interesting?
The kinds of things that are covered vary widely between publications,
which is where the research comes in handy. Obviously releases, especially
those with new—exciting—features, are of interest, but there
are a lot of other things that go on in free software projects that might
merit some attention.
Press releases for things like corporate sponsorship or the formation of a
consortium around a project (or that the project joins) are topics of
interest. Often these are more formal announcements that get generated by
a public relations firm hired by the company or consortium. Try to ensure
that the press release or announcement is released in a text format or is
available on a permanent web page, rather than as a PDF or word processor document.
Development process discussions, as well as successes and failures in release management and other tasks, are a common topic in free software coverage. Because each project has its own ways of doing things, some of which may well be applicable elsewhere, such material is of interest to the wider
community. Switching version control systems or build processes are
plausible topics as well for much the same reason.
In addition, things like a "code of conduct", and its establishment and
enforcement, make good fodder for an article. Similarly, licensing
discussions, like issues with the current license or projects switching licenses for various reasons, are things
that multiple other projects will struggle with, so others outside of the
specific project affected will want to understand them. That is really the
key to whether a topic is interesting: is it more widely applicable?
Some of these topics are controversial and lead to flame wars, which makes
many want to sweep them under the rug, but there are a couple of good
reasons not to do that. While it might be a short burst of bad
publicity—which doesn't exist, at least as the saying goes—it is
also a way for folks to hear about a project that they hadn't heard of
before. Other projects can learn from your project's mistakes (and
successes), but only if they hear about them.
Other promotional opportunities
One of the best ways to get your message heard is to have a project member
give a talk at a free software conference. There are numerous benefits to
that, which reach way beyond just the actual conference attendees. Others
who are considering attending, or just curious about the program, may see
the talk listed and read the abstract, leading them to hear about the
project for the first time. Also, conferences are increasingly recording
talks in video or audio formats so that interested folks can actually
"attend" the talk well after it is given.
Some publications may be interested in running articles written by someone
involved in the project. Finding a project member that has some writing
skill and pointing them toward such publications may help get the word out
as well. Some publications, including this one, will even pay for articles.
In many ways, the advice in this article is common sense, but free
software projects are typically run by technical folks, who may not stop to
think about the details of promoting their project. Something that can't
be overstressed is to ensure that project promotion is targeted at those
unfamiliar with the project. Others will be willing, perhaps even eager,
to delve deeper into the project's communications (web site, announcements,
mailing lists, etc.), but being unable to fairly quickly understand the
nature of the project will chase off users, article writers, and others.
It should also be noted that web pages and email announcements should be
edited carefully for both grammatical and spelling problems. It is often
very useful to show a draft of these things to a less-technical person to
get their ideas. They can often help show problems with the writing, both
from a language usage perspective and on the technical level of the text.
If your less-technical friends and relatives can't at least
get a glimmer of what your project does from the web site or some other
communication, it's pretty likely that the people you are targeting will
have trouble as well.
Journalist and digital rights activist Danny O'Brien came to GUADEC to try to educate GNOME hackers about
the threats facing journalists, their computers, and their online
communication from governments and organized crime. But free software can
help, so he wanted to outline the features that he thinks could be added to
desktops to help secure them and protect the privacy of all users, not just
journalists. Part of his job as internet advocacy coordinator for the Committee to Protect Journalists (CPJ) is to talk
to internet developers and "persuade them to think about how
journalists in repressive regimes are affected" by the choices those developers make.
O'Brien has written for multiple publications including Wired UK and
the Need To Know email newsletter that he founded, which ceased
publication in 2007. He has also worked for the Electronic Frontier
Foundation (EFF) as activist coordinator and most recently its
international outreach coordinator. He is now with CPJ, which is
an organization that seeks to protect journalists from various threats,
both physical and in the online world. "They know the levers of
power to get people out of trouble, or to stop them from getting into
it", he said.
He started out by explaining that journalists do not understand recursion, as he found out when he tried to unpack GUADEC (GNOME users' and
developers' European conference) for his boss. The use of an acronym as
the first word of an unpacked acronym was problematic enough, but when he tried to explain that GNOME is (or was) the GNU Network Object Model
Environment, he sensed he was getting in a bit too deep. Then he had to
try to explain "GNU's Not Unix".
The problems that many in the online and free software worlds have been
concerned about for years are finally becoming mainstream, he
said. "Powerful forces are trying to stop the spread of information
online", and that message is finally starting to get out. He put up
the recent xkcd comic ("It's the
world's tiniest open-source violin") as an example of one place
where those concerns are starting to get some mainstream attention.
He pointed to a number of different attacks against the computers of
journalists, generally from governments, but sometimes also from organized
crime syndicates. It's not just repressive regimes that target
journalists, he said, noting reports on the CPJ website regarding Japanese
journalists who have been subjected to governmental pressure and mistreatment.
One of the more insidious attacks against journalists' computers was an
email sent to foreign journalists based in Shanghai and Beijing from a
fictional editor for The Straits Times. The email was a credible
request for assistance in contacting people on a list contained in a PDF
attachment—a PDF with a zero-day exploit that installed spyware on
the computer. It was not just the foreign correspondents who were
targeted, however, as the email was also sent to the native Chinese
assistants of the correspondents, which is a list that would be difficult
to generate—unless a large intelligence agency was involved.
Another common tactic used by governments to intimidate and spy on
journalists is to raid the offices of a television/radio station or
publication because the
organization supposedly owes back taxes. All of the computer equipment is
then seized for evidence. A variation of that scheme was recently used in
Kyrgyzstan where a television station was raided due to alleged software
"piracy" and all of the computers were confiscated. Whether tax or
copyright violation charges are ever filed is irrelevant because the
government is really after the information stored on the computers.
Free software hackers have more of an interest in these kinds of problems
"than just not [being] the ones affected". There are things
that free software already does fairly well because those hackers
"have an interest in creating secure systems", but there's more that could
be done. It makes sense for it to be the free software community that
fixes these problems, because it is "not beholden to big
interests", O'Brien said.
So what is the "low hanging fruit"? Encryption is one area
that is relatively well covered, at least for the web, with TLS. It
provides security for publishers, readers, and commenters alike, protection that holds even against "state-sized interceptors". It makes
simple censorship more difficult. The well-known Great Firewall of China looks for keywords, while the lesser-known Great English
Firewall matches URLs to a list of child pornography sites; each of those
censorship methods is blocked by encrypting web traffic.
But there are all sorts of internet protocols that are plaintext. "Since
we don't use telnet any more, why should our code?" He was
disappointed that the Telepathy communication framework doesn't ship with
encryption support, because that makes his job harder when recommending tools to journalists.
He mentioned some Russian journalists that he had talked to who don't talk
on the telephone because they believe it to be bugged. They also only use
Gmail over HTTPS, "which is fine if you trust Google", but
they switched to using Yahoo Messenger "because they heard good
things about it"—unfortunately Messenger isn't encrypted.
O'Brien said that the reason they didn't know that it "is less secure
is because their desktop isn't telling them".
SSL certificates are another area of concern. Certificates can be forged
by governments or other entities and then used in targeted attacks to
intercept encrypted communications. The journalists that O'Brien deals
with are the "canaries in a coal mine" for these kinds of
problems. It is a "challenge for user experience" to
alert the user to things like changed certificates, but there are also
technical barriers as the libraries often don't return that kind of status
to the applications.
He would like to see desktops have some sort of "advocate" for
user security that would check and report on privacy and security issues
with the software being used. User privacy and security are
"pervasive concerns that should live on the desktop", O'Brien
said. The desktop is becoming more intertwined with the web so it would be
very beneficial to have some kind of
active monitoring that is "sitting there checking" that the systems are behaving as they should.
When someone wants to communicate with multiple friends, why does the data
have to be sent to a central server, he asked. He would like to see the
desktop become a "first-class player on the internet" by
communicating in a decentralized, peer-to-peer fashion.
The entities that know they don't want people to have privacy recognize that the desktop
is the gatekeeper. A person's desktop is their "heart of
trust", he said. "We have a responsibility to take the freedom
that we take for granted and give it to people whose only privacy is their desktop".
O'Brien came to GUADEC because he believes that the project can help solve
the problems in the privacy and security areas. GNOME has the "user
experience chops" to make
these kinds of changes, while continuing to produce a usable desktop.
While he is particularly focused on journalists, the changes he advocates
would be useful to many, but making them usable too will be a big challenge.
I think the whole reason many early [OLPC] laptops went out locked
was that the local projects thought that somehow a locked laptop
was "more secure" or "better" than an unlocked one. Field
experience has proven the opposite. Unlocked laptops give the
project more control, easier support, and more options.
-- John Gilmore
Paul Vixie has posted an
article introducing DNS response policy zones
(DNS RPZ), a sort of
blacklist mechanism for domain names. "ISC is not in the business of
identifying good domains or bad domains. We will not be publishing any
reputation data. But, we do publish technical information about protocols
and formats, and we do publish source code. So our role in DNS RPZ will be
to define 'the spec' whereby cooperating producers and consumers can
exchange reputation data, and to publish a version of BIND that can
subscribe to such reputation data feeds. This means we will create a market
for DNS reputation but we will not participate directly in that market."
cabextract: code execution
Created: August 4, 2010. Updated: September 28, 2010.
An unspecified "programming error" in cabextract apparently opens a code execution vulnerability by way of a maliciously crafted Microsoft Cabinet file.
freetype: arbitrary code execution
Created: July 30, 2010. Updated: January 20, 2011.
From the Red Hat advisory:
Several buffer overflow flaws were found in the FreeType demo applications.
If a user loaded a carefully-crafted font file with a demo application, it
could cause the application to crash or, possibly, execute arbitrary code
with the privileges of the user running the application.
kernel: dns_resolver upcall security issue
Created: August 3, 2010. Updated: June 20, 2011.
From the Red Hat bugzilla:
CIFS has the ability to chase MS-DFS referrals. In order to do this it has to be able to resolve hostnames into IP addresses. For this, it uses the keys API to upcall to the cifs.upcall userspace helper. It then resolves the name and hands the address back to the kernel.
The dns_resolver upcall currently used by CIFS is susceptible to cache
stuffing. It's possible for a malicious user to stuff the keyring with the
results of a lookup, and then trick the client into mounting a server of his choosing.
kvirc: arbitrary command execution
Created: August 2, 2010. Updated: August 17, 2010.
From the Debian advisory:
It was discovered that incorrect parsing of CTCP commands in kvirc, a
KDE-based IRC client, could lead to the execution of arbitrary IRC
commands against other users.
libmikmod: arbitrary code execution
Created: August 2, 2010. Updated: January 20, 2011.
CVE-2009-3995 describes a set of heap-based buffer overflows in libmikmod. It turns out that the upstream fix did not entirely close this vulnerability, necessitating another round of updates.
libwebkit: multiple vulnerabilities
mapserver: multiple vulnerabilities
Created: August 2, 2010. Updated: August 26, 2010.
From the Debian advisory:
A stack-based buffer overflow in the msTmpFile function might lead to
arbitrary code execution under some conditions. (CVE-2010-2539)
It was discovered that the CGI debug command-line arguments which are
enabled by default are insecure and may allow a remote attacker to
execute arbitrary code. Therefore they have been disabled by default. (CVE-2010-2540)
moin: cross-site scripting
Created: August 3, 2010. Updated: August 25, 2010.
From the Debian advisory:
It was discovered that moin, a python clone of WikiWiki, does not sufficiently sanitize parameters when passing them to the add_msg function. This allows remote attackers to conduct cross-site scripting (XSS) attacks, for example via the template parameter.
tomcat: multiple vulnerabilities
Created: August 3, 2010. Updated: February 14, 2011.
From the Red Hat advisory:
The Tomcat security update RHSA-2009:1164 did not, unlike the erratum text
stated, provide a fix for CVE-2009-0781, a cross-site scripting (XSS) flaw
in the examples calendar application. With some web browsers, remote
attackers could use this flaw to inject arbitrary web script or HTML via
the "time" parameter. (CVE-2009-2696)
A flaw was found in the way Tomcat handled the Transfer-Encoding header in
HTTP requests. A specially-crafted HTTP request could prevent Tomcat from
sending replies, or cause Tomcat to return truncated replies, or replies
containing data related to the requests of other users, for all subsequent
HTTP requests. (CVE-2010-2227)
The 2.6.35 kernel was released on August 2. The relatively long announcement includes some thoughts from Linus on this development cycle (he's happy with how it went) and some concerns about the state of linux-next heading into 2.6.36. Some headline features in 2.6.35 include receive packet steering, receive flow steering, direct I/O support for Btrfs, and the usual pile of new drivers. As always, lots of details can be found on the excellent KernelNewbies 2.6.35 page.
Stable updates: a new set of stable kernel updates is out; they all have the usual pile of important fixes. Among them is the final 2.6.33 update, so users of that series should be thinking of moving on; Greg suggests 2.6.35, since 2.6.34 "is not going to be maintained for much longer."
The Linux 2.6.35 release is also noteworthy in that it is the first
Linux kernel release for which Torvalds specifically attempted to
limit the number of changes made during its development to help
limit the growing size and complexity of kernel updates.
So, in between snapshotting the image and actually hibernating, we
have two parallel universes, one frozen in the image, the other
writing that out to swap: with the danger that the latter (which is
about to die) will introduce fatal inconsistencies in the former by
placing pages in swap locations innocently reallocated from it.
(Excuse me while I go write the movie script.)
-- Hugh Dickins
I regret putting the ordering into the original barrier code...it
definitely did help reiserfs back in the day but it stinks of magic
When it goes wrong, we'll only notice .000000001% of the time, and
even then it'll only be when people report some random corruption
which we'll blindly blame on either axboe or the drive.
-- Chris Mason
It has been more than four years since LWN first reported on the AppArmor security module and the opposition to its addition to the mainline. Over that
time, there has been much discussion of pathname-based
security, the value of having multiple security modules, and more;
meanwhile, AppArmor has mostly faded from view. Canonical developer John
Johansen has picked up this module, though, and has been working toward its
inclusion. The latest "what's coming" post from security maintainer James
Morris (click below) now shows that AppArmor has been queued for the next
merge window (the "Yama" security module from Canonical is also queued).
Unless some last-minute opposition turns up, this should be the end of a very long story.
James Morris's 2.6.36 security pull request included, among other things, the Yama security module, which contains a number of security-related changes from Canonical. James later updated his posting:
I'm going to revert the Yama stuff for 2.6.36 -- Christoph has
nacked it to me off-list.
An off-list shootdown was always going to raise eyebrows, but Christoph
(Hellwig) was quick to make his concerns
public. He said:
As mentioned a few times during the past discussion moving broken
code into a LSM doesn't magically fix it. In fact YAMA is not any
kind of (semi-)coherent security policy like Selinux, smack or
similar but just a random set of hacks that you didn't get past the maintainers.
Christoph, it seems, would rather that these changes went directly into the
subsystems affected, rather than being swept into a separate security
module. The problem, of course, is that's just how Yama author Kees Cook
had started; he was told in no uncertain terms that putting his
security-related changes directly into the VFS and ptrace() code
was unwelcome. The advice at that time was that his changes should be put
into a security module where the rest of the world could ignore them. Even
Christoph suggested that
approach back in June.
The "not a coherent security model" objection was heard from some other
directions as well. According to Valdis Kletnieks:
In other words - if you want to be an LSM, you need to be
full-featured enough to cover all the bases, not just a few.
Some developers, it seems, would rather not see a set
of security-related tweaks gathered together into a module without an
overall policy behind it. There have also been the usual claims that
everything done by Yama can also be accomplished in SELinux, though Kees
seems to disagree.
This rejection leaves Kees in the difficult position of trying to upstream
his changes (something his employer has been criticized for not doing) but
having no apparent way to actually get them merged. But it may be that all
that's really required is a bit of patience. New security modules always
seem to bring opposition out of the woodwork, but, with some persistence,
they tend to get merged in the end.
Paul McKenney, it seems, is now working with the Linaro project, an
assignment which has given him a new interest in power management. He has
decided to start off with a bang by attempting
to summarize the suspend blocker discussion
with the goal of really
understanding what Android's requirements are. Needless to say, he has
kicked off a new, lengthy discussion which has cast the players' positions
in a new light.
To oversimplify: one side seems to believe in addressing power management
(and poorly-behaving applications in particular) by shining a light on the
problems and fixing them (or applying social pressure to get them fixed)
one at a time. This is the approach taken by developers like Arjan van de
Ven, who have developed and used tools like PowerTop to great effect. The
other side pushes for a more general solution; Paul describes the difference in view this way:
I agree that much progress has been made over the past four years.
My laptop's battery life has increased substantially, with roughly
half of the increase due to software changes. Taken over all the
laptops and PCs out there, this indeed adds up to substantial savings. So, yes, you have done quite well.
However, your reliance on application-by-application fixes, while
good up to a point, is worrisome longer term. The reason for my
concern is that computing history has not been kind to those who
fail to vigorously automate. The Android guys have offered up one
way of automating power efficiency. There might be better ways,
but their approach does at the very least deserve a fair hearing --
and no one who read the recent LKML discussion of their approach
could possibly mistake it for anything resembling a fair hearing.
So far, the conversation has not yet really returned to the Android
approach; it has stayed more focused on the requirements and whether the
"whack-a-mole" approach to power management is sufficient in the long
term. Chances are good that Paul will be sending out an updated version of
his requirements description sometime in the near future. Then, perhaps,
there can be a calm discussion of how those requirements might best be met.
Kernel development news
The 2.6.36 merge window got off to a rather slow start; Linus, perhaps, has been spending too much time away from the keyboard. Things got rolling,
though, on the afternoon of August 4; as of this writing, about 2600
patches have been merged into the mainline. Here is a summary of what has
been seen so far.
User-visible changes include:
- The 9p filesystem has gained support for extended attributes and
a new, Linux-specific variant of the 9p2000 protocol called 9p2000.L.
- The CIFS filesystem can now make use of FS-Cache to keep local
copies of files for performance.
- The TOMOYO Linux security module has a new "interactive enforcing
mode," allowing an administrator to permit policy violations at run
time. It is intended to help when installing application updates
(which require policy changes) on running production systems.
- At long last, the AppArmor security module has been merged.
- Rafael Wysocki's wakeup_count mechanism has
been merged. This feature is intended to make it possible to suspend
the system without having to worry about races with wakeup events; it
thus hopes to solve part of the problem addressed by suspend blockers.
- Support for the LIRC infrared controller API has been merged, along
with a long list of LIRC drivers. LIRC is one of the larger pieces of
out-of-tree code which is still shipped by many distributors, so this merge should help bring distributor kernels that much closer to the mainline.
- New drivers:
  - Boards and systems: Bluewater Systems Snapper 9260/9G20 modules, HP t5325 thin client systems, NXP Semiconductor LPC32xx-based systems, and Eukrea CPUIMX51 and CPUIMX35 modules.
  - Input: Atmel QT602240 I2C touchscreens, Analog Devices ADXL34x three-axis digital accelerometers, and Cypress cy8ctmg110 touchscreens.
  - Miscellaneous: ARM Ltd. character LCD displays, HTC "Dream" (G1 handset) GPIO lines, and Intel "intelligent power sharing" controllers.
  - Networking: Freescale Flexcan CAN controllers, ESD USB/2 CAN/USB interfaces, Chelsio T4-based gigabit and 10Gb Ethernet adapters with PCI-E SR-IOV virtual functions, and CAIF protocol drivers on slave SPI interfaces.
  - Video and media: i.MX27/i.MX25 camera sensor interfaces, SunPlus SPCA1528-based USB cameras, SQ Technologies SQ930X-based USB cameras, Windows Media Center Edition eHome infrared transceivers, and Freescale VIU video engines.
Changes visible to kernel developers include:
- The ARM architecture has lost support for the "discontigmem"
memory model; it is expected that everybody is using sparsemem at this
point. ARM has also switched from the old bootmem allocator to
memblock (formerly LMB) and added support for GCC's -fstack-protector option.
- The DMAPI hooks have been dropped from the XFS filesystem, indicating
that the XFS developers do not ever expect to get hierarchical storage
management at this level merged.
- The PM_QOS API has changed again; quality-of-service requests are now added with:
      void pm_qos_add_request(struct pm_qos_request_list *request,
                              int pm_qos_class, s32 value);
The biggest change is that the request structure must now
be allocated by the caller; this shifts a bit of work but,
importantly, allows this function to be called in atomic context.
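As a minimal sketch of how a driver might use the new call (pm_qos_add_request(), pm_qos_remove_request(), and the PM_QOS_CPU_DMA_LATENCY class are part of the pm_qos API in this era; the surrounding driver functions are invented for illustration):

    #include <linux/pm_qos_params.h>

    /* caller-allocated request structure, as the new API requires */
    static struct pm_qos_request_list my_qos_request;

    static int my_device_start(void)
    {
            /* ask for no more than 50us of CPU/DMA latency while active */
            pm_qos_add_request(&my_qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
            return 0;
    }

    static void my_device_stop(void)
    {
            /* drop the constraint when it is no longer needed */
            pm_qos_remove_request(&my_qos_request);
    }

Because the request structure lives in the caller's own data, no allocation happens inside the call itself, which is what makes atomic-context use possible.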
The merge window can be expected to remain open until around
August 15, unless Linus decides to surprise developers by making it
Linux, like most other operating systems, has long tried to keep
frequently-accessed data in main memory. The cost of fetching a page from
disk is high, so every I/O operation which can be eliminated by keeping
data in a faster location yields a significant performance improvement.
Recently, there has been an increasing level of interest in adding more levels of cache; the result has been a number of caching-related patch sets. The latest contribution in this area is a set of patches aimed at enabling multi-level caching within the Btrfs filesystem.
The patches, posted by Ben
Chociej, are not a complete solution at this time. This code, instead, is
meant to add the infrastructure needed to determine which data within a
filesystem is "hot"; other work, to be done in the near future, will then
be able to make use of this information to determine which data would benefit from being hosted on faster media - on a solid-state storage device, perhaps.
The copy-on-write nature of Btrfs, along with its built-in volume
management code, should make the implementation of this functionality
relatively easy. We should find out in "a few weeks," when the first of
these patches is promised; meanwhile, there is some interesting
instrumentation work to look at.
These patches work by hooking into the small number of places in Btrfs where new I/O operations are initiated. Each of these places gets a call to:
    void btrfs_update_freqs(struct inode *inode, u64 start, u64 len,
                            int create);
Where inode is the inode for the file being operated on,
start is the beginning offset (in bytes), len is the
number of bytes being transferred, and the mildly confusing create
parameter is nonzero iff the operation is a write.
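For example (a hypothetical wrapper; only btrfs_update_freqs() itself comes from the posted patches), a read path might record an access like this:

    /* record a read of 'len' bytes at offset 'start' in this file;
     * create == 0 marks the operation as a read rather than a write */
    static void record_read_access(struct inode *inode, u64 start, u64 len)
    {
            btrfs_update_freqs(inode, start, len, 0);
    }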
This function maintains two red-black trees; the first, which is
filesystem-wide, tracks the "hottest" inodes. For each inode, there is
another tree tracking the hottest parts of the file. For each tree, the
btrfs_update_freqs() call will update the stored parameters with
the passed-in values.
The code tracks six independent parameters: the number of reads, a running
average of the time between reads, and the time since the last read - along
with the same information for writes. In the end, that information gets
passed to a piece of deep magic called btrfs_get_temp() which
boils those numbers down to a single "temperature" value. Your editor
would love to simply provide the formula which is used, but it's not that
simple - there's a lot of trickery with magic constants and various
provisions against integer overflow problems. Those who would like to figure it out for themselves can consult the source in the posted patches.
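As a purely illustrative sketch (every constant and weight below is invented, and the real btrfs_get_temp() differs in both structure and detail), a drastically simplified temperature calculation over the six tracked parameters might look like this:

    /* the six tracked parameters described above (all times in ns) */
    struct freq_data {
            u32 nr_reads;
            u32 nr_writes;
            u64 avg_delta_reads;    /* running average time between reads */
            u64 avg_delta_writes;   /* running average time between writes */
            u64 since_last_read;    /* time since the last read */
            u64 since_last_write;   /* time since the last write */
    };

    static u32 demo_get_temp(const struct freq_data *f)
    {
            u64 temp = 0;

            /* frequent access raises the temperature... */
            temp += 8ULL * f->nr_reads + 8ULL * f->nr_writes;

            /* ...as do short gaps between accesses (the "| 1" merely
             * guards against division by zero)... */
            temp += (1ULL << 22) / (f->avg_delta_reads | 1);
            temp += (1ULL << 22) / (f->avg_delta_writes | 1);

            /* ...and recent activity */
            temp += (1ULL << 22) / (f->since_last_read | 1);
            temp += (1ULL << 22) / (f->since_last_write | 1);

            /* clamp to a 32-bit result */
            return temp > 0xffffffffULL ? 0xffffffff : (u32)temp;
    }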
There are three new ioctl() operations added by the patch set. To
get the heat information for a specific file,
BTRFS_IOC_GET_HEAT_INFO may be used. There are also
BTRFS_IOC_GET_HEAT_OPTS and BTRFS_IOC_SET_HEAT_OPTS for
querying and setting the state of heat tracking and (someday) migration of
data based on the measured temperature data. A debugfs interface is also
provided for those who would like to look at all of the data collected by the tracking code.
There has not been a huge response to this patch set so far. The biggest
complaint should be somewhat predictable: this capability looks like
something which would be useful for many filesystems, so implementing it
just for Btrfs looks like working at the wrong level. The virtual
filesystem (VFS) layer is well placed to track I/O operations and could
manage this kind of data collection. The VFS could also, perhaps, use this
data to make better decisions on which pages to keep in the page cache.
But, as long as the data is locked up within Btrfs, the VFS layer cannot
use it, and it cannot be used to benefit any other filesystems.
The response to this complaint is that only Btrfs has the multiple device
support needed to make use of this data. Dave Chinner finds that justification unconvincing, saying:
Why does it even need multiple devices in the filesystem? All the
filesystem needs to know is the relative speed of regions of it's
block address space and to be provided allocation hints.
There is often a degree of tension between those who would add features to
specific filesystems and those who would rather see that functionality done
at the VFS level. As a general rule, widely-useful features benefit from
being done in the VFS, where they are more widely used and more closely
scrutinized. But, often, an individual filesystem implementation can serve
as a useful proof of concept and a place where important lessons are
learned. All of which is to say that "hot data tracking" will likely make
it into the kernel at some point, but it's not clear whether what is merged
will resemble the current patches or not.
In the context of the IRMOS project (Interactive Real-Time Applications on Service-Oriented Infrastructures), a new realtime scheduler for Linux has been developed by the Real-Time Systems Lab (ReTiS) of the Scuola Superiore Sant'Anna.
The purpose of this article is to provide a general overview of this new
scheduler, describe its features and how it can be
practically used, provide a few details about the implemented algorithms, and gather feedback from the community about possible improvements.
The IRMOS realtime scheduler (a.k.a. EDF throttling or realtime throttling) allows the administrator to reserve a "slice" of the processing
capability of a system for a group of Linux tasks. It is based on a
direct modification of the POSIX realtime scheduling class within the Linux kernel and, in particular, of the throttling mechanism already built
into the kernel for realtime tasks. Basically, the realtime
throttling mechanism is changed from a mechanism that exclusively
limits the computation power granted to groups of realtime tasks, to
one that provides them with both a limit and precise
scheduling guarantees (in terms of a guaranteed runtime every
period, on each of the available CPUs). Also,
it has been designed from scratch with SMP support in mind, and it
implements a hierarchical scheduling policy based on both deadlines
and priorities. Specifically, POSIX fixed priority (FP) realtime
scheduling is nested inside EDF (Earliest Deadline First) scheduling.
The IRMOS realtime scheduler allows for the provisioning of
scheduling guarantees to individual task groups.
This provisioning is done by specifying two scheduling parameters:
a budget Q and a period P. The tasks in the group
are entitled to run on each of the CPUs (processors, or cores when
present) available on the platform, for Q time units every period of P
time units. This constitutes a scheduling guarantee and a limitation
at the same time.
For example, on a single-CPU system, a single task attached to a
reservation of 10ms every 100ms is guaranteed to be scheduled on the
CPU for 10ms every 100ms. If the task tries to execute for more than
10ms, then the scheduler removes it from the run queue until the
next period, at which point its budget is refilled. If the system has no
other ready tasks to schedule, the CPU simply goes idle during that time.
Note that periods of different reservations may be specified
independently from each other, and the above guarantee is still valid.
The EDF-based scheduler applies a simple admission control
rule that decides whether a new reservation may be accepted; it works by
ensuring that the sum of the utilizations (budget over period) of all
the reservations is less than or equal to the maximum configured share
assigned to realtime tasks. This limit may be configured through
the cpu.rt_runtime_us and cpu.rt_period_us entries
of the root-level cgroup filesystem (see the tutorial below).
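A sketch of that test in plain C (with invented names; the kernel uses
integer fixed-point arithmetic rather than floating point) might look
like:

    /* Illustrative admission test: accept the new reservation only if
     * the total utilization (budget/period, summed over all
     * reservations) stays within the share configured for realtime
     * tasks via cpu.rt_runtime_us / cpu.rt_period_us. */
    struct reservation {
        unsigned long runtime_us;   /* budget Q */
        unsigned long period_us;    /* period P */
    };

    static int may_admit(const struct reservation *res, int n,
                         const struct reservation *new_res,
                         unsigned long rt_runtime_us,
                         unsigned long rt_period_us)
    {
        double total = (double)new_res->runtime_us / new_res->period_us;

        for (int i = 0; i < n; i++)
            total += (double)res[i].runtime_us / res[i].period_us;

        return total <= (double)rt_runtime_us / rt_period_us;
    }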
Theoretically, the EDF-based scheduling algorithm allows for
full utilization of each CPU by realtime tasks, provided that those tasks
can be properly partitioned across the CPUs. However, from a practical
perspective, this is far from being a desirable working condition.
The CBS: EDF-based Scheduling and Temporal Isolation
The deadline-based part of the IRMOS scheduler is an
implementation of a hard-reservation variant of the Constant Bandwidth
Server (CBS) algorithm, described in the paper "Integrating
Multimedia Applications in Hard Real-Time Systems" by Abeni and
Buttazzo. Let us
take a peek at how this works, focusing on a single-CPU system, where
independent reservations are scheduled.
For each reservation, in addition to the configured budget and period
values, the kernel manages a current budget and a current
deadline. Reservations are scheduled on each CPU depending on
their current deadlines, using the earliest deadline first algorithm.
The first time a reservation is activated, the current deadline is
initialized to the activation time plus the configured period,
and the current budget is set equal to the configured budget. Each time
any task in the reservation is scheduled for some time on the CPU, the
current budget is decreased by the same time value. Once the current
budget goes to zero (it may also become negative due to
non-interruptible kernel sections -- see below), the reservation is
suspended (throttled) till the next activation period, when the current budget is
refilled again to the configured value, and the deadline is moved forward
by a value equal to the period.
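In rough pseudo-C (invented names, with locking, clock details, and the
actual throttle/requeue paths stubbed out), that bookkeeping might look
like:

    /* Sketch of the per-reservation CBS bookkeeping described above. */
    typedef unsigned long long u64;
    typedef long long s64;

    struct cbs {
        u64 budget;         /* configured budget Q */
        u64 period;         /* configured period P */
        s64 cur_budget;     /* may go briefly negative (kernel sections) */
        u64 cur_deadline;
    };

    /* Stand-ins for the scheduler's throttle/requeue paths. */
    void throttle(struct cbs *c);
    void unthrottle(struct cbs *c);

    static void cbs_first_activation(struct cbs *c, u64 now)
    {
        c->cur_deadline = now + c->period;
        c->cur_budget = c->budget;
    }

    /* Called after a task in the reservation ran for 'delta' time. */
    static void cbs_account(struct cbs *c, u64 delta)
    {
        c->cur_budget -= (s64)delta;
        if (c->cur_budget <= 0)
            throttle(c);    /* suspended until the next activation period */
    }

    /* Replenishment: refill the budget, move the deadline forward. */
    static void cbs_replenish(struct cbs *c)
    {
        c->cur_budget = c->budget;
        c->cur_deadline += c->period;
        unthrottle(c);
    }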
A sample schedule is
shown in the diagram below, for two tasks with reservations of 5ms
every 9ms and 2ms every 6ms,
respectively, for an overall utilization of about 88.9%.
However, this is not yet enough to guarantee temporal
isolation among independent reservations. If one of the
reserved tasks tried to consume more CPU than allocated, then it could
potentially cause a deadline miss for another task which is, instead,
behaving according to the declared parameters:
To avoid this problem, the offending task is suspended by the kernel
until the next period:
Furthermore, whenever a reservation becomes non-runnable (e.g., all
of the attached tasks block, and some wake up later) in a way
that does not fit into the classical periodic activation pattern, we
have another potential problem. For example, if a reservation becomes runnable
too close to its current deadline, and the current deadline is not
changed, then it will be selected by the EDF scheduler as the most
urgent one to schedule, causing a potentially arbitrarily long delay
to any other reservation on the same CPU:
To mitigate this problem, when a reservation wakes up as a consequence of a task
unblocking itself, the scheduler may behave in one of two ways: if a
relatively small time has elapsed since the process blocked, then the kernel
keeps the same deadline and budget for the reservation. However, if
an excessive amount of time has elapsed, then the
kernel "resets" the deadline to the current time plus the reservation
period, and the current budget to the allocated reservation budget:
More specifically, if the remaining budget
divided by the time left until the current deadline does not exceed the
bandwidth allocated to the task (equal to the configured reservation
budget over the reservation period), then the current deadline and
budget are preserved, otherwise they are reset. See the CBS paper
for details and a formal proof that this rule ensures
temporal isolation among reserved independent task groups, regardless of
their actual temporal behavior.
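Continuing the sketch above, the wake-up rule might be coded as follows;
the comparison is cross-multiplied to avoid a division, as an
implementation typically would:

    /* Sketch of the CBS wake-up rule: keep or reset deadline/budget. */
    static void cbs_wakeup(struct cbs *c, u64 now)
    {
        /* Keep the current values only if the residual bandwidth,
         * cur_budget / (cur_deadline - now), does not exceed the
         * reserved bandwidth budget/period. (The positive-budget check
         * is a simplification for this sketch.) */
        if (now < c->cur_deadline && c->cur_budget > 0 &&
            (u64)c->cur_budget * c->period <=
                (c->cur_deadline - now) * c->budget)
            return;

        /* Otherwise reset: new deadline one period from now, full budget. */
        c->cur_deadline = now + c->period;
        c->cur_budget = c->budget;
    }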
The mechanism described above also has the desirable
property of self-synchronizing the scheduler with the
temporal behavior of realtime tasks. In fact, when a reservation is
attached to a single classical periodic realtime task, as soon as the task
wakes up in response to some (almost) periodic event, the scheduler
will typically reset its current deadline to the wake-up time plus the
period. On the other hand, such a reset is not usually done for very
short sleeps of the task during its main execution body, e.g., in case
it blocks on short critical sections for sharing data with other tasks
of the same application.
The IRMOS scheduler features hierarchical scheduling,
mixing both deadline-based and priority-based
scheduling. Specifically, POSIX priority-based realtime scheduling is
nested inside EDF-based scheduling. The situation is depicted in the
figure to the right.
When a reservation is selected to run by the partitioned
EDF-based scheduler, a global POSIX priority-based scheduling policy
decides what tasks belonging to that reservation will actually run on
each CPU. If there are M CPUs, at most the M tasks with the
highest priority (among the ones belonging to the reservation group)
are the ones which actually run. The system performs admission control
over reserved groups, so that the overall system capacity may
be properly partitioned among concurrently running activities in the
system, without overloading it. Also, the scheduler has a hierarchical
configuration capability, by which it is possible to define groups and
nested subgroups of realtime tasks with given scheduling parameters.
Further details about the IRMOS realtime scheduler are omitted for
the sake of brevity; the interested reader can refer to the paper "Hierarchical
Multiprocessor CPU Reservations for the Linux Kernel", which describes the
scheduler in more depth.
Any comments and feedback on the project from Linux users and developers
are more than welcome. The authors can be contacted through the
AQuoSA mailing lists.
The IRMOS scheduler is implemented as a partitioned scheduling
strategy: each reserved task group corresponds to a set of CBS
reservations allocated (with identical parameters) on the CBS
schedulers running independently on all of the available CPUs. The
overall design of the current sched_rt implementation does not change;
it still keeps one private run queue per CPU, and, thus, each CPU is
scheduled independently of the others.
That said, using EDF to schedule groups
and a fixed priority (FP) scheme among the tasks of each group
requires using a different representation for groups and tasks within
the run queues, so a sched_entity represents only tasks within
the group they belong to, and the EDF-related parameters (deadline,
budget, period) are kept inside the rt_rq describing the
actual run queue associated to each cgroup (note that
the rt_rq is, at the same time, the data structure enqueued
with EDF parameters into the run queue it belongs to, and the
fixed-priority queue responsible, once selected, for the
priority-based scheduling of its own tasks).
The existing code represents groups of tasks using struct
task_group objects; tasks are grouped on the basis of the cgroup
they belong to, and each task group contains an array of per-processor
run queues (rt_rqs).
Tasks are represented by their own scheduling entity. The full hierarchy
seen by the user is used internally only for admission control, as the
scheduler itself enqueues all the rt_rqs based on their deadlines
in the first-level run queue, and all the tasks are enqueued into the
priority queue of the rt_rq they belong to. On each scheduling
decision the (unthrottled) rt_rq with the smallest deadline is
selected, and its highest-priority task is executed. When the tasks
inside an rt_rq consume their assigned budget they will be throttled;
their rt_rq is dequeued from the EDF run queue, and a
timer is posted to recharge the run time, update the deadline, and
requeue the rt_rq for the next period.
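Schematically, and with invented helper names, each scheduling decision
thus boils down to something like:

    /* Sketch of the two-level pick-next decision described above. */
    struct task;
    struct rt_rq;
    struct cpu_rq;

    /* EDF level: the unthrottled group with the earliest deadline,
     * with boosted groups (tasks inside critical sections) taking
     * precedence, as discussed below. */
    struct rt_rq *earliest_deadline_group(struct cpu_rq *rq);

    /* Fixed-priority level inside the chosen group. */
    struct task *highest_priority_task(struct rt_rq *group);

    struct task *pick_next_rt_task(struct cpu_rq *rq)
    {
        struct rt_rq *group = earliest_deadline_group(rq);

        if (!group)
            return 0;   /* no runnable realtime group on this CPU */
        return highest_priority_task(group);
    }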
Using the full hierarchical setup for admission control introduces an
extra element of complexity in the interface, because, in general, for
each group, the user needs to specify the overall bandwidth assigned
to the group and to its child groups, as well as the bandwidth assigned to
the tasks belonging to the group itself. This increases the number
of parameters for each group from two to four.
To avoid priority inversion problems, the scheduler uses, as the old
throttling mechanism does, boosting;
it lets groups with tasks inside critical sections run
even if they should be throttled, charging them with the extra CPU
time consumed only after they exit the critical section. From the
implementation perspective this means that the EDF run queue also
may contain boosted groups, which are scheduled only according to
the highest priority among those of the tasks they contain; boosted
rt_rqs take precedence over the other ones.
Admission control and deadline guarantees
When considering realtime systems, we might be concerned about
how, exactly, to exploit the described realtime scheduling
policy in order to provide proper realtime guarantees to
applications. In relatively simple cases, the answer is straightforward.
For example, a classical periodic realtime task with a known worst
case execution time (WCET) of C and minimum inter-arrival period of T,
can be scheduled within a reservation with budget equal to C and reservation
period equal to T and will not miss any deadlines. Also, the admission
test in this case is the well-known Liu and Layland test for EDF
realtime tasks (sum of utilizations must be less than or equal to 1).
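For instance, two such tasks with (C, T) values of (10ms, 50ms) and
(20ms, 100ms) have a total utilization of 10/50 + 20/100 = 0.4,
comfortably within the bound.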
However, looking at realtime theory, one easily finds much
more complex realtime task models, which include activation
offsets, maximum blocking times, durations of
critical sections accessing shared resources,
etc. Also, real-world realtime applications are often complex
multi-threaded applications (think of vlc) which are very far
from behaving as foreseen by the "ideal" periodic or sporadic task
model, and whose activation times are driven by disk I/O and networking
instead of (or in addition to) timers. Furthermore, if the application
is distributed, one has usually a distributed end-to-end deadline
constraint to deal with, something out of reach for a kernel-level
scheduler alone.
Under such a challenging scenario, it is still possible to schedule
realtime applications with a simple policy based on the fundamental
principle of temporal isolation, like the one being presented in this
article, and provide the necessary guarantees. However, the admission
test becomes complex, involving long calculations which are
prohibitive for the kernel. For a list of possible admission
control tests for realtime applications scheduled with various
policies (including EDF and FP), the reader can have a
look at the proceedings of conferences dedicated to realtime scheduling, such as
the Real-Time Systems Symposium
(RTSS), the EUROMICRO
Conference on Real-Time Systems (ECRTS),
the IEEE Real-Time and Embedded
Technology and Applications Symposium, or others.
This complexity is why, in the EDF-based scheduler described above, the realtime
scheduling parameters communicated to the kernel are kept at the bare
minimum and are used in a very simple admission control test. This approach does
not try to guarantee that all admitted applications will meet
their deadlines, but rather it aims to provide to each application a
guaranteed share of the available underlying computing power,
with a precise timing granularity. Whether this is sufficient
or not for guaranteeing the performance of specific applications must be
confirmed by other means, involving a proper design methodology and
benchmarking process, possibly with the help of user-space middleware.
A short tutorial
In order to try the IRMOS realtime scheduler, you can get the latest
changes pushed to the project git repository
(currently corresponding to the 2.6.34-rc5 kernel):
git clone git://aquosa.git.sourceforge.net/gitroot/aquosa/linux-irmos
or, for the PREEMPT_RT port:
git clone git://aquosa.git.sourceforge.net/gitroot/aquosa/linux-rt-irmos
Alternatively, you can download one of the supported kernel releases
(currently, we have a patch for a recent 2.6 series), and
the corresponding IRMOS patch from the AQuoSA web site.
Also, you need to properly configure the kernel, ensuring the
required options are enabled (most of them are already enabled by
default).
If preferred, a few binary RPM/DEB kernel packages can also be
conveniently downloaded from
the AQuoSA web site.
In order to use the realtime scheduler's capabilities, you need to mount
the cgroup filesystem with something like:
mount -t cgroup -o cpu,cpuacct cgroup /cg
By default, up to 95% of the CPU power is allocated to standard POSIX
realtime tasks in the root group, which doesn't leave much left over for
reservations. So, before we can create a new group,
we need to reduce the runtime for root-level tasks, e.g., lowering it
to 200ms every 1s:
echo 200000 > /cg/cpu.rt_task_runtime_us
Now we can create a new group (as a new folder in the cgroup
filesystem) with a reservation of 10ms every 100ms:
mkdir /cg/g1
echo 100000 > /cg/g1/cpu.rt_period_us
echo 10000 > /cg/g1/cpu.rt_runtime_us
echo 100000 > /cg/g1/cpu.rt_task_period_us
echo 10000 > /cg/g1/cpu.rt_task_runtime_us
At this point, the new group has no associated tasks. We can attach a
task by writing its Linux thread id (tid) to the tasks
special file entry available in the group folder:
echo 1421 > /cg/g1/tasks
Now the attached task has only been added to the group, but it still
has its own scheduling class, defaulting to SCHED_OTHER. In
order to exploit realtime scheduling, we need to assign to the task
one of the realtime classes and a realtime priority:
chrt -r -p 20 1421
At this point, the task is running with the configured scheduling
guarantee (and limitation) of 10ms every 100ms.
Usability and AQuoSA integration
As shown above, the interface towards the new realtime scheduling
functionality is based on the cgroup filesystem. While constituting a
perfectly usable interface for scripting languages and system
administrators, this kind of interface makes programming realtime
applications which exploit the new scheduler functionality quite
cumbersome: in order to create new reservations, folders need to be
created in the cgroup filesystem; for setting scheduling parameters,
numbers need to be formatted and written to cgroup entries; for
reading them, cgroup entries need to be read back and parsed; etc.
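As a sketch of what this means in practice (the /cg mount point follows
the tutorial above; names and error handling are simplified), an
application ends up writing helpers like:

    /* Sketch: setting one scheduling parameter via the cgroup filesystem. */
    #include <stdio.h>

    static int set_cgroup_param(const char *group, const char *param,
                                long value)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/cg/%s/%s", group, param);
        f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%ld\n", value);
        return fclose(f);
    }

    /* e.g. set_cgroup_param("g1", "cpu.rt_task_runtime_us", 10000); */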
The libcgroup library may be of
some help for such issues, but it
brings non-negligible overhead into applications. This may be
especially troublesome for adaptive applications, e.g., multimedia
ones, that might need to change the reservation parameters dynamically
to follow a varying workload.
Furthermore, when changing both scheduling parameters
(runtime and period), operations need to be carried out in a proper
order which depends on the previous values of the parameters
themselves, otherwise the admission control logic may reject one of
the intermediate steps. Also, while playing with the scheduling
parameters (e.g., while tuning the application's performance), one is
forced to use intermediate configurations which are highly
undesirable. For example, when reducing both the
budget and the period by an order of magnitude, such as from (100,200)
to (10,20), one needs to reduce the runtime first,
obtaining (10,200), then the period. In the
intermediate configuration, however, the realtime task is likely to fail due
to the insufficient resources being granted.
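A sketch of the workaround applications must currently implement,
building on the helper above: the rule is simply to apply first
whichever change can only lower the group's utilization. The file names
follow the tutorial; a real application would also need to handle the
group-level parameters:

    /* Sketch: order runtime/period updates so that no intermediate
     * configuration exceeds the utilization of the start or end state. */
    static int change_params(const char *group,
                             long old_runtime, long new_runtime,
                             long new_period)
    {
        if (new_runtime <= old_runtime) {
            /* Shrinking the budget first can only lower utilization. */
            if (set_cgroup_param(group, "cpu.rt_task_runtime_us",
                                 new_runtime))
                return -1;
            return set_cgroup_param(group, "cpu.rt_task_period_us",
                                    new_period);
        }
        /* Otherwise change the period first, then grow the budget. */
        if (set_cgroup_param(group, "cpu.rt_task_period_us", new_period))
            return -1;
        return set_cgroup_param(group, "cpu.rt_task_runtime_us",
                                new_runtime);
    }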
Also, in the future, the number of parameters needed to configure the
realtime scheduler's behavior is expected to (slightly) grow. What is
needed, from an application development perspective, is an atomic way
of setting and changing them, possibly in the form of a user-space
library (or system call) interface.
However, the new scheduler is being integrated into the
AQuoSA open-source project
(Adaptive Quality of Service Architecture), which makes a
reservation API and adaptive reservations available to application developers
as a dynamically linkable library. AQuoSA provides a user-space
library which implements its
API on top of the cgroup-based operations needed to deal with the
IRMOS scheduler, easing the task of coding applications that want to
use it. More details on the AQuoSA integration can be
found on the AQuoSA web site.
Relationship with SCHED_DEADLINE
In addition to the IRMOS realtime scheduler, the Real-Time Systems
Laboratory of Scuola Superiore Sant'Anna also collaborated
with Evidence in the
implementation of another EDF-based realtime scheduler for Linux:
SCHED_DEADLINE.
It is natural to wonder how these two schedulers differ:
SCHED_DEADLINE allows for having one single task attached to an EDF
reservation; this raises such issues as what to exactly do if the EDF
task forks. Options range from setting the policy of the child to
SCHED_OTHER, to setting it to SCHED_DEADLINE with the original
bandwidth split in half between the parent and child, to providing the
child with an initial bandwidth (budget, or runtime) of zero, as
happens by default in the latest implementation.
The IRMOS scheduler has hierarchical scheduling capabilities, in that
it allows for having a full-fledged POSIX realtime (sub-)scheduler
nested inside each EDF-based reservation: when an EDF thread
forks, the child keeps the same class (SCHED_FIFO or SCHED_RR)
and priority as the parent and they both keep sharing the same EDF
reservation. This allows for easy encapsulation of entire complex
multi-threaded software components (e.g., an Apache web server, a KVM
instance, etc.), but it also works with "traditional" realtime software
components, such as a set of a few realtime threads with realtime priorities
that together constitute a single realtime component on the system (e.g., a
control thread along with the IRQ threads related to its own I/O towards the
controlled peripherals). If a strong assessment of
the realtime schedulability of the system is needed, it is possible
to use hierarchical realtime scheduling theory in order to analyze
whether or not the individual RT threads will meet their deadlines.
SCHED_DEADLINE uses a new system call interface,
sched_setscheduler_ex(), which only allows for creating a
reservation attached to a task.
The IRMOS scheduler exploits the realtime throttling cgroup-based
interface which was already there in the kernel, thus a new empty
reservation is created by creating a folder in the cgroup filesystem,
tasks are attached by adding their TIDs to the tasks file,
the runtime and period are set by echoing their values into the
corresponding file entries in the group folder, etc.
SCHED_DEADLINE supports partitioned EDF, and we have a global
EDF implementation [PDF] made as part of the Master's Degree Thesis in
Computer Engineering of Juri Lelli.
In the IRMOS scheduler, reservations apply to all CPUs, and it
supports Global Fixed Priority tasks nested inside partitioned EDF-based
reservations. If a RT task exhausts the budget (runtime) of the EDF
scheduler over a CPU, it can still run exploiting the budget available
in the same reservation over another CPU (migrating the task or the
bandwidth are both available options). However, affinity masks can
be used to control more precisely the CPUs over which realtime tasks
will be able to migrate.
SCHED_DEADLINE is implemented as a completely new scheduling class,
while the IRMOS scheduler is a modification to the already available POSIX
realtime scheduling classes.
SCHED_DEADLINE also supports soft (work-conserving) resource reservations,
while the IRMOS scheduler does not (however, it is planned as future work).
Even if the two projects are currently completely separate, there is
a good basis for having a common EDF-based realtime scheduling
infrastructure, which might be driven through different user-space
APIs in the two cases.
In the future, we plan to improve the scheduler on various fronts.
Concerning the user-space API, the cgroup-based
interaction model has already proved to suffer from major limitations. For
example, the absence of an atomic way to set multiple
scheduling parameters at the same time constitutes a major limitation of
the current interface.
Also, we plan to develop more options in the realtime scheduling
model, such as:
- Adding soft resource reservations, allowing for
work-conserving reservations coexisting with non-work-conserving ones.
- Improving the access-control model, making the
realtime scheduling capabilities more easily accessible to
unprivileged applications.
- Adding the possibility to specify a desired budget
(run time), in addition to the minimum guaranteed one subject to
admission control, which could be used for implementing adaptive
reservations; these, in turn, could be useful for applications showing
significant workload fluctuations at run time.
- Adding some form of deadline inheritance for better addressing
the well-known priority inversion problem, e.g., by means of the
Multiprocessor BandWidth Inheritance (M-BWI) protocol, or some variation on it.
All of the above improvements would go in the direction of enhancing
the usability of the realtime scheduler for common multimedia
applications. These would be the applications taking most of the
benefit from the realtime scheduling capabilities of
the Linux kernel described above.
Comments (4 posted)
Patches and updates
Core kernel code
Filesystems and block I/O
- Mimi Zohar: EVM (July 30, 2010)
Virtualization and containers
Benchmarks and bugs
Page editor: Jonathan Corbet
Debian and Ubuntu have a set of official membership roles that can be granted to
regular contributors. Those roles come with rights that enable the
contributors to do their work and to participate in the project governance
(elections and other official decision-making processes). It's also a way
for the distributions to acknowledge the work done: most contributors are
proud of the status they reached.
The membership structure plays an important role in the development
of a distribution: it defines the kind of contributors that
are welcome in the project, it sets expectations of the project towards
its contributors and defines their rights. In the end, this shapes
the project's ability to recruit new contributors to keep the project
alive and kicking.
This article introduces the existing statuses in Debian and Ubuntu, and
defines the — sometimes confusing — jargon associated with them.
The Debian case
Debian only has two types of official members: Debian Developers
(DD) and Debian Maintainers (DM). The rights of the developers
are codified in the Debian Constitution
while those of the maintainers have been defined in a
general resolution of
2007. The Debian Maintainer status is still mostly documented in
a wiki page. The
integration of this new status in Debian's official processes has been
slow to come largely because it was introduced — at that time —
without enough negotiation with the involved parties.
Nowadays, it is preferred that people get the DM status before applying for DD.
DM is a very limited role: maintainers can only upload packages that
already have their name on them (either in the Maintainer or Uploaders field)
and a specific flag (DM-Upload-Allowed: yes) that only Debian Developers
can add. They have no other rights and limited access to Debian's
resources.
Besides those official roles, there are also maintainers of packages
that have no official status within Debian except that they are listed in
the "Maintainer" field of the package. They are doing the
maintenance work but all uploads are done by a Debian Developer
after verification of the work done (this is called "sponsorship" and is
the only way to start with official packaging work). Once the
DD trusts the maintainer, the developer will typically ask the maintainer
to apply for DM status in order to be relieved from the sponsorship work.
In the end, that makes three different kinds of package maintainers and a
lot of confusion when you discuss membership issues... in particular when
you know that the New Maintainer process is the path that you follow to
become a Debian Developer. Don't be fooled by the names when reading
Debian's documentation.
Developer roles in Ubuntu
Ubuntu has had, from the start, an official Ubuntu
Member status that includes all contributors: developers of course,
but also documentation writers, artists, translators, etc.
This status notably grants the right to vote in elections of the Community
Council, the right to participate on Planet Ubuntu, and an @ubuntu.com email
alias. For developers,
the situation is more complicated: the wiki page lists no less than five
different statuses. Initially, developers were split between Ubuntu Core
Developers and the MOTU (Masters Of The Universe). The
latter were responsible for the universe/multiverse sections of the archive
while the former also had upload rights for the main/restricted sections.
But, inspired by the Debian Maintainers concept and facing concrete
problems in terms of archive management, they changed their infrastructure
to offer more fine-grained control of package uploads.
Ubuntu can now grant upload permissions on a package-per-package basis, but it can also
delegate the right to grant upload permissions with the same granularity.
This led to the new Per-Package Uploader status, which is simply
an Ubuntu Member with upload rights on a limited set of packages
where they have a specific expertise. The more generic Ubuntu
Developers status now encompasses members of various development
teams that have been delegated the right to manage
upload permissions on a (usually large) package set (the current teams
are Ubuntu Desktop, Mythbuntu, Kubuntu, and Edubuntu). Those teams can
define their own policy to add new members provided they follow the basic
rules defined by the Developer Membership Board (see this wiki
page).
Ubuntu Contributing Developer is an intermediate status
for someone who is not yet ready for one of the other developer statuses
but who has still shown enough commitment to be an Ubuntu Member.
All those statuses can be obtained
in a similar way: you prepare a wiki page listing your past
contributions, you collect testimonials from existing members that you
have worked with, you add yourself in the agenda of the next meeting of
the board (or council) that grants the status that you seek, and you
attend the meeting. The members of the board will decide whether you are
ready for the status (or not) based on what you provided in the wiki,
based on your answers during the meeting (and on a mailing-list for
developers), and based on what others have to say about you.
The most important boards are usually elected by the
community while others are commonly appointed by the community
council. Those governance bodies include Canonical employees but not as
many as one would expect: two out of eight in the Developer
Membership Board, two out of eight in the Community Council, but
all six members of the Technical Board.
The last figure is not surprising given that the members of the technical
board are appointed by Mark Shuttleworth. The community does not get to
choose, it can only approve the choice by a confirmation vote. The founder
of the distribution clearly wants to keep control on the technical
directions of the project. He's also the only person to have a permanent
seat on both the Community Council and the Technical Board.
Comparison of the statuses between Debian and Ubuntu
The following table summarizes the rights given to each developer role in
the two projects (DM: Debian Maintainer, DD: Debian Developer, UM: Ubuntu
Member, PPU: Per-Package Uploader, MOTU: Master Of The Universe, UCD:
Ubuntu Core Developer).
| Right | DM | DD | UM | PPU | MOTU | UCD |
| Package maintenance via sponsorship | Y | N/A | Y | Y | Y | N/A |
| Official email alias | - | Y | Y | Y | Y | Y |
| Participate in votes for Debian's decisions | - | Y | - | - | - | - |
| Participate in votes for Ubuntu's boards | - | - | Y | Y | Y | Y |
| Upload rights restricted to pre-approved packages | Y | - | - | Y | - | - |
| Upload rights restricted to a section of the archive | - | - | - | - | Y | - |
| Unlimited upload rights | - | Y | - | - | - | Y |
| Number of contributors (as of 2010-07-27) | 117 | 904 | 462 | 27 | 85 | 63 |
Please note that the numbers of contributors are not 100% accurate for Ubuntu.
A contributor can have multiple statuses (direct membership in a launchpad group)
granted over time (while gaining experience). The problem has been mostly avoided
by calculating differences between the number of members of the various groups, but
it's not perfect and it can't be: some MOTU are also PPU for packages in main
and that's legitimate (but I only counted them as MOTU and not as PPU).
Another limitation is that members of some administrative teams are included
indirectly in many teams and thus appear in the count when they should not.
Anyway, this simple table makes it obvious that Ubuntu's structure offers
a broader choice of statuses. They acknowledge the work of all contributors
from the start while still giving the most critical rights only to those
who have proven that they deserve them. Despite this difference, Debian still has a
significant advantage in terms of number of developers. That number does
not tell the whole story though: the Ubuntu contributors include many
Canonical employees (e.g. 36 out of 63 core developers have a
@canonical.com email registered on their launchpad account) that are
likely to spend more time working on the distribution than the average
Debian member. But even if comparing person-hours would be a challenging
thought experiment, in practice it's of little interest as long as both projects
continue to cooperate and more and more of the contributions flow in
both directions.
Debian is aware of the shortcomings of its structure. Changes
to better accommodate non-packagers have been discussed several times
already. The last
efforts in that direction were unfortunately perceived as a
solution ready to go rather than a proposal to be discussed, and the
proposal got quickly buried by a general
resolution (GR). Even if that resolution invited further discussion
and a new proposal, the truth is that when someone's initiative is
"corrected" by way of GR, it usually kills any motivation to go
further.
and they don't expect any further change in the near future. They
do plan, however, to expand usage of those new features so that more
teams benefit from the possibility to control upload rights on packages
that are relevant to them, and so that more individual developers apply to
become Per-Package Uploaders on packages that they know very well.
On the Debian side, a recent discussion on
the debian-project list brought back
the topic of the bad
terminology and it was agreed that the "New Maintainer
process" should be renamed into something else ("New Developer process"
has been suggested). But Christoph Berg — Debian Account Manager and hence
heavily involved in the New Maintainer Team — suggested
that Debian would be better off implementing the long-awaited membership changes
before trying to update all the documentation. It would certainly
imply some more vocabulary updates. Later in the discussion, he confirmed
that membership reform is at the top of the TODO list of the new
maintainer team (just after the rewrite of the nm.debian.org website).
What can be expected from this reform? The following answers are my own
guesses based on my experience of Debian, but the project hasn't decided
anything yet.
First of all: a new status for contributors who are not packagers.
The tricky part will be defining the process to follow and the rights
granted. Then, changes to the technical implementation of the DM status: the
current implementation does not allow giving upload rights to a single
DM if two are listed as Uploaders of the same package (and both might not
have the same experience with that package). Furthermore, it suffers from
annoying restrictions like the inability to upload new binary packages.
Finally, a change of the Debian constitution to integrate those new statuses
is also likely.
Other, more invasive, changes have been proposed, like replacing the NM
process with a simple designation by other DDs, but that's unlikely to happen. The NM
process can already be greatly simplified by the application manager if the
applicant can show good testimonials from other developers and if he has a
track record of real contributions (e.g. as witnessed by changelog
entries in Debian packages).
Almost two years have elapsed since the previous efforts in that
direction; the new maintainer team has recruited new members and is in
generally better shape. DebConf is approaching (August 1-7) and has
traditionally been a good place to discuss important reforms. Hopefully,
the next episode of this saga will have a better outcome.
Comments (13 posted)
Time for yet another MeeGo 1.0 release announcement: this time they have produced the first version of the "in-vehicle infotainment" version of the distribution. "As part of this release, we are including a sample IVI Home Screen and taskbar, using the included Qt framework, and designed with Automotive Center Console HMI requirements in mind. We have also included some automotive specific middleware components and a few sample applications, including a sample navigation program (Navit) and a sample dialer application (BT-HFP Dialer) which uses Bluetooth and a paired phone."
Comments (none posted)
The 1.0 release of the Jolicloud netbook-oriented distribution has been
announced. It has various new features which will doubtless appeal to
certain classes of users.
"As you may have seen, we've made some changes to the Stream, making
it a more social app-sharing experience. Now, in Jolicloud 1.0, you can
'like' apps, which will show up in your Stream. It will also list the app
in your 'Favorite Apps'. Don't forget to use this new feature - it's
helpful for other users to figure out what the most popular apps are in our
growing App Center.
Comments (8 posted)
I think it would be good to have a [strategy] proposal focusing on end user products, on something aunt Tilly can work with. openSUSE could be a distribution aiming for polish, the final touch. Working on creating a great end user product. And both the Gnome and KDE people would be able to work with it, as would the Apache team, the Kernel team and all others in the community!
-- Jos Poortvliet
Comments (none posted)
The North American FUDCon (Fedora Users and Developers Conference) for 2011
will be held in Tempe, Arizona January 29-31, 2011. "Our last North
American FUDCon was in Toronto, Canada. The year previous it was in
Boston, MA. We always encourage feedback from the contributors, and the
one answer that popped up more often than any other was, 'Let's go
somewhere warm this coming winter!'
Full Story (comments: none)
Red Hat Enterprise Linux
Red Hat has announced that Red Hat Enterprise Linux 3 will receive another
3 months of support before it reaches its end of life. "In
accordance with the Red Hat Enterprise Linux Errata Support Policy, the
regular 7 year life-cycle of Red Hat Enterprise Linux 3 will end on October
31, 2010. After this date, Red Hat will discontinue the regular
subscription services for Red Hat Enterprise Linux 3. Therefore, new bug
fix, enhancement, and security errata updates, as well as technical support
services will no longer be available.
Full Story (comments: 3)
The H reports on the Illumos announcement;
Illumos will be an all-free downstream derivative of OpenSolaris. "Illumos is endorsed and supported by Nexenta, Joyent, Greenviolet, Belenix, Schillix, Berlios and Everycity in its efforts to create a freely available SunOS derivative. [Garrett] D'Amore emphasises that Illumos is not a fork, but the work being done will empower the community to fork if needed in the future. He believes the project already has the critical mass necessary to sustain the engineering effort needed."
Comments (62 posted)
Newsletters and articles of interest
Comments (none posted)
Page editor: Rebecca Sobol
Two new open source projects were unveiled to the public in the closing
days of July, both from companies better known for producing proprietary
products: British mobile network provider Vodafone and US defense
contractor Lockheed Martin. Vodafone released the source code to its Wayfinder Navigator mobile
mapping-and-routing system, several months after announcing the end of the
commercial Wayfinder product line. Lockheed Martin launched an "enterprise
social network" system called EurekaStreams, which the company
continues to use internally and suggests will become a commercial product
in the future. The two products could hardly be more different, and
neither could the two approaches taken by the projects.
Wayfinder Navigator was a client-server navigation system; the maps were
stored on the server, and the server searched for addresses and calculated
routes, relaying its results to the handheld client. Vodafone purchased
Wayfinder Systems AB in 2008, and sold subscriptions to the Wayfinder
system to its mobile customers as a revenue stream.
As mapping enthusiasts would no doubt guess, that business model began
to lose money once competition from completely free services like Google
Maps picked up. Vodafone announced
Wayfinder's end-of-life in March, refunding the balance of subscriptions
already paid by customers.
The open source project was announced
on July 13, hosted at oss.wayfinder.com. The initial code release included
the back-end server, client code for Android, iPhone, and Symbian S60
devices, a partially-finished client application written in C++, and a
suite of tools for performing map conversion and managing a cluster of
Wayfinder servers. All of the code was placed under a BSD-style license,
and the source put into public
repositories at GitHub. Subsequently, a sample set of Tele Atlas map
data was released, under separate license
terms that permit its use only for software development.
The server code as posted has been modified to use OpenStreetMap map data instead
of the proprietary Tele Atlas maps that had been used by Vodafone. The
project site cautions that differences between the maps may make routing
unreliable at present. Vodafone engineers posted to the Wayfinder forum
that there is no more code on the way, particularly not the Maemo Wayfinder
client that shipped for Nokia's N800 and N810 Internet tablets. The reason
given is that the Maemo development team had long since departed, and
Vodafone could not perform the audit necessary to ensure that the client
code was entirely its own to release.
As for the development of the code base from this point forward,
Vodafone's intentions are murky. The GitHub repositories are read-only for
the present, although the FAQ
page solicits patches by email, and suggests that outside contributors
can request write access. There does seem to be a small contingent of
Wayfinder developers monitoring the project wiki, discussion forum, and
blog, willing to answer questions. The project's documentation is also thorough,
covering server-side installation, client compilation, map conversion, and,
more importantly, documenting the server functions, API, and client-server
communications protocol. On the down side, there have not been any commits
to the GitHub code by Vodafone developers since the first release.
EurekaStreams is a web-based social network designed for use across a
large enterprise. Lockheed Martin developed it in-house, and uses it for
internal communications. The public "community" EurekaStreams project was
launched on July 28. Its code is also hosted at GitHub, and all
code is placed under the Apache 2.0 license.
By "enterprise social network," EurekaStreams means a single site
hosting individual user accounts and permitting an array of in-network
messaging and group collaboration methods. On the surface, the
EurekaStreams site featured in the marketing and demonstration seems to
take its cues from Facebook, although it is intended to run within a single
company network and lacks several functions that commercial services
feature, from ad serving to instant messaging.
Digging into the documentation, one finds the emphasis
placed on conversation streams, with a feature list that falls somewhere in
between that of a basic microblogging platform and a full-blown
bells-and-whistles site like those offered by Facebook and MySpace.
EurekaStreams supports "follow" relationships (as opposed to bilateral
"friend" relationships) between users, detailed profile pages, groups,
lists, and searchable hash tags inside status updates.
The public messages are not limited to plain status updates, however;
link- and media-embedding is allowed, as is posting to "group streams"
instead of the user's personal stream, and the system will gladly consume
any publicly-retrievable feed, such as Google Reader or Del.icio.us
bookmarks. The messaging format is based on OpenSocial 0.9, which also makes
writing embeddable "social gadget" web applications possible. The
documentation discusses this, but does not provide any OpenSocial gadgets
in the default setup.
EurekaStreams is designed to run on Apache, and uses several well-known
open source components: Lucene for
search, Memcache for caching, PostgreSQL and Hibernate for storage, and Shindig for the OpenSocial
implementation. The documentation provides a thorough specification of the
system's architecture, from the conceptual design right down to the
persistence and notification frameworks.
On the development end, the project is demonstrably still active. There
has been ongoing activity since the project's release, reportedly from
Lockheed Martin's original EurekaStreams development team. There are more
than 130 open issues in the bug tracker, which is publicly accessible, and
an active mailing list on which Lockheed Martin engineers interact with
third parties. One of the EurekaStreams developers said on his blog, however, that
write access to the source repository itself will remain closed to outside
contributors for the foreseeable future.
The value for the open source ecosystem
As is always the case when a large entity launches a new open source
project with a substantial code base, the big question remains what impact
either of these newcomers will have on the existing open source
ecosystem.
The open source Wayfinder certainly has a lot of competition in the free
mapping software space. There are numerous navigation applications for
desktop Linux systems as well as for Android, Maemo, and MeeGo. Wayfinder
is different in that it provides a routing engine, which most open source
clients do not. The routing algorithm is also documented,
which could lead to its transplantation into other projects. Wayfinder also
reportedly enjoyed a good reputation for its accessibility to hearing- and
seeing-impaired users, an area in which the open source competition does
not receive high marks, and the code base covers two mobile platforms not
often given attention by projects that begin as open source: iPhone and
Symbian.
On the other hand, Wayfinder is also unique in its client-server design;
most of the mobile open source applications are either client-only or
depend on calculating routes and adding map tiles at a desktop computer in
advance. The client-server model might be more convenient for those users
with low-resource handheld devices, but a completely free routing server
will not be easy to deploy. The OpenStreetMap project itself restricts API
usage in order to keep its hosting costs under control — anyone running a
free OSM-based mobile service on a large scale either has to overcome the
local map storage problem or take on the cost of running a Wayfinder routing
server.
EurekaStreams is most often compared to StatusNet in open source social
networking circles. In many ways, the feature sets do overlap, especially
given both projects' emphasis on real-time notification passing.
EurekaStreams adds a bit more than microblogging, with a richer user
profile system, built-in link-sharing, lists, and saved search streams.
For its part, StatusNet supports plugins that can add many
of those features, including expanded profiles, bookmark sharing, and file
attachments other than images.
But StatusNet is also built around the OStatus protocol, which supports several
features that are not currently implemented in StatusNet itself. OStatus
incorporates ActivityStreams, which
includes several message types beyond basic status updates, including
"follow," "share," and "tag" alerts. Even more importantly, OStatus
supports federation between independent sites, through the use of Salmon and PortableContacts. The
EurekaStreams project sells potential customers on its highly-scalable
architecture, but by all accounts, it still implements a social network
designed to pass messages solely inside its own four walls.
Some early assessments of the open source Wayfinder project deemed it a
code dump, with Vodafone setting its unwanted software out on the curb. If
it is, and Vodafone has no intention of maintaining the code base, there is
a window for outside developers to take up interest in the project before
it falls into obsolescence. It is probably too early to make a final
judgment on that; to its credit, the company has at least been monitoring
the project's activity and says it is open to outside contributors.
Without full-time development, though, third-party developers may find it
more attractive to sift through the GitHub repository for bits of reusable
code simply to take home to their own applications.
EurekaStreams, in contrast, is far from finished. Not only does
Lockheed Martin use it internally, but it appears to be undergoing constant
development, and the company openly discusses plans to sell paid versions
of the package to enterprises. Whether or not that proves a
viable business model is yet to be seen. The social networking space is
very young, and the competing open source projects even disagree on
standards. EurekaStreams is certainly an impressive-looking stack at
present; developers looking to start their own social networking system
from scratch (such as Diaspora) could save themselves a lot of time and
effort by at least taking a hard look at what it offers now.
Comments (3 posted)
Audio hackers unfortunately don't grow on trees. In my counting,
there are 3 people paid in the whole industry who work on general
purpose audio infrastructure of Linux. Two of them are basically
busy with keeping the HDA driver up-to-date, if I am correctly
informed. The third one is me.
-- Lennart Poettering
Over the past few months, I've noticed something: Subversion
development is fun again. I don't know if it's the fact that we're
all working hard on a difficult problem, or that the code is
changing rapidly, that we've got some new blood in the project, or
simply that we're headed into the final push toward a landmark
release.
Whatever the cause, thanks for making me look forward to hacking code
-- Hyrum K. Wright
I'm right now listening music with my newest music app on my phone:
Rockbox plays music on my HTC legend... I'm overly happy that I've
gotten that far now. I'm looking forward to finally bring gapless
playback, dynamic playlists and an extensive equalizer (and more)
to Android! Although there's a lot of work still to do.
-- Thomas Martitz
Less and less sites work with Gnash everyday, and nobody
cares. People just install the Adobe plugin, even most of our free
software supporting friends.
-- Rob Savoye
Comments (1 posted)
When Dave Neary announced his GNOME Census report, he stated that the full
report would only be available to paid customers until October, when it
would be released under the CC Attribution-Sharealike license. Things have
changed, though, and the
full report is now available to all
. "Why the change of heart?
My intention was never to make a fortune with the report, my main priority
was covering my costs and time spent. And after 24 hours, I've achieved
that. I have had several press requests for the full report, and requests
from clients to be allowed to use the report both with press and with their
own clients." The report may be downloaded via this link.
Comments (6 posted)
Version 0.7 of OCRFeeder - a tool for analyzing the layout of documents and
performing optical character recognition on the text parts - has been
released. Changes include a number of user interface improvements, a
number of accessibility improvements, and a new "deskew images" action.
Full Story (comments: none)
The Python development team has announced the first alpha release of Python
3.2. "Since PEP 3003, the Moratorium on Language Changes, is in
effect, there are no changes in Python's syntax and built-in types in
Python 3.2. Development efforts concentrated on the standard library and
support for porting code to Python 3.
Full Story (comments: none)
Pyvm is an experiment in building an entire user-space implementation from
scratch using a Python-like language. Version 3.0 has just been released. "Currently, in 80k lines of code one can find: web browser,
font rasterizer (TTF & Type1), PDF viewer, git, PGP, SSH,
windowing environment that can run on linuxfb, audio/video
player (requires ffmpeg) and many other applications in
compact simplified implementations.
Full Story (comments: none)
The first of a regular series of Rakudo Star releases has been
announced. "Rakudo Star is
aimed at 'early adopters' of Perl 6. We know that it still has some bugs,
it is far slower than it ought to be, and there are some advanced pieces of
the Perl 6 language specification that aren't implemented yet. But Rakudo
Perl 6 in its current form is also proving to be viable (and fun) for
developing applications and exploring a great new language. These 'Star'
releases are intended to make Perl 6 more widely available to programmers,
grow the Perl 6 codebase, and gain additional end-user feedback about the
Perl 6 language and Rakudo's implementation of it.
" It's built on
the Rakudo Perl 6 compiler, the Parrot virtual machine, and an initial
set of library modules.
Comments (46 posted)
Newsletters and articles
Comments (none posted)
Jono Bacon responds to the GNOME census
and the criticisms of Canonical which have followed. "What the report doesnt take into account are upstream contributions that are built on the GNOME platform but (a) not part of official GNOME modules, and (b) hosted and developed elsewhere, such as Launchpad. As such, while the report is accurate for showing code and contributions accepted into GNOME, there are also many projects built on GNOME technology that are not taken into account due to non-inclusion in GNOME modules or being developed outside of GNOME infrastructure.
Comments (22 posted)
Page editor: Jonathan Corbet
Groklaw has the ruling
from Conservancy v. Best Buy. "One of the defendants was
Westinghouse Digital Technologies, LLC, which refused to participate in
discovery. It had applied for a kind of bankruptcy equivalent in
California. Judge Shira Scheindlin of the Southern District of New York has
now granted Software Freedom Conservancy, a wing of Software Freedom Law
Center, triple damages ($90,000) for willful copyright infringement,
lawyer's fees and costs ($47,865), an injunction against Westinghouse, and
an order requiring Westinghouse to turn over all infringing equipment in
its possession to the plaintiffs, to be donated to charity. So, presumably
a lot of high-def TVs are on their way to charities.
" Given that
the defendant is in bankruptcy, one should not hold one's breath waiting
for those TVs, but, as the article points out, this ruling cannot fail to
get the attention of the other defendants.
Comments (20 posted)
Articles of interest
Reddit has posted an
extensive interview with Richard Stallman. "The
main shortcoming of Linux is at the level of device support. The
obstacle there isn't a lack of ability among Linux developers, but
rather the use of devices whose specs are secret.
Finishing the HURD would not advance us at all in supporting these
devices. The work that is needed is at the driver and firmware level.
That's why our high priority task list includes items relating to free
drivers, but not the HURD.
Comments (14 posted)
A report on iTnews
says that Oracle has abruptly shut down a set of
servers used to perform quality assurance on PostgreSQL releases.
"Sun Microsystems - and for a short time its new owner Oracle - had
provided three member servers to ensure PostgreSQL was stable on the
Solaris operating system. The development of PostgreSQL had been supported
by Sun - which contributed DTrace support, amongst other features to the
database platform. At the start of July, Oracle shut down its three
PostgreSQL build farm servers without warning, leaving the PostgreSQL
community rushing to find replacements.
Comments (21 posted)
Roland Mas and Raphaël Hertzog have announced their intent to
translate their French Debian administration book into English. "The
resulting book will be published under a Debian Free Software
Guidelines-compliant license if we manage to collect the 25000€ that
will allow us to complete this translation in good conditions.
Full Story (comments: 1)
O'Reilly has released "HTML5: Up and Running" by Mark Pilgrim.
Full Story (comments: none)
O'Reilly has released "Head First WordPress", A Brain-Friendly Guide to
Creating Your Own Custom WordPress Blog, by Jeff Siarto.
Full Story (comments: none)
The Free Software Foundation Europe Newsletter for August 2010 is out.
"The focus of this edition is Free Software in the public sector: on
a national level within the United Kingdom, in the Italian region of Bozen,
and in the Austrian city of Linz. We introduce a new definition and
mnemonic of Open Standards, and invite you to participate in upcoming local
Free Software events.
Full Story (comments: none)
The July issue of the CE Linux Forum Newsletter covers Embedded Linux
Conference Europe speakers, the LinuxCon Japan schedule, and the eLinux wiki.
Full Story (comments: none)
Education and Certification
The Linux Professional Institute (LPI) has launched the "Virtualization and
High Availability" exam elective for their LPIC-3 certification program.
"Exam development for the LPI-304 "Virtualization and High
Availability" included extensive consultation with industry, a global Job
Task Analysis survey amongst IT professionals, and close to a hundred
"beta" exams at a series of special events around the world. These "beta"
exams were offered to qualified IT professionals in order to establish the
psychometric data necessary to ensure exam questions and objectives were
relevant and of the highest quality.
Full Story (comments: none)
Developers working on Battle For Wesnoth recently held an IRC meeting to
discuss the Apple App Store
and other situations which make GPL license compliance hard.
"The point was generally agreed that this meeting was about where our
boundaries are, and not specifically about Apple; other platforms we
are considering present similar issues. Android (with AT&T's apparent
removal of the 'allow third party apps' button) and PalmOS were
specifically mentioned as imminent porting targets." Various
proposals can be found in the minutes, but no decisions have yet been made.
Full Story (comments: 14)
The GNOME and KDE projects have announced
that they will be holding a joint desktop summit in Berlin in August, 2011. "The 2011 Desktop Summit will build on the first Summit's success. More than 1,000 contributors from more than 50 countries are expected to attend the 2011 event in Berlin. In addition to members of the GNOME and KDE development community, the conference will also attract many participants in the overall FLOSS community from local projects, organizations, and companies.
Comments (none posted)
Events: August 12, 2010 to October 11, 2010
The following event listing is taken from the LWN.net Calendar.
- Debian Day Costa Rica (Desamparados, Costa Rica)
- Conference for Open Source Coders, Users and Promoters
- Free and Open Source Software Conference (St. Augustin, Germany)
- Waco, TX, USA
- LinuxCon Brazil 2010 (São Paulo, Brazil)
- Free and Open Source Software for Geospatial Conference
- DjangoCon US 2010 (Portland, OR, USA)
- CouchCamp: CouchDB summer camp (Petaluma, CA, United States)
- Ohio Linux Fest (Columbus, Ohio, USA)
- Open Tech 2010
- Open Source Singapore Pacific-Asia Conference
- 3rd International Conference FOSS Sea 2010
- X Developers' Summit
- Italian Debian/Ubuntu Community Conference 2010
- Software Freedom Day 2010 (Portland, OR, USA)
- Open Hardware Summit (New York, NY, USA)
- BruCON Security Conference 2010
- PyCon India 2010
- Workshop on Self-sustaining Systems
- Japan Linux Symposium
- 3rd Firebird Conference - Moscow
- Open World Forum
- Firebird Day Paris - La Cinémathèque Française
- Open Video Conference (New York, NY, USA)
- Foundations of Open Media Software 2010 (New York, NY, USA)
- IRILL days - where FOSS developers, researchers, and communities meet
- Utah Open Source Conference (Salt Lake City, UT, USA)
- Free Culture Research Conference
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol