LWN.net Weekly Edition for October 1, 2009
LinuxCon: Kernel roundtable covers more than just bloat
If you have already heard about the kernel roundtable at LinuxCon, it is likely due to Linus Torvalds's statement that the kernel is "huge and bloated". While much of the media focused on that soundbite, there was quite a bit more to the panel session. For one thing, Torvalds definitely validated the impression that the development process is working better than it ever has, which has made his job an "absolute pleasure" over the last few months. In addition, many other topics were discussed, from Torvalds's motivations to the lessons learned in the 2.6 development series—as well as a bit about bloat.
![[Roundtable]](https://static.lwn.net/images/lc-roundtable-sm.jpg)
The panel consisted of five kernel developers: Torvalds, Greg Kroah-Hartman of Novell, Chris Wright of Red Hat, Jonathan Corbet of LWN, and Ted Ts'o of IBM (and CTO of the Linux Foundation) sitting in for Arjan van de Ven, who got held up in the Netherlands due to visa problems. James Bottomley of Novell moderated the panel and set out to establish the ground rules by noting that he wanted to "do as little work as possible", so he wanted questions from the audience, in particular those that would require answers from Torvalds as "he is sitting up here hoping to answer as little as possible". Bottomley was reasonably successful in getting audience questions, but moderating the panel probably took a bit more effort than he claimed to be looking for.
Innovative features
Bottomley began with a question about the "most innovative feature" that went into the kernel in the last year. Wright noted that he had a "virtualization slant", so he pointed to the work done to improve "Linux as a hypervisor", including memory management improvements that will allow running more virtual machines more efficiently under Linux. Corbet and Ts'o both pointed to the ftrace and performance counters facilities that have been recently added. Tracing and performance monitoring have both been attacked in various ways over the years, without getting into the mainline, but it is interesting to see someone approach "the problem from a different direction, and then things take off", Corbet said.
Bottomley altered the question somewhat for Kroah-Hartman, enquiring about the best thing that had come out of the staging tree that Kroah-Hartman maintains. That seemed to stump him momentarily, so he mentioned the USB 3.0 drivers as an innovative feature added to the kernel recently, noting that Linux is the first OS to have a driver for that bus, when hardware using it is still not available to buy: "It's pretty impressive". After a moment's thought, though, Kroah-Hartman pointed out that he had gotten Torvalds's laptop to work by using a wireless driver from the staging tree, which completely justified that tree's existence.
Ts'o also noted the kernel mode setting support for graphics devices as another innovative feature, pointing out that "it means that the X server no longer has to run as root—what a concept". He also suggested that it made things easier for users, who could potentially get kernel error messages in the event of a system hang without having to hook up a serial console.
Making it easy for Linus
Torvalds took a "different tack" on the question, noting that he was quite pleased with "how much easier my job has been getting in the last few months". He said that it is a feature that is not visible to users, but it is the feature that is most important to him, and that, in the end, "it improves, hopefully, the kernel in every area".
Because subsystem maintainers have focused on making it "easy for Linus" by keeping their trees in a more mergeable state, Torvalds has had more time to get involved in other areas. He can participate in more threads on linux-kernel and "sometimes fix bugs too". He clearly is enjoying that, especially because "I don't spend all my time just hating people that are sending merge requests that are hard to merge".
Over the last two merge windows (including the just-completed 2.6.32 window), things have been going much more smoothly. Smooth merges mean that Torvalds gets a "happy feeling inside that I know what I am merging — whether it works or not [is a] different issue". In order to know what he is merging, Torvalds depends on documentation and commit messages in the trees that outline what the feature is, as well as why people want it. In order to feel comfortable that the code will actually work, he relies on his trust that the person whose tree he is merging will "fix up his problems afterwards".
Motivation
The first question from the audience was directed at Torvalds's motivation, both in the past and in the future. According to Torvalds, his motivation for working on the kernel has changed a lot over the years. It started with an interest in low-level programming that interacted directly with the hardware, but has slowly morphed into working with the community, though "I shouldn't say 'the community', because when anyone else says 'the community', my hackles rise [...] there's no one community". It is the social aspect of working with other people on the kernel project that is his main motivation today, part of which is that "I really enjoy arguing".
Torvalds's technical itch has already been scratched, so other things keep him going now: "All of my technical problems were solved so long ago that I don't even care [...] I do it because it's interesting and I feel like I am doing something worthwhile". He doesn't see that changing over the next 5-10 years, so, while he wouldn't predict the future, there is a clear sense that things will continue as they are—at least in that time frame.
Malicious code
Another question from the audience was about the increasing rate of kernel contributions and whether that made it harder to keep out malicious code from people with bad intentions. Kroah-Hartman said that it is hard to say what is malicious code versus just a bug, because "bugs are bugs". He said he doesn't remember any recent attempts to intentionally introduce malicious code.
Torvalds pointed out that the problem has never been people intentionally doing something bad, but, instead, trying to do something good and unintentionally ending up causing a security hole or other bug. He did note an attempt to introduce a back door into the kernel via the BitKeeper repository 7-8 years ago which "was caught by BitKeeper with checksums, because they [the attackers] weren't very good at it". While that is the only case he is aware of, "the really successful ones we wouldn't know about".
One of Git's design goals was to keep things completely decentralized and to cryptographically sign all of the objects, so that a compromise of a public git server would be immediately recognized because it wouldn't match others' private trees, he said.
Performance regressions
Bottomley then turned to performance regressions, stating that Intel had been running a "database benchmark that we can't name" on every kernel release. They have found that the performance drops a couple of percentage points each release, with a cumulative effect over the last ten releases of about 12%. Torvalds responded that the kernel is "getting bloated and huge, yes, it's a problem".
"I'd love to say we have a plan
" for fixing that, Torvalds
said but it's not the case. Linux is "definitely not the
streamlined, small, hyper-efficient kernel that I envisioned 15 years
ago
"; the kernel has gotten large and "our icache
[instruction cache] footprint is scary
". The performance regression is
"unacceptable, but it's probably also unavoidable
" due to the
new features that get added with each release.
Audio and storage
In response to a question about professional audio, Torvalds said that the sound subsystem in the kernel was much better than it is given credit for, especially by "crazy" Slashdot commenters who pine for the days of the Open Sound System (OSS). Corbet also noted that audio issues have gotten a lot better, though, due to somewhat conflicting stories from the kernel developers over the years, audio developers "have had a bit of a rough ride".
A question about the need for handling memory failures, both in RAM and flash devices, led Ts'o to note that, based on his experience at a recent storage conference, there is "growing acceptance of the fact that hard disks aren't going away". Hard disks will always be cheaper, so flash will just be another element in the storage hierarchy. The flash hardware itself is better placed to know about and handle failures of its cells, so that is likely to be the place where it is done, he said.
Lessons learned
The lessons learned during the six years of the 2.6 development model were the subject of another question from Bottomley. Kroah-Hartman pointed to the linux-next tree as part of a better kernel development infrastructure that has led to more effective collaboration: "We know now how to work better together". Corbet noted that early 2.6 releases didn't have a merge window, which made the stability of those releases suffer. "What we've learned is some discipline", he said.
In comparing notes with the NTFS architect from Microsoft, Ts'o related that the core Windows OS team has a similar development model. "Redmond has independently come up with something almost identical to what we're doing", he said. They do quarterly releases, with a merge period followed by a stabilization period. Microsoft didn't copy the Linux development model, according to the NTFS architect, leading him and Ts'o to theorize that, when doing development "on that scale, it's one of the few things that actually works well". That led Bottomley to jokingly suggest a headline: "Microsoft validates Linux development model".
Torvalds also noted that the development model is spreading: "The kernel way of doing things has clearly entered the 'hive mind' when it comes to open source". Other projects have adopted many of the processes and tools that the kernel developers use, as well as things like the sign-off process that was added in response to the SCO mess. Sign-offs provide a nice mechanism to see how a particular chunk of code reached the mainline, and other projects are finding value in that as well.
Overall, the roundtable gave an interesting view into the thinking of the kernel developers. It was much more candid than a typical marketing-centric view that comes from proprietary OS vendors. Of course, that led to the "bloated" headlines that dominated the coverage of the event, but it also gave the audience an unvarnished look at the kernel. The Linux Foundation and Linux Pro magazine have made a video of the roundtable available—unfortunately only in Flash format—which may be of interest; it certainly was useful in augmenting the author's notes.
OpenInkpot: free software for e-book readers
Back in July, Jonathan Corbet lamented that Amazon was making the Kindle an unattractive hacking target for Linux users. The comments to his article suggested having a closer look at OpenInkpot, a fairly new Linux distribution for e-book readers. This much in advance: It doesn't run on the Kindle. Not yet, anyway.
![[N516 with OI]](https://static.lwn.net/images/n516-with-oi-sm.jpg)
OpenInkpot (or OI) intends to be a full alternative, free software stack for e-ink based readers. It comes with a "bookshelf" menu to handle your collection of e-books and brings the popular FBReader and Cool Reader to display most e-book formats, among them epub, fb2, txt, html, rtf, pdb, plucker and mobipocket. PDF files are handled by OI's own locoPDF and madeye is a simple viewer for images. A sound player application for audio books will join the OI software collection soon.
History
The initial motivation for OpenInkpot was the limitations of the original software that Debian hackers Mikhail Gusarov and Yauhen Kharuzhy found on their newly acquired Hanlin V3. They found the device too slow and its software clunky: there was no way to adjust the PDF viewer's zoom, for example, and the bookshelf couldn't handle Cyrillic or other non-ASCII characters in file names.
Because of that, the V3 became the only supported device of the first public 0.1 OpenInkpot release in August 2008. Mikhail says they achieved most of their goals for the V3. OI supports more formats and is faster at displaying and turning pages, although OI's more complex user interface eats up some of the speed advantage over the original firmware.
![[OI page]](https://static.lwn.net/images/oi-reading-sm.png)
Right now, the team is busy porting OI to a second hardware platform for the upcoming 0.2 release, the Hanvon N516. Both V3 and N516 are available inexpensively under various brand names around the world. Intrigued by OI, your author quickly found an online shop based in his country and, a few days later, the original firmware of his brand new N516 didn't survive more than 30 minutes before it got overwritten with a development snapshot of OI.
With the new firmware, you win some and you lose some features. The N516 gains a lot more e-book formats through OI, but loses others such as the Chinese XEB format. The OI PDF reader is an improvement, but an audio player is missing since OI's sound drivers and applications are still under development. The main advantage of running OI is that it is fully open source and not tied to one manufacturer or device.
The internals of OI
OI's current development trunk for the upcoming 0.2 release uses a recent 2.6.29 kernel (with a switch to .31 planned), kernel drivers, and system software targeted for Hanlin V3 and Hanvon N516 devices.
To get the first version of OI going, kernel hacker Yauhen had to work without hardware documentation. He disassembled the V3's original firmware binary and wrote drivers for the buttons, the battery controller, and the display. The V3 is a simple ARM board with an e-ink display, but its audio hardware is unusual: It only decodes MP3 streams and further investigation is needed to see if it can be made to play simple raw PCM audio so that it can be used for generic audio applications.
![[OI text options]](https://static.lwn.net/images/oi-reading-setup-sm.png)
The port to the N516 is mostly complete and, luckily, its manufacturer is far more helpful with specs and sources. An audio driver is still missing for the N516 too, but it is in progress for the 0.2 release. Compared to the V3, the N516 sports a faster MIPS CPU and more RAM, which is helpful for applications such as FBReader, which keeps the full document file in memory. On the downside, the N516 uses some funky hardware components that complicate driver development; e.g., handling key presses or reading the status of the battery is unnecessarily difficult.
Yauhen also made some improvements to the e-ink display driver by Jaya Kumar to speed up screen updates. All drivers and modifications by the OI team are intended to be contributed upstream.
The OI user space software is based on the Debian-like IPlinux, which is a fork that Mikhail made of the dormant Slind project. (OpenEmbedded was also tried, but discarded, mostly because the team did not like OE's package build management.)
Alexander Kerner, the third main OI developer, maintains many of the user space libraries and applications and describes OI as an abstraction layer that hides most of the embedded nature of the hardware from the developer. He installs the same libraries on his x86 desktop to develop an application, so that cross-compiling for the device later is not much of an effort.
![[OI menu]](https://static.lwn.net/images/oi-menu-sm.png)
The static nature of an e-ink display is a challenge for user interface development. Screen updates are very slow and expensive, but the display does not consume power to keep its state. An e-ink device only needs to render a new page and then goes back to deep sleep. There is no use for the Clutter toolkit, since animation is impossible on e-ink. OI uses X and the Enlightenment Foundation Libraries (EFL), which the OI team found to be feature-rich, yet fast and lightweight and thus well-suited for limited hardware. EFL's memory footprint is smaller than that of the GTK or Qt libraries (the latter requiring libstdc++). Since the OI team had bad experiences with GTK's slow performance on the Nokia N770, they wanted to try something else. While EFL turned out not to be as lean as hoped for, they consider it a good choice for a device such as the original V3 with its slow CPU and only 16 MB of RAM.
For development, OI uses the familiar array of open source tools: git repository, bugtracker, and documentation wiki (which is a bit outdated, as documentation tends to be). The mailing list is low volume and the IRC channel isn't crowded and thus friendly to newcomers.
Localization is done with Transifex. The UI strings are few and short, so it was not much of an effort for your author to contribute a few German translations. But not all text is in gettext yet, so would-be translators may want to wait until the 0.2 release gets closer. The system uses UTF-8 and supports right-to-left text, so any language supported by Unicode may become a user interface choice.
Commercial e-books and DRM
The OI software stack does not handle content "protected" with DRM, so most commercial e-book downloads available today cannot be read on an OI device. The team is not opposed to DRM per se, but to make it part of the distribution, a DRM scheme would have to be fully open source and could not rely on hardware to enforce its restrictions. That is rather tough to implement, so it is safe to expect that there will be no DRM found in OI in the foreseeable future.
You will find plenty of legal, DRM-free content at places like Project Gutenberg — time to catch up on the classics. For web content, tools like Calibre, Readability, and many other PDF and e-book format converters will bring your morning paper (or LWN) onto your e-book reader. Maybe the publishing industry will see the light and offer DRM-free commercial downloads soon, just like the music industry has started to do after going through a painful learning process.
Help wanted. Apply within
OI is a well-managed project. Alexander jokes that the quick and dirty hacks he found in the commercial e-book firmware sources would never make it past Mikhail. But the project does have a serious lack of manpower. Next to the three main developers, there are only a few casual contributors.
The current development trunk is very much work in progress, and not suitable for end users, but beta testing it is already fun. The PDF viewer is still too slow and the OI user interface is inconsistent. Alexander describes the team as a group of engineers, not usability professionals, and your author wholeheartedly agrees.
OI looks like a good foundation and the team would welcome commercial manufacturers joining the effort. The team's current work to port OI to the N516 is commissioned by the Ukrainian hardware distributor Azbooka (Азбука), which plans to sell N516 devices with OI as the official firmware. Their target audience is students and young budget-oriented users. Alexey Sednev of Azbooka is excited about the open source nature of OI and calls it "the greatest feature" of the Azbooka N516. He expects that OI will encourage software development by device owners and that this will help foster customization for specific user groups such as education, medicine, or law.
The team hopes to port the software to more hardware, so OI will soon need to add abstraction layers for device-specific input methods, storage media, and network devices. OI could also be a chance at new life for devices abandoned by their manufacturers, allowing users to avoid planned obsolescence, which the manufacturers create by not providing new firmware for "old" products. The Sony PRS-505 may become the third supported e-reader, as there are only a few driver details missing. They are confident that it is possible to port OI to any hardware, including the Kindle, but lack of manpower and time is stopping them. They need the help of device owners willing to write and maintain drivers. If you are a kernel developer with an e-ink reader lying around, you may just have found an exciting new hacking target.
LPC: 25 years of X
The X Window System quietly hit its 25th anniversary back in June; it is, undoubtedly, one of the oldest and most successful free software projects in existence. Keith Packard has been working with X for most of that time; at the Linux Plumbers Conference in Portland he used his keynote address to look at the highlights (and lowlights) of those 25 years and some of the lessons that have been learned.
The version of X we know today is X11. There were nine predecessor
versions (one got skipped), but the first version to escape widely was X10,
which was released in 1986. Companies were shipping it, and the vendors which
formed the X Consortium were starting to think that the job was done, but
the X developers successfully pleaded for the opportunity to make one more
"small" set of revisions to the X protocol. The result was X11 - a complete
reworking of the whole system - which was released on September 15,
1987; it is still running today.
There was a wealth of new ideas in X11, some of which made more sense than others. One of those ideas was the notion of an external window manager. In X, the window manager is just another process working with the same API. This approach helped to create a consistent API across windows, and it also made it possible to manage broken (non-responding) applications in a way that some other systems still can't do. On the other hand, the external window manager created a lot of on-screen flashing - a problem which still pops up today - and it does not work entirely well with modern compositing techniques, getting in the way of the page-flipping operations needed to make things fast.
The use of selections for cut-and-paste operations was another early X11 innovation. With selections, the source of selected data advertises its availability, and the destination requests it in the desired format. This mechanism allows data to be selected and moved between applications in almost any format. Unfortunately, the "cut buffer" concept was left in, so applications had to support both modes; the fact that Emacs was not updated to use selections for a very long time did not help. The existence of multiple selections created interoperability problems between applications. On the other hand, the selection mechanism proved to be a very nice foundation for drag-and-drop interfaces, and it handled the transition to Unicode easily.
Input has been the source of a number of problems. The requirement that applications specify which events they want made sense when the focus was on trying to make the best use of a network connection, but it led to some interesting behavioral changes depending on how applications selected their events. X was meant to be a policy-free system, but, in retrospect, the rules for event delivery were a significant amount of policy wired into the lowest levels of the system.
"Grabs," where an application can request exclusive delivery of specific events, were a case in point. "Passive grabs" allow window managers to bind to specific keys (think alt-middle to resize a window), but that required a non-intuitive "parent window gets it first" policy when these grabs are in use. "Synchronous grabs" were worse. They were intended to help create responsive interfaces in the face of slow networks and slow applications; clicking on a window and typing there will do the right thing, even if the system is slow to respond to the click and direct keyboard focus correctly. It was a complicated system, hard to program to, harder to test, and it required potentially infinite event storage in the X server. And it's really unnecessary; no applications use it now. This "feature" is getting in the way of more useful features, like event redirection; it may eventually have to be removed even at the cost of breaking the X11 protocol.
Text input was not without problems of its own; X went to considerable effort to describe what was written on every key, and required applications to deal with details like keyboard maps and modifier keys. It worked reasonably well for English-language input, but poorly indeed for Asian languages. The addition of the XIM internationalization layer did not really help; it was all shoved into the Xlib library and required that applications be rewritten. It also forced the installation of a large set of core fonts, despite the fact that most of them would never be used.
Text output was "an even bigger disaster." It required that fonts be resident in the server; applications then needed to pull down large sets of font metrics to start up. That was bad enough, but generating the font metrics required the server to actually rasterize all of the glyphs - not fun when dealing with large Asian fonts. Adding fonts to the system was an exercise in pain, and Unicode "never happened" in this subsystem. In retrospect, Keith says, there was an obvious warning in the fact that neither FrameMaker nor Xdvi - the two applications trying to do serious text output at that time - used the core fonts mechanism.
This warning might have been heeded by moving font handling into clients (as eventually happened), but what was done at that time, instead, was to layer on a whole set of new kludges. Font servers were introduced to save space and make addition of fonts easier. The XLFD (X logical font description) mechanism inflicted font names like:
-adobe-courier-medium-r-normal--14-100-100-100-m-90-iso8859-1
on the world without making life easier for anybody. The compound text mechanism brought things forward to iso-2022, but couldn't handle Unicode - and, once again, it required rewriting applications.
The X drawing model had amusing problems of its own. It was meant to be "PostScript lite," but, to get there, it dispensed with small concepts like paths, splines, and transforms, and it required the use of circular pens. So there really wasn't much of PostScript left. The model required precise pixelization, except when zero-width "thin" lines were used - but all applications ended up using thin lines. Precise pixelization was a nice concept, and it was easily tested, but it was painfully slow in practice.
The use of circular pens was the source of more pain; the idea was taken from the PostScript "Red Book," but, by then, the PostScript folks had already figured out that they were hard to work with and had kludged around the problem. A line drawn with a circular pen, in the absence of antialiasing, tends to vary in width - it looks blobby. The generation of these lines also required the calculation of square roots in the server, which was not the way to get the best performance. Even so, people had figured out how to do circular pens right, but nobody in the X team knew about that work, so X did not benefit from it.
Rather than provide splines, the X11 protocol allowed for the drawing of
ellipses. But there was a catch: ellipses had to be aligned with the X or
Y axis, no diagonal ellipses allowed. There was a reason for this: there
was a rumor circulating in those days that the drawing of non-axis-aligned
ellipses involved a patented algorithm, and, for all of the usual reasons,
nobody wanted to go and
actually look it up. It turns out that the method had been published in
1967, so any patent which might have existed would have expired. But
nobody knew that because nobody was willing to take the risks involved with
researching the alleged patent; even in the 1980's, software patents were
creating problems.
As an added bonus, the combination of ellipses and circular pens requires the evaluation of quartic equations. Doing that job properly requires the use of 128-bit arithmetic; 64-bit floating-point numbers were not up to the job.
Color management was bolted on at a late date; it, too, was shoved into the "thin and light" Xlib layer. It provided lots of nice primitives for dealing with colors in the CIE color space, despite the fact that users generally prefer to identify colors with names like "red." So nobody ever used the color space features. And the "color management" code only worked with X; there was no provision for matching colors in output to graphic metafiles or printed output. X color management was never a big hit.
All of these mistakes notwithstanding, one should not overlook the success of X as free software. X predates version 1 of the GPL by some five years. Once the GPL came out, Richard Stallman was a regular visitor to the X Consortium's offices; he would ask, in that persistent way he has, for X to change licenses. That was not an option, though; the X Consortium was supported by a group of corporations which was entirely happy with the MIT license. But in retrospect, Keith says, "Richard was right."
X was an industry-supported project, open to "anybody but Sun." Sun's domination of the workstation market at that time was daunting to vendors; they thought that, if they could displace SunView with an industry-standard alternative, they would have an easier time breaking into that market. Jim Gettys sold this idea, nearly single-handedly, to Digital Equipment Corporation; it is, arguably, the first attempt to take over an existing market with free software. It worked: those vendors destroyed Sun's lock on the market - and, perhaps, Keith noted, the Unix workstation market as a whole.
There were problems, needless to say. The MIT license discourages sharing of code, so every vendor took the X code and created its own, closed fork. No patches ever came back to the free version of X from those vendors. Beyond that, while the implementation of X11 was done mainly at DEC, the maintenance of the code was assigned to the X Consortium at MIT. At that point, Keith said, all innovation on X simply stopped. Projects which came out of the X Consortium in these days were invariably absolute failures: XIE, PEX, XIM, XCMS, etc. There began the long, dark period in which X essentially stagnated.
X is no longer stagnant; it is being heavily developed under freedesktop.org. As X has come back to life, its developers have had to do a massive amount of code cleanup. Keith has figured out a fail-safe method for the removal of cruft from an old code base. The steps, he said, are these:
- Publish a protocol specification and promise that there will be
long-term support.
- Realize failure.
- "Accidentally" break things in the code.
- Let a few years go by, and note that nobody has complained about the
broken features.
- Remove the code since it is obviously not being used.
Under this model, the XCMS subsystem was broken for five years without any complaints. The DGA code has recently been seen to have been broken for as long. The technique works, so Keith encouraged the audience to "go forth and introduce bugs."
The important conclusion, though, is that, after 25 years, X survives and is going strong. It is still able to support 20-year-old applications. There are few free software projects which can make that sort of claim. For all its glitches, kludges, and problems, the X Window System is a clear success.
FOSS compliance engineering in the embedded industry
[ Editor's note: This is part two of a series of three articles on
FOSS license compliance. Part one
introduces the topic and describes what developers can do to protect their
rights. Part three is coming soon and will look at what
companies can do to comply, as well as what to do in the case of a
violation. ]
This article examines a field called compliance engineering. Compliance engineering was pioneered by technical experts who wanted to address misuses of software, and was made famous by gpl-violations.org, the FSF, and similar organizations correcting Free and Open Source Software (FOSS) license violations. The field has grown into a commercial segment, with companies like Black Duck Software and consultancy firms like Loohuis Consulting offering formal services to third parties.
Rather than attempting to examine compliance engineering in all market segments and under all conditions, this article will focus on explaining some of the tools and skills required to undertake due diligence activities related to licensing and binary code in the embedded industry. It is based on the GPL Compliance Engineering Guide, which in turn is based on the experience of engineers contributing to the gpl-violations.org project.
Some of the methods described in this article may not be permitted by the DMCA or similar legislation in certain jurisdictions. It is important to stress that the goal of compliance engineering is not to reverse engineer a product so it can be resold for monetary gain, but rather to apply digital forensics to see if copyright was violated. You should consult a lawyer to find out the legal status of the engineering methods described here.
Context and confusion
The first phase of compliance engineering is not engineering. It is about understanding the license that applies to code and understanding what that means with regard to obligations in a particular market segment. This dry art is sometimes challenging because of the culture of FOSS. FOSS has an innovative, fast-moving, and diverse ecosystem. Contributors tend to be passionate about their work and about how it is released, shared, and further improved by the community as a whole. This can be something of a double-edged sword, providing exceptional engagement and occasionally an overabundance of enthusiasm in areas like software licensing or compliance.
The gpl-violations.org project enforces the copyright of Harald Welte and other Linux kernel developers, and has a mechanism for third parties to report suspected issues with use of Linux and related GPL code. One of the most common false positives reported is that companies are violating the GNU GPL version 2 by providing a binary firmware release for embedded devices without shipping source code in the package or offering it on a website for download. This highlights a misunderstanding regarding what the GPL requires. It is true that the GPL comes into effect when distributing code and that offering a binary firmware for download is distribution, but compliance with the license terms is more subtle than it may appear to parties who have not read the license carefully.
Under GPLv2 there is no requirement for source code to be provided in the product package or on a website to ensure compliance. Instead, sections 3a and 3b of the license give distributors of binary versions two options regarding source code. One is to accompany the product with the source code; the other is to include a written offer to supply the source code to any third party, valid for three years. When someone gets a device with GPLv2 code and wants to check compliance, they need to look for accompanying source code or a written offer in the manual, on the box, on a separate leaflet, or in the device's interactive menus, such as a web interface.
It gets a little more complex when you consider that the above covers only the terms applying to source code. Finding source code or a written offer for it does not by itself constitute full GPLv2 compliance. Compliance also depends on whether the offered source code is complete and corresponds precisely to what is on the product, whether the product shipped with a copy of the license, and what else is shipped alongside the GPL code and in what way. The full text of the license spells out how the parameters of this relationship work.
Compliance engineering is an activity that requires a mixture of technical and legal skills. Practitioners have to identify false positives and negatives, and to contextualize their analysis within applicable jurisdictional constraints. This can appear daunting for parties who have a casual approach to reading licenses. However, the skills and tools applied are relatively simple as long as a balanced approach is taken when understanding what is explicitly required in a license and what is actually present in a product. Given these two skills anyone can help make sure that people who use GPL or other FOSS licenses are adhering to the terms the copyright holders selected.
The nuts and bolts
Compliance engineers in organizations like gpl-violations.org do not have an extensive toolset. In the embedded market the product from a software perspective is a firmware image, and this is just a compilation of binary code. The contents may include everything needed to power an embedded device (bootloader, plus operating system) or just updates to certain parts of the embedded device software.
Checking if firmware is meeting the terms of a license like the GPLv2 requires the application of knowledge and a sequence of tests such as extracting visible strings from binary files and correlating them to source code. One aspect is identifying GPL software components and making sure they are included in source releases, and another requires opening the device to get physical access to serial ports. The only essential tools required are a Linux machine, a good editor, binutils, util-linux, and the ability to mount file systems over loopback or tools like unsquashfs to unpack file systems to disk.
Opening firmware
The most common operating systems for embedded devices today are Linux-kernel based or VxWorks. There are a few specialized operating systems and variants of BSD available in the market, but they are becoming less common. Linux-based firmware nearly always contains the kernel itself, one or more file systems, and sometimes a bootloader.
The quickest way to find file systems or kernels in a firmware image is to search for padding. Padding usually consists of NOP characters, such as zeroes, which fill up space. This ensures that the individual components of a firmware image are at the right offsets; the bootloader uses these offsets to quickly jump to the location of the kernel or a file system. Therefore, if you see padding, there will either be something following it, or it marks the end of the file. Once you have identified the components, you will know what type of firmware you are dealing with, what's in there at the architecture level, and (with a little bit of experience) what's likely to be problematic with regard to complete source code releases.
If you can't find any padding in the firmware, then another method is to look for strings like "done, booting the kernel", as these indicate that something else will follow immediately afterwards. This method is a little trickier and involves things like searching for markers that indicate compression (gzip header, bzip2 header, etc.), a file system (squashfs header, cramfs header, etc.), and so on. The quickest way to do this is to use hexdump -C and search for headers. Detailed information about headers is already available on most Linux systems in /usr/share/magic.
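Those header and padding searches are easy to automate. Below is a minimal sketch in Python, written for this article (it is not part of any gpl-violations.org toolkit); the magic values shown are the common ones, and every hit is only a hint to be verified with hexdump, dd, and file:

#!/usr/bin/env python3
# Minimal sketch: scan a firmware image for well-known magic bytes and for
# long runs of zero padding. Short magics such as gzip's three bytes will
# produce false positives; always verify candidates by hand.
import sys

MAGICS = {
    b"\x1f\x8b\x08": "gzip stream",
    b"BZh": "bzip2 stream",
    b"hsqs": "squashfs (little endian)",
    b"sqsh": "squashfs (big endian)",
    b"\x45\x3d\xcd\x28": "cramfs (little endian)",
    b"\x28\xcd\x3d\x45": "cramfs (big endian)",
}

def scan(path, min_padding=256):
    data = open(path, "rb").read()
    for magic, name in sorted(MAGICS.items()):
        pos = data.find(magic)
        while pos != -1:
            print("0x%08x  possible %s" % (pos, name))
            pos = data.find(magic, pos + 1)
    # Zero padding separates components; whatever follows a long run of
    # zeroes is likely the start of the next component.
    run = 0
    for pos, byte in enumerate(data):
        if byte == 0:
            run += 1
        else:
            if run >= min_padding:
                print("0x%08x  next component after %d zero bytes" % (pos, run))
            run = 0

if __name__ == "__main__":
    scan(sys.argv[1])

Run against the OpenWrt image discussed below, a scanner like this reports the gzip stream at 0x1c and the squashfs header at 0xbd400 directly.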
Problems you can encounter
The techniques employed for compliance engineering are essentially the same as those employed for debugging an embedded system. While this means the basic knowledge is easy to obtain, it also means that issues can arise when the tools you are attempting to apply are different from the tools used for designing and building the system in the first place:
- Encryption: Some devices have a firmware image that is encrypted. The bootloader decrypts it at boot time with a key that is stored in the device. Unless you know the decryption key, it is impossible to take these devices apart by looking at the firmware alone. Examples are ADSL modem/routers based on the Broadcom bcm63xx chipset. There are also companies that encrypt their firmware images using a simple XOR. These are often quite easy to spot because patterns repeat themselves very often; a small detection sketch appears after this list.
- Code changes: Sometimes slight changes were made to the file system code in the kernel, which make it hard or even impossible to mount a file system over loopback without adapting a kernel driver. Examples include Broadcom bcm63xx-based devices and devices based on the Texas Instruments AR7 chipset, which both use SquashFS implementations with some modifications to either the LZMA compression (AR7) or the file system code.
To explore what code is present in these cases you need network access or even physical access to the device.
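Here is the detection sketch mentioned above: a small heuristic in Python, written for this article rather than taken from any compliance toolkit. It exploits the fact that, for a repeating-key XOR with key length n, XORing the data with itself shifted by a multiple of n cancels the key wherever the plaintext repeats, and firmware padding repeats a lot:

#!/usr/bin/env python3
# Minimal sketch: guess the key length of a repeating-key XOR obfuscation.
# For c[i] = p[i] ^ key[i % n], bytes n apart match exactly where the
# plaintext repeats (padding, tables), so a sharp peak in the match rate
# hints at the key period. A heuristic, not proof.
import sys

def shift_match_rates(data, max_len=64):
    rates = {}
    for n in range(1, max_len + 1):
        matches = sum(1 for a, b in zip(data, data[n:]) if a == b)
        rates[n] = matches / float(len(data) - n)
    return rates

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    best = sorted(shift_match_rates(data).items(), key=lambda kv: -kv[1])
    for n, rate in best[:5]:
        print("shift %2d: %4.1f%% matching bytes" % (n, 100 * rate))

A pronounced spike at one shift value (and its multiples) suggests XOR obfuscation with a key of that length; once the length is known, long zero-padded regions reveal the key itself.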
Network scanning
With port scanners like nmap you can make a fairly accurate guesstimate of what a certain device is running by using fingerprinting: many network stacks respond slightly differently to different network packets. While a fingerprint is not enough to use as evidence, scanning can give you useful information, like which TCP ports are open and which services are running. Surprisingly often you can still find a running telnet daemon which will give you direct access to the device. Sometimes exploiting bugs in the web interface also allows you to download or transfer individual files or even the whole (decrypted) file system.
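nmap is the right tool for this job, but the principle is simple enough to show in a few lines. The sketch below (Python, illustrative only; the address is a placeholder for the device under examination) probes a handful of interesting ports and prints whatever banner a service volunteers, which is often enough to confirm a telnet daemon or an identifiable web server:

#!/usr/bin/env python3
# Minimal sketch: check a device for a few interesting open ports and grab
# whatever banner the service sends. nmap does this far better; the point
# is only how little is needed to confirm, say, a telnet daemon.
import socket

PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http"}

def probe(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                banner = s.recv(128)
            except socket.timeout:
                banner = b""
        return True, banner
    except OSError:           # connection refused, unreachable, timed out
        return False, b""

if __name__ == "__main__":
    host = "192.168.1.1"      # placeholder: the device under examination
    for port, name in sorted(PORTS.items()):
        is_open, banner = probe(host, port)
        if is_open:
            print("%d/%s open  %r" % (port, name, banner[:60]))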
Physical access
Most embedded devices have a serial port, and this is sometimes the only way to find violations. This may not be visible and sometimes is only present as a series of solder pads on the internal board. After adding pin headers you can connect a serial port to the device and – perhaps with the addition of a voltage level shifter – attach the device to a PC. Projects like OpenWrt have a lot of hardware information on their website and this can be useful in working out how to start.
Once physical access is granted things get easier. The bootloader is usually configured to be accessible via the serial port for maintenance work such as uploading a new firmware, and this often translates into a shell starting via the serial port after device initialization. Many devices are shipped with GPL licensed bootloaders, such as RedBoot, u-boot, and others. The bootloader often comes preloaded on a device and is not included in firmware updates because the firmware update only overwrites parts of the flash and leaves the bootloader alone. More problematically, the bootloader may not be included in the source packages released by the vendor, as they overlook its status as GPL code.
Example: OpenWrt firmware
GPL compliance engineering is best demonstrated using a concrete example. In this example we will take apart a firmware from the OpenWrt project. OpenWrt is a project that makes a kit to build alternative firmwares for routers and some storage devices. There are prebuilt firmwares (as well as sources) available for download from the OpenWrt website. In this example we have taken firmware 8.09.1 for a generic brcm47xx device (openwrt-brcm47xx-squashfs.trx).
Running the strings command on the file seems to return random bytes, but if you look a bit deeper there is structure. The hexdump tool has a few options which come in really handy, such as -C which displays the hexadecimal offset of the file, the characters in hexadecimal notation and the ASCII representation of those characters, if available.
A trained eye will spot that at hex offset 0x001c there is the start of a gzip header, starting with the hex values 0x1f 0x8b 0x08:
$ hexdump -C openwrt-brcm47xx-squashfs.trx
00000000  48 44 52 30 00 10 22 00  28 fa 8b 1c 00 00 01 00  |HDR0..".(.......|
00000010  1c 00 00 00 0c 09 00 00  00 d4 0b 00 1f 8b 08 00  |................|
00000020  00 00 00 00 02 03 8d 57  5d 68 1c d7 15 fe e6 ce  |.......W]h......|
...
Extracting can be done using an editor or, more easily, with dd:
$ dd if=openwrt-brcm47xx-squashfs.trx of=tmpfile bs=4 skip=7
This command reads the file openwrt-brcm47xx-squashfs.trx and writes it to another file, skipping the first 28 bytes (a block size of four bytes times seven skipped blocks).
$ file tmpfile
tmpfile: gzip compressed data, from Unix, max compression
With zcat this file can be uncompressed to standard output and redirected to another file:
$ zcat tmpfile > foo
The result in this particular case is not a Linux kernel image or a file system, but the LZMA loader used to uncompress the LZMA compressed kernel that is used by OpenWrt. LZMA does not always use the same headers for compressed files, which makes it quite easy to miss. In this case the LZMA compressed kernel can be found at offset 0x090c.
$ dd if=openwrt-brcm47xx-squashfs.trx of=kernel.lzma bs=4 skip=579
Unpacking the kernel can be done using the lzma tool.
$ lzma -cd kernel.lzma > bar
Running the strings tool on the result quite clearly shows strings from the Linux kernel.
In openwrt-brcm47xx-squashfs.trx you can see padding in action around hex offset 0x0bd280, immediately followed by a header for a little endian SquashFS file system.
$ hexdump -C openwrt-brcm47xx-squashfs.trx
...
000bd270  1d 09 36 96 85 67 df 8f  1b 25 ff c0 f8 ed 90 00  |..6..g...%......|
000bd280  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000bd400  68 73 71 73 9b 02 00 00  00 c6 e1 e2 d1 2a 00 00  |hsqs.........*..|
...
$ dd if=openwrt-brcm47xx-squashfs.trx of=squashfs bs=16 skip=48448
From just the header of the file system it is not obvious which compression method is used:
$ file squashfs
squashfs: Squashfs filesystem, little endian, version 3.0, 1322493 bytes,
667 inodes, blocksize: 65536 bytes, created: Tue Jun  2 01:40:40 2009
The two most used compression techniques are zlib and LZMA, with the latter quickly becoming more popular. Unpacking with the unsquashfs tool will give an error:
zlib::uncompress failed, unknown error -3
This indicates that LZMA compression is probably used instead of zlib. Unpacking requires a version of unsquashfs that can handle LZMA. The OpenWrt source distribution contains all of the necessary configuration and build scripts to fairly easily build a version of unsquashfs with LZMA support.
The OpenWrt example is fairly typical for real cases that are handled by gpl-violations.org, where unpacking the firmware is usually the step that takes the least effort, often just taking a few minutes. Matching the binary files to sources and correct configuration information and verifying that the sources and binaries match is a process that takes a lot more time.
In conclusion
Compliance engineering is a demanding and occasionally tedious aspect of the software field. Emotion has little place in the analysis applied and the rewards of volunteer work are not visible to most people. Yet compliance engineering is also essential, providing as it does a clear imperative for people to obey the terms of FOSS licenses. It contributes part of the certainty and stability necessary for diverse stakeholders to work together on common code, and it allows a clear mechanism for discovering which parties are misunderstanding their obligations as part of the broader ecosystem. Transactions between individuals, projects and businesses cannot be sustained without such mechanisms.
It is important to remember that the skills involved in compliance engineering are not necessarily limited to a small subset of consultants and companies. Documents like the GPL Compliance Engineering Guide describe how to dig through binary code suspected of issues. Engineers from all aspects of FOSS can contribute assistance to a project or business when it comes to forensic analysis or due diligence, and they can report any issues discovered to the copyright holders or to entities like FSF's Free Software Licensing and Compliance Lab, gpl-violations.org, FSFE's Freedom Task Force and Software Freedom Law Center.
Security
BruCON: Can we trust cryptography?
On September 18 and 19, the community-made conference BruCON made its first appearance in Brussels. BruCON is organized by a small group of Belgian people working in the security industry who wanted to create a security conference with room for independent research and without a commercial undertone. As one of the organizers, Benny Ketelslegers, said at the beginning of the conference: "We have a lot of security researchers in Belgium, but we didn't have a conference here that suits our needs."
Being a Belgian conference, there couldn't be a better speaker for the first lecture than Vincent Rijmen, a Belgian cryptographer and one of the designers of the Advanced Encryption Standard (AES). He is currently working as an associate professor at the University of Leuven. His talk, titled Trusted cryptography [PDF], discussed the growing doubts about the trust we can put in cryptology and its applications. His point: today, we can't trust cryptography.
Rijmen took the audience back to where cryptography started. In the Roman days, we had the Caesar cipher, which is a simple substitution cipher in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. This encryption method is named after Julius Caesar, who used it with a shift of three positions to communicate with his generals. Polyalphabetic substitution ciphers, like Vigenère or the Enigma machine used by Germany in World War II, were also driven by military requirements.
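For readers who have never seen one, a Caesar cipher fits in a few lines of Python. This sketch is an illustration added here, not part of Rijmen's talk; it encrypts with Caesar's shift of three and shows how trivially the scheme is reversed:

# A Caesar cipher: each letter is shifted a fixed number of positions down
# the alphabet. With only 25 possible shifts it is trivially broken today,
# which illustrates how modest the military cryptography of antiquity was.
def caesar(text, shift=3):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

print(caesar("ATTACK AT DAWN"))        # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))    # decrypt by shifting back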
So cryptography in the past was used solely for military purposes, Rijmen explained, but there is more to it than that: "Encryption was used between trusted parties, with secure perimeters. This is much different with our current use of encryption: think about our bank cards, where crackers can even measure the power consumption of the smartcard chip to try to crack the encryption."
The shift to the public
This shift began in the 1970s and 1980s, for example with the concept of public-key cryptography introduced by Whitfield Diffie and Martin Hellman, and RSA being invented by Ronald Rivest, Adi Shamir and Leonard Adleman. As a result of these technical breakthroughs, Rijmen maintains that cryptography finally entered the public world:
This is how we have come to the current situation, where modern communication networks are all based on cryptography. We have the A5/1 stream cipher in the GSM cellular telephone standard, the KeeLoq block cipher used in the majority of remote car keys, and the Wired Equivalent Privacy (WEP) algorithm, which was the first attempt to secure IEEE 802.11 wireless networks. It is not a coincidence that Rijmen named these protocols: they are all broken. The A5/1 design was initially kept secret, but after leaks and reverse engineering several serious weaknesses have been identified. KeeLoq has been cryptanalyzed with much success in recent years. And WEP, which uses the stream cipher RC4, can be broken in minutes with off-the-shelf hardware and open source software such as Aircrack-ng. "RC4 is a good protocol, but it is used incorrectly in WEP. If one of my students came up with such a design, he should redo my course and come back next year", Rijmen told the audience.
Security myths and evil cryptography
All these defective designs, supposedly made by smart people, don't improve trust in cryptography. Rijmen blames the defective designs on a couple of "industry myths" that don't seem to die out:
But there are cases where cryptography evidently works. When there is a business case, companies are suddenly able to do it right. For example, HP implemented authentication between the printer and the ink cartridge, as Rijmen explains:
But it's not only in industry where things go wrong. Rijmen maintains that there are also some fairly pervasive research myths circulating. Many security researchers are too academic and think that a good security model is a model that allows them to prove theorems. In their eyes, security is, then, what they can prove about some objects in their abstract mathematical models. This whole abstract notion of security reduces "good research" to applying well-known methods to well-known problems, taking all the creativity and innovation out of the research.
Added to this, we see that malware writers have discovered cryptography. They use it to escape detection or to implement recovery after partial exposure. But there's also a worrying trend where malware encrypts the hard drive of a victim and then the malware writer extorts the victim to get his data back. The consequence of these examples of bad cryptography in industry and academia, and of the "evil cryptography" used by malware writers, is that the public loses trust in the technology. And that's where Rijmen's talk came to its turning point. His message was clear:
It was funny to see how most of the audience's questions were about Rijmen's example of electronic voting, although he hastened to add that e-voting was just an example of the perils of bad cryptography and their consequences for our trust in cryptography in general; it was not, in any way, a remark about the security of particular e-voting implementations. Many people attending his talk were genuinely worried about the security of e-voting, but don't consider themselves Luddites. One person stated that he doesn't trust e-voting because it centralizes the power of counting the votes into a black-box network of electronic devices from the same producer. In contrast, traditional voting decentralizes the counting by outsourcing it to thousands of people, which makes the votes less susceptible to manipulation.
A new kind of cryptography development
To regain trust in cryptography, Rijmen has two proposals: collaborative standards development and best practices. As an example case, Rijmen points to the development of his own AES. In January 1997, the National Institute of Standards and Technology (NIST) announced the initiative to develop a successor to the aging Data Encryption Standard (DES). NIST asked for input from interested parties, and in September 1997 there was a call for new algorithms. In the next three years, fifteen different designs were submitted, analyzed, narrowed down, and, in October 2000, NIST announced the winner: Rijndael, designed by Vincent Rijmen and Joan Daemen. Rijmen stressed some remarkable facts about the AES process:
According to Rijmen, the AES process should be taken as an example for collaborative standards development, not only for algorithms like AES, but also for protocols and even applications. The organizers of such a competition should invite the relevant people to contribute, get both the industry and academia on board, and envision future requirements. Moreover, they should advertise the development process, motivate submitters and reviewers, and evaluate the evaluations. Last but not least, they should push the result after all this work.
Rijmen's second proposal is to limit the number of standards and standard solutions, an approach that he calls green cryptography. It's all about recycling: reuse ideas that have proven their merits, and keep the implementations simple. This makes sense, because complexity is the culprit behind a lot of instances of cryptography failing:
From the developer's perspective, this means that they have a marketplace of algorithms to pick from, and developers should be discouraged from making their own home-brew algorithms: "Unless you can absolutely not, use a standard." Rijmen gave an example of how it shouldn't be done. Since 2000, there has been a trend to combine encryption and authentication into one operation, because encryption without authentication leads to weaknesses in almost all applications. There are a couple of standards and RFCs for authenticated encryption, but what did Microsoft do with its BitLocker Drive Encryption in Windows Vista and 7? It uses AES (which is good), in CBC mode (which Rijmen calls "the standard mode in the 1980s, not in 2000"), and without authentication, against all security trends. Microsoft's explanation was that "There is no space to store authentication tags on the hard disk", although each hard disk reserves space for bad blocks. Rijmen's take-home message is that we don't need better cryptography, but better implementations, sticking to the standards: "Cryptography is not do-it-yourself stuff."
Security should be open
Regarding the open source aspect, Rijmen concluded that openness has been the pulse of cryptographic design in the last few decades, and that we should expect the same from its implementations: "Openness works in cryptography because cryptographers have access to the design and the analysis." But he adds that we should not focus solely on opening the source of cryptographic implementations: opening the source alone is not sufficient to attract cryptographers and let them research the code; we should open the whole standards development process.
The BruCON organizers showed the same openness as their first speaker. Unlike other security events that are more commercially focused, BruCON gathered hackers (in the good sense), security researchers, security vendors, and governments, and it succeeded with a diverse mix of presentation topics and speakers: from Rijmen's metatalk and talks about social engineering and information leakage in social networks, to highly technical talks about the risks of IPv6, SQL injection, and the future techniques of malware. Moreover, anyone who missed the conference can find the slides and even video recordings of almost all of the presentations online. Although the BruCON organizers wanted to make it a real "Belgian" conference, they didn't make the mistake of being too chauvinistic. They were able to attract a lot of top-class international speakers, and the audience came from all over Europe and from the US. Your author hopes they return next year with a second event.
New vulnerabilities
asterisk: multiple vulnerabilities
Package(s): asterisk
CVE #(s): CVE-2009-2726 CVE-2009-0871 CVE-2009-2346
Created: September 28, 2009
Updated: June 4, 2010
Description: From the Red Hat bugzilla (1, 2, 3):
CVE-2009-0871: The SIP channel driver in Asterisk Open Source 1.4.22, 1.4.23, and 1.4.23.1; 1.6.0 before 1.6.0.6; 1.6.1 before 1.6.1.0-rc2; and Asterisk Business Edition C.2.3, with the pedantic option enabled, allows remote authenticated users to cause a denial of service (crash) via a SIP INVITE request without any headers, which triggers a NULL pointer dereference in the (1) sip_uri_headers_cmp and (2) sip_uri_params_cmp functions.
CVE-2009-2346: The IAX2 protocol implementation in Asterisk Open Source 1.2.x before 1.2.35, 1.4.x before 1.4.26.2, 1.6.0.x before 1.6.0.15, and 1.6.1.x before 1.6.1.6; Business Edition B.x.x before B.2.5.10, C.2.x before C.2.4.3, and C.3.x before C.3.1.1; and s800i 1.3.x before 1.3.0.3 allows remote attackers to cause a denial of service (call-number exhaustion) by initiating many IAX2 message exchanges, a related issue to CVE-2008-3263.
CVE-2009-2726: On certain implementations of libc, the scanf family of functions uses an unbounded amount of stack memory to repeatedly allocate string buffers prior to conversion to the target type. Coupled with Asterisk's allocation of thread stack sizes that are smaller than the default, an attacker may exhaust stack memory in the SIP stack network thread by presenting excessively long numeric strings in various fields. Note that while this potential vulnerability has existed in Asterisk for a very long time, it is only potentially exploitable in 1.6.1 and above, since those versions are the first that have allowed SIP packets to exceed 1500 bytes total, which does not permit strings that are large enough to crash Asterisk. (The number strings presented to us by the security researcher were approximately 32,000 bytes long.) Additionally note that while this can crash Asterisk, execution of arbitrary code is not possible with this vector.
asterisk: remote denial of service
Package(s): asterisk
CVE #(s): CVE-2009-2651
Created: September 28, 2009
Updated: September 30, 2009
Description: From the Red Hat bugzilla entry: main/rtp.c in Asterisk Open Source 1.6.1 before 1.6.1.2 allows remote attackers to cause a denial of service (crash) via an RTP text frame without a certain delimiter, which triggers a NULL pointer dereference and the subsequent calculation of an invalid pointer.
backintime: incorrect file permissions when removing backup
Package(s): backintime
CVE #(s):
Created: September 28, 2009
Updated: September 30, 2009
Description: From the Red Hat bugzilla entry: A Debian bug report indicates that backintime chmods files to mode 0777 prior to removing them when removing a snapshot. What makes this worse is that, if those files exist in subsequent snapshots, the permissions on those files are also mode 0777, which allows anyone to manipulate or delete the files in the backup.
dovecot: arbitrary file modification
Package(s): dovecot
CVE #(s): CVE-2008-5301
Created: September 28, 2009
Updated: September 30, 2009
Description: From the Ubuntu advisory: It was discovered that the ManageSieve service in Dovecot incorrectly handled ".." in script names. A remote attacker could exploit this to read and modify arbitrary sieve files on the server. This only affected Ubuntu 8.10. (CVE-2008-5301)
glib2.0: privilege escalation
Package(s): glib2.0
CVE #(s): CVE-2009-3289
Created: September 24, 2009
Updated: April 27, 2010
Description: From the Mandriva alert: The g_file_copy function in glib 2.0 sets the permissions of a target file to the permissions of a symbolic link (777), which allows user-assisted local users to modify files of other users, as demonstrated by using Nautilus to modify the permissions of the user home directory.
horde3: arbitrary code execution
Package(s): horde3
CVE #(s): CVE-2009-3236
Created: September 28, 2009
Updated: April 1, 2010
Description: From the Debian advisory: Stefan Esser discovered that Horde, a web application framework providing classes for dealing with preferences, compression, browser detection, connection tracking, MIME, and more, is insufficiently validating and escaping user provided input. The Horde_Form_Type_image form element allows to reuse a temporary filename on reuploads which are stored in a hidden HTML field and then trusted without prior validation. An attacker can use this to overwrite arbitrary files on the system or to upload PHP code and thus execute arbitrary code with the rights of the webserver.
kvm: privilege escalation
Package(s): kvm
CVE #(s): CVE-2009-3290
Created: September 29, 2009
Updated: November 6, 2009
Description: From the Red Hat advisory: The kvm_emulate_hypercall() implementation was missing a check for the Current Privilege Level (CPL). A local, unprivileged user in a virtual machine could use this flaw to cause a local denial of service or escalate their privileges within that virtual machine.
newt: buffer overflow
Package(s): newt
CVE #(s): CVE-2009-2905
Created: September 24, 2009
Updated: June 3, 2010
Description: From the Debian alert: Miroslav Lichvar discovered that newt, a windowing toolkit, is prone to a buffer overflow in the content processing code, which can lead to the execution of arbitrary code.
opensaml: multiple vulnerabilities
Package(s): opensaml
CVE #(s):
Created: September 28, 2009
Updated: September 30, 2009
Description: From the Debian advisory: Chris Ries discovered that decoding a crafted URL leads to a crash (and potentially, arbitrary code execution). Ian Young discovered that embedded NUL characters in certificate names were not correctly handled, exposing configurations using PKIX trust validation to impersonation attacks. Incorrect processing of SAML metadata ignored key usage constraints.
openssh: privilege escalation
Package(s): openssh
CVE #(s): CVE-2009-2904
Created: September 30, 2009
Updated: March 30, 2010
Description: From the Red Hat alert: A Red Hat specific patch used in the openssh packages as shipped in Red Hat Enterprise Linux 5.4 (RHSA-2009:1287) loosened certain ownership requirements for directories used as arguments for the ChrootDirectory configuration options. A malicious user that also has or previously had non-chroot shell access to a system could possibly use this flaw to escalate their privileges and run commands as any system user. (CVE-2009-2904)
php: multiple vulnerabilities
Package(s): php
CVE #(s): CVE-2008-7068 CVE-2009-3291 CVE-2009-3292 CVE-2009-3293
Created: September 28, 2009
Updated: January 15, 2010
Description: From the Mandriva advisory: The dba_replace function in PHP 5.2.6 and 4.x allows context-dependent attackers to cause a denial of service (file truncation) via a key with the NULL byte. NOTE: this might only be a vulnerability in limited circumstances in which the attacker can modify or add database entries but does not have permissions to truncate the file (CVE-2008-7068). The php_openssl_apply_verification_policy function in PHP before 5.2.11 does not properly perform certificate validation, which has unknown impact and attack vectors, probably related to an ability to spoof certificates (CVE-2009-3291). Unspecified vulnerability in PHP before 5.2.11 has unknown impact and attack vectors related to missing sanity checks around exif processing (CVE-2009-3292). Unspecified vulnerability in the imagecolortransparent function in PHP before 5.2.11 has unknown impact and attack vectors related to an incorrect sanity check for the color index (CVE-2009-3293).
xmp: buffer overflows
Package(s): xmp
CVE #(s): CVE-2007-6731 CVE-2007-6732
Created: September 24, 2009
Updated: September 30, 2009
Description: From the National Vulnerability Database entries for CVE-2007-6731 and CVE-2007-6732.
xmltooling: several vulnerabilities
Package(s): xmltooling
CVE #(s):
Created: September 25, 2009
Updated: September 30, 2009
Description: From the Debian advisory: Several vulnerabilities have been discovered in the xmltooling packages, as used by Shibboleth: Chris Ries discovered that decoding a crafted URL leads to a crash (and potentially, arbitrary code execution). Ian Young discovered that embedded NUL characters in certificate names were not correctly handled, exposing configurations using PKIX trust validation to impersonation attacks. Incorrect processing of SAML metadata ignores key usage constraints. This minor issue also needs a correction in the opensaml2 packages, which will be provided in an upcoming stable point release (and, before that, via stable-proposed-updates).
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.32-rc1, released by Linus on September 27. Note that Linus fat-fingered the version number in the makefile, so this kernel thinks it's -rc2. See the separate article, below, for the list of significant changes added at the end of the merge window.

The current stable kernel is 2.6.31.1, released (along with 2.6.27.35 and 2.6.30.8) on September 24. These updates contain a number of important fixes, some of which are security-related.
Quotes of the week
Damn. I hadn't realized. I'm a moron. Ok, so it's an extra-special -rc1. It's the "short bus" kind of special -rc1 release.
-- Linus Torvalds
2.6.32 merge window, part 3
The 2.6.32 merge window closed on September 27 with the 2.6.32-rc1 release; this merge window ran a little longer than usual to make up for the distractions of LinuxCon and the Linux Plumbers Conference. Changes merged since last week's update include:
- The 9p (Plan9) filesystem has been updated to make use of the FS-cache
caching layer.
- Control group hierarchies can now have names bound to them.
- The fcntl() system call supports new F_SETOWN_EX and F_GETOWN_EX operations. They differ from F_SETOWN and F_GETOWN in that they direct SIGIO signals to a specific thread within a multi-threaded application (see the sketch just after this list).
- The HWPOISON subsystem
has been merged.
- Framebuffer compression support has been added for Intel graphics
chipsets. Compression reduces the amount of work involved in driving
the display, leading to a claimed 0.5 watt reduction in power
consumption. A set of tracepoints has also been added to the Intel
graphics driver.
- There are new drivers for
ADP5588 I2C QWERTY Keypad and IO Expander devices,
OpenCores keyboard controllers,
Atmel AT42QT2160 touch sensor chips,
MELFAS MCS-5000 touchscreen controllers,
Maxim MAX7359 key switch controllers,
ARM "tightly-coupled memory" areas,
Palm Tungsten|C handheld systems,
Iskratel Electronics XCEP boards,
EMS CPC-USB/ARM7 CAN/USB interfaces,
Broadcom 43xx-based SDIO devices,
Avionic Design Xanthos watchdog and backlight devices,
WM831x PMIC backlight devices,
Samsung LMS283GF05 LCDs,
Analog Devices ADP5520/ADP5501 MFD PMIC backlight devices, and
WM831x PMIC status LEDs.
- The proc_handler function prototype, used in sysctl handling, has lost its unused struct file argument.
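As promised in the fcntl() item above, here is a minimal user-space sketch of the new operations; the helper function is invented for illustration, error handling is minimal, and reasonably recent kernel headers are assumed.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Hypothetical helper: route SIGIO for fd to the calling thread only. */
    static int direct_sigio_to_me(int fd)
    {
        struct f_owner_ex owner = {
            .type = F_OWNER_TID,           /* target a single thread... */
            .pid  = syscall(SYS_gettid),   /* ...namely this one */
        };

        if (fcntl(fd, F_SETOWN_EX, &owner) == -1)
            return -1;
        /* O_ASYNC makes the kernel deliver SIGIO on I/O readiness. */
        return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);
    }

The older F_SETOWN interface identifies only a process or process group, which is why the _EX variants were needed for thread-directed signals.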
In the end, 8742 non-merge changesets were incorporated in the 2.6.32 merge window.
In defense of per-BDI writeback
Last week's quotes of the week included a complaint from Andrew Morton about the replacement of the writeback code in 2.6.32. According to Andrew, a bunch of critical code had been redone, replacing a well-tested implementation with new code without any hard justification. It's a complaint which should be taken seriously; replacing the writeback code has the potential to introduce performance regressions for specific workloads. It should not be done without a solid reason.

Chris Mason has tried to provide that justification with a combination of benchmark results and explanations. The benchmarks show a clear - and large - performance improvement from the use of per-BDI writeback. That is good, but does not, by itself, justify the switch to per-BDI writeback; Andrew had suggested that the older code was slower as the result of performance regressions introduced over time by other changes. If the 2.6.31 code could be fixed, the performance improvement could be (re)gained without replacing the entire subsystem.
What Chris is saying is that the old, per-CPU pdflush method could not be fixed. The fundamental problem with pdflush is that it would back off when the backing device appeared to be congested. But congestion is easy to cause, and no other part of the system backs off in the same way. So pdflush could end up not doing writeback for significant periods of time. Forcing all other writers to back off in the face of congestion could improve things, but that would be a big change which doesn't address the other problem: congestion-based backoff can defeat attempts by filesystem code and the block layer to write large, contiguous segments to disk.
As it happens, there is a more general throttling mechanism already built into the block layer: the finite number of outstanding requests allowed for any specific device. Once requests are exhausted, threads generating block I/O operations are forced to wait until request slots become free again. Pdflush cannot use this mechanism, though, because it must perform writeback to multiple devices at once; it cannot block on request allocation. A per-device writeback thread can block there, though, since it will not affect I/O to any other device. The per-BDI patch creates these per-device threads and, as a result, it is able to keep devices busier. That, it seems, is why the old writeback code needed to be replaced instead of patched.
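To make the structural difference concrete, here is a heavily simplified, hypothetical sketch of a per-device flusher thread; all of the helper functions and types are invented for illustration, and only kthread_should_stop() is a real kernel interface.

    /* Illustrative sketch only - not the actual 2.6.32 writeback code. */
    static int bdi_flusher_thread(void *data)
    {
        struct backing_dev *bdi = data;   /* hypothetical per-device state */

        while (!kthread_should_stop()) {
            struct page *page;

            while ((page = next_dirty_page(bdi)) != NULL) {
                /*
                 * alloc_request() sleeps when this device's request queue
                 * is full.  That sleep is the throttle: unlike pdflush's
                 * congestion backoff, it cannot stall writeback to any
                 * other device, because each device has its own thread.
                 */
                struct request *rq = alloc_request(bdi);

                submit_write(rq, page);
            }
            wait_for_more_dirty_pages(bdi);
        }
        return 0;
    }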
TRACE_EVENT_ABI
Tracepoints are proving to be increasingly useful as system development and diagnostic tools. There is one question about tracepoints, though, which has not yet gotten a real answer: do tracepoints constitute a user-space ABI? If so, some serious constraints come into play. An ABI, once exposed, cannot be changed in a way which might break applications. Tracepoints, being tightly tied to the kernel code they instrument, are inherently hard to keep stable. If a tracepoint cannot be modified or removed, it will make modifications to the surrounding code harder. In the worst case, ABI-preservation requirements could block the incorporation of important kernel changes - an outcome which could quickly sour developers on the tracepoint idea as a whole.

Arjan van de Ven's TRACE_EVENT_ABI patch is an attempt to bring some clarity to the situation. For now, it just defines a tracepoint in exactly the same way as TRACE_EVENT; the difference is that it is meant to create a tracepoint which can be relied upon as part of the kernel ABI. Such tracepoints should continue to exist in future kernel releases, and the format of the associated trace information will not change in application-breaking ways. What that means in practice is that no fields would be deleted, and any new fields would be added at the end.
Whether this approach will work remains to be seen. The word from Linus in the past has been that kernel ABIs are created by applications which rely on an interface, rather than any specific marking on the interface itself. So if people start using applications which expect to be able to use a specific tracepoint, that tracepoint may be set in cement regardless of whether it was defined with TRACE_EVENT_ABI. This macro would thus be a good guide to the kernel developers' intent, but it can make no guarantee that only specially-marked tracepoints will be subject to ABI stability requirements.
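For those who have not seen the TRACE_EVENT macros, a tracepoint defined with the new macro might look like the following sketch; the event itself is made up, but the argument structure is that of TRACE_EVENT, which TRACE_EVENT_ABI mirrors exactly.

    /* Hypothetical ABI-stable tracepoint; event name and fields invented. */
    TRACE_EVENT_ABI(sched_wakeup_abi,

        TP_PROTO(struct task_struct *p, int success),

        TP_ARGS(p, success),

        TP_STRUCT__entry(
            __field(pid_t, pid)
            __field(int,   success)
        ),

        TP_fast_assign(
            __entry->pid     = p->pid;
            __entry->success = success;
        ),

        /* Under the ABI promise, new fields could be appended after
         * 'success', but neither existing field could be removed. */
        TP_printk("pid=%d success=%d", __entry->pid, __entry->success)
    );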
Kernel development news
Featherstitch: Killing fsync() softly
Soft updates, a method of maintaining on-disk file system consistency through carefully ordering writes to disk, have only been implemented once in a production operating system (FreeBSD). You can argue about exactly why they have not been implemented elsewhere, and in Linux in particular, but my theory is that not enough file system geniuses exist in the world to write and maintain more than one instance of soft updates. Chris Frost, a graduate student at UCLA, agrees with the too-complicated-for-mere-mortals theory. That's why, in 2006, he and several co-conspirators at UCLA wrote the Featherstitch system for keeping on-disk data consistent.
Featherstitch is a generalization of the soft updates system of write dependencies and rollback data. The resulting system is general enough that most (possibly all) other file system consistency strategies (e.g., journaling) can be efficiently implemented on top of the Featherstitch interface. What makes Featherstitch unique among file system consistency techniques is that it exports a safe, efficient, non-blocking mechanism to userland applications that lets them group and order writes without using fsync() or relying on file system-specific behavior (like ext3 data=ordered mode).
Featherstitch basics: patches, dependencies, and undo data
What is Featherstitch, other than something file system aficionados throw in your face whenever you complain about soft updates being too complicated? Featherstitch grew out of soft updates and has a lot in common with that approach architecturally. The main difference between Featherstitch and soft updates is that the latter implements each file system operation individually with a different specialized set of data structures specific to the FFS file system, while Featherstitch generalizes the concept of a set of updates to different blocks and creates one data structure and write-handling mechanism shared by all file system operations. As a result, Featherstitch is easier to understand and implement than soft updates.

Featherstitch records all changes to the file system in "patches" (the dearth of original terminology in software development strikes again). A patch includes the block number, a linked list of patches that this patch depends on, and the "undo data." The undo data is a byte-level diff of the changes made to this block by this patch, including the offset, length, and contents of the range of bytes overwritten by this change. Another version of a patch is optimized for bit-flip changes, like those made to block bitmaps. The rule for writing patches out to storage is simple: if any of the patches this patch depends on - its dependencies - aren't confirmed to be written to disk, this patch can't be written yet.
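As a rough rendering of the idea - the project's actual declarations certainly differ - a patch and the write-eligibility rule might look like this hypothetical sketch:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical sketch of a Featherstitch patch; all names invented. */
    struct patch {
        uint32_t       block;    /* number of the block this patch changes */
        struct patch **deps;     /* patches that must reach disk first */
        size_t         ndeps;
        /* undo data: a byte-level diff of what this patch overwrote */
        uint16_t       offset;   /* where in the block the change starts */
        uint16_t       length;   /* number of bytes overwritten */
        uint8_t       *undo;     /* previous contents of those bytes */
        bool           applied;  /* is the change currently in the buffer? */
        bool           written;  /* confirmed on stable storage? */
    };

    /* The write rule: a patch may go to disk only when every patch it
     * depends on is already confirmed on disk. */
    static bool deps_committed(const struct patch *p)
    {
        for (size_t i = 0; i < p->ndeps; i++)
            if (!p->deps[i]->written)
                return false;
        return true;
    }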
In other words, patches and dependencies look a lot like a generic directed acyclic graph (DAG), with patches as the circles and dependencies as the arrows. If you are a programmer, you've probably drawn hundreds or thousands of these pictures in your life. Just imagine a little diff hanging off each circle and you've got a good mental model for thinking about Featherstitch. The interesting bits are around reducing the number of little circles - in the first implementation, the memory used by Featherstitch undo data was often twice that of the actual changes written to disk. For example, untarring a 220MB kernel source tree allocated about 460MB of undo data.
The acyclic-ness of Featherstitch patch dependencies deserves a little more attention. It is the caller's responsibility to avoid creating circular patch dependencies in the first place; Featherstitch doesn't detect or attempt to fix them. (The simplified interface exported to userspace makes cycles impossible to create in the first place; more about that later.) However, a lack of circular dependencies among patches does not imply a lack of circular dependencies between blocks. Patches are a record of a change to a block, and each block can have multiple outstanding patches against it. Imagine a patch dependency: patch A depends on patch B, which depends on patch C. That is, A->B->C, where "->" reads as "depends on." If patch A applies to block 1, patch B applies to block 2, and patch C applies to block 1, then, viewing the blocks and their outstanding patches as a whole, you have a circular dependency where block 1 must be written before block 2, but block 2 must also be written before block 1. This is called a "block-level cycle" and it causes most of the headaches in a system based on write ordering.
The way both soft updates and Featherstitch resolve block-level cycles is by keeping enough information about each change to roll it back. When it comes time to write a block, any applied patches which can't be written yet (because their dependencies haven't been written yet) are rolled back using their undo data. In our example, with A->B->C and A and C both applied to block 1, we would roll back A on block 1, write block 1 with patch C applied, write B's block, and then write block 1 a second time with both patch A and patch C applied.
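In code terms, the rollback dance might look like the following sketch, reusing the hypothetical struct patch and deps_committed() from above; disk_write() is an assumed stand-in for the real block-layer submission path, and every patch in the array is assumed to be applied on entry.

    /* Assumed stand-in for real block-layer I/O. */
    extern void disk_write(uint32_t blockno, const uint8_t *data);

    /* Swapping a patch's bytes with its undo data rolls the patch back;
     * doing it a second time re-applies it. */
    static void toggle_patch(uint8_t *data, struct patch *p)
    {
        for (uint16_t i = 0; i < p->length; i++) {
            uint8_t t = data[p->offset + i];

            data[p->offset + i] = p->undo[i];
            p->undo[i] = t;
        }
        p->applied = !p->applied;
    }

    /* Write one block: roll back any patch whose dependencies are not yet
     * on disk, write, then re-apply the rolled-back patches. */
    static void write_block(uint32_t blockno, uint8_t *data,
                            struct patch **patches, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!deps_committed(patches[i]))
                toggle_patch(data, patches[i]);   /* roll back */

        disk_write(blockno, data);

        for (size_t i = 0; i < n; i++) {
            if (patches[i]->applied)
                patches[i]->written = true;       /* safely on disk */
            else
                toggle_patch(data, patches[i]);   /* re-apply */
        }
    }

Tracing the A->B->C example through this sketch: the first write of block 1 rolls back A and commits C; after B's block is written, a second write_block() call finds A's dependencies committed and writes it out unmolested.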
Optimization
The first version of Featherstitch was elegant, general purpose, easy to understand, and extraordinarily inefficient. On several benchmarks, the original implementation allocated over twice as much memory for patches and undo data as needed for the actual new data itself. The system became CPU-bound with as few as 128 blocks in the buffer cache.
The first goal was to reduce the number of patches needed to complete an operation. In many cases, a patch will never be reverted - for example, if we write to a file's data block when no other writes are outstanding on the file system, then there is no reason we'd ever have to roll back to the old version of the block. In this case, Featherstitch creates a "hard patch" - a patch that doesn't keep any undo data. The next optimization is to merge patches when they can always be written together without violating any dependencies. A third optimization merges overlapping patches in some cases. All of these patch reduction techniques hinge on the Featherstitch rules for creating patches and dependencies, in particular that a patch's dependencies must be specified at creation time. Some opportunities for merging can be detected at patch creation time, others when a patch commits and is removed from the queue.
The second major goal was to efficiently find patches ready for writing. A normal buffer cache holds several hundred thousand blocks, so any per-block data structures and algorithms must be extremely efficient. Normally, the buffer cache just has to, in essence, walk a list of dirty blocks and issue writes on them in some reasonably optimal manner. With Featherstitch, it can find a dirty block, but then it has to walk its list of patches checking to see if there is a subset whose dependencies have been satisfied and are therefore ready for writing. This list can be long, and it can turn out that none of the patches are ready, in which case it has to give up and go on to the next patch. Rather than randomly searching in the cache, Featherstitch instead keeps a list of patches that are ready to be written. When a patch has committed, the list of patches that depended on it is traversed and newly ready patches added to the list.
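A hypothetical sketch of that bookkeeping, extending the earlier struct patch with invented reverse-dependency and ready-list fields (rdeps, nrdeps, and next_ready):

    static struct patch *ready_list;    /* a global here, for brevity */

    /* Called when patch p is confirmed on disk. */
    static void patch_committed(struct patch *p)
    {
        p->written = true;

        /* Only p's reverse-dependents can have just become ready, so walk
         * those few patches instead of rescanning the whole buffer cache. */
        for (size_t i = 0; i < p->nrdeps; i++) {
            struct patch *q = p->rdeps[i];

            if (deps_committed(q)) {
                q->next_ready = ready_list;    /* push onto the ready list */
                ready_list    = q;
            }
        }
    }

The writeback path can then pop patches off ready_list instead of searching, which is what keeps the per-block cost low.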
With these optimizations, the memory overhead of Featherstitch dropped from 200+% to 4-18% in the set of benchmarks used for evaluation - still high, but in the realm of practicality. The optimizations described above were only partially implemented in some cases, leaving more room for improvement without any further insight.
Performance
For head-to-head performance comparisons, the authors implemented several versions of file system consistency using the Featherstitch patch interface and compared them to the ext2 and ext3 equivalents. Using ext2 as the on-disk file system format, they re-implemented soft updates, metadata-only journaling, and full data/metadata journaling. Metadata-only journaling corresponds to ext3's data=writeback mode (file data is written without regard to the state of the file system metadata that refers to it) and full journaling corresponds to ext3's data=journal mode (all file data is written to the journal along with the file system metadata).
The benchmarks used were extraction of a ~200MB tar file (the kernel source code, natch), deletion of the results of the previous extraction, a Postmark run, and a modified Andrew file system benchmark - in other words, the usual motley crew of terrible, incomplete, unrepresentative file system benchmarks we always run because there's nothing better available. The deficiency shows: under this workload, ext3 performed about the same in data=writeback and data=ordered mode (not usually the case in real-world systems), which is one of the reasons the authors didn't implement ordered mode for Featherstitch. The overall performance result was that the Featherstitch implementations were on par with or somewhat better than the comparable ext3 version for elapsed time, but used significantly more CPU time.
Patchgroups: Featherstitch for userspace
So, you can use Featherstitch to re-implement all kinds of file system consistency schemes - soft updates, copy-on-write, journaling of all flavors - and it will go about as fast as the old version while using up more of your CPU. When you have big new features like checksums and snapshots in btrfs, it's hard to get excited about an under-the-covers re-implementation of file system internals. It's cool, but no one but file system developers will care, right?
In my opinion, the most exciting application of Featherstitch is not in the kernel, but in userland. In short, Featherstitch exports an interface that applications can use to get the on-disk consistency results they want, AND keep most of the performance benefits that come with re-ordering and delaying writes. Right now, applications have only two practical choices for controlling the order of changes to the file system: wait for all writes to a file to complete using fsync(), or rely on file system-specific implementation details, like ext3 data=ordered mode. Featherstitch gives you a third option: describe the exact, minimal ordering relationship between various file system writes and then let the kernel re-order, delay, and otherwise optimize the writes as much as possible within those constraints.
The userland interface is called "patchgroups." The interface prevents the two major pitfalls that usually accompany exporting a kernel-level consistency mechanism to userspace. First, it prevents deadlocks caused by dependency cycles ("Hey, kernel! Write A depends on write B! And, oh yeah, write B depends on write A! Have a nice day!"). In the kernel, you can define misuse of the interface as a kernel bug, but if an application screws up a dependency, the whole kernel grinds to a halt. Second, it prevents an application from stalling its own or other writes by opening a transaction and holding it open indefinitely while it adds new changes to the transaction (or goes off into an infinite loop, or crashes, or otherwise fails to wrap up its changes in a neat little bow).
The patchgroups interface simply says that all of those writes over there must be on-disk before any of these writes over here can start being written to disk. Any other writes that happen to be going on outside of these two sets can go to disk in any order they please, and the writes inside each set are not ordered with respect to each other either. Here's a pseudo-code example of using patchgroups:
    /* Atomic update of a file using patchgroups */

    /* Create a patch group to track the creation of the new copy of the file */
    copy_pg = pg_create();

    /* Tell it to track all our file system changes until pg_disengage() */
    pg_engage(copy_pg);

    /* Open the source file, get a temporary filename, etc. */

    /* Create the temp file */
    temp_fd = creat();

    /* Copy the original file data to the temp file and make your changes */

    /* All changes done, now wrap up this patchgroup */
    pg_disengage(copy_pg);
The temp file now contains the new version of the file, and all of the related file system changes are part of the current patchgroup. Now we want to put the following rename() in a separate patchgroup that depends on the patchgroup containing the new version of the file.
    /* Start a new patchgroup for the rename() */
    rename_pg = pg_create();
    pg_engage(rename_pg);

    /*
     * MAGIC OCCURS HERE: This is where we tell the system that the
     * rename() can't hit disk until the temporary file's changes have
     * committed. If you don't have patchgroups, this is where you would
     * fsync() instead. fsync() can also be thought of as:
     *
     *     pg_depend(all previous writes to this file, this_pg);
     *     pg_sync(this_pg);
     */

    /* This new patchgroup, rename_pg, depends on the copy_pg patchgroup */
    pg_depend(copy_pg, rename_pg);

    /* This rename() becomes part of the rename_pg patchgroup */
    rename();

    /* All set! */
    pg_disengage(rename_pg);

    /* Cleanup. */
    pg_close(copy_pg);
    pg_close(rename_pg);

Short version: no more "Firefox fsync()" bug, O_PONIES for everyone who wants them, and very little cost for those who don't.
Conclusion
Featherstitch is a generalization and simplification of soft updates, with reasonable, but not stellar, performance and overhead. Featherstitch really shines when it comes to exporting a useful, safe write-ordering interface for userspace applications. It replaces the enormous performance-destroying hammer of fsync() with a minimal and elegant write grouping and ordering mechanism.
When it comes to the Featherstitch paper itself, I highly recommend reading the entire paper simply for the brief yet accurate summaries of complex storage-related issues. Sometimes I feel like I'm reading the distillation of three hours of the Linux Storage and File Systems Workshop plus another couple of weeks of mailing list discussion, all in one paragraph. For example, section 7 describes, in a most extraordinarily succinct manner, the options for correctly flushing a disk's write cache, including specific commands, both SCSI and ATA, and a brief summary of the quality of hardware support for these commands.
The realtime preemption mini-summit
Prior to the Eleventh Real Time Linux Workshop in Dresden, Germany, a small group met to discuss the further development of the realtime preemption work for the Linux kernel. This "mini-summit" covered a wide range of topics, but was driven by a straightforward set of goals: the continuing improvement of realtime capabilities in Linux and the merging of the realtime preemption patches into the mainline.

The participants were: Stefan Assmann, Jan Blunck, Jonathan Corbet, Sven-Thorsten Dietrich, Thomas Gleixner, Darren Hart, John Kacur, Paul McKenney, Ingo Molnar, Oleg Nesterov, Steven Rostedt, Frederic Weisbecker, Clark Williams, and Peter Zijlstra. Together they represented several companies working in the area of realtime Linux; they brought a lot of experience with customer needs to the table. The discussion was somewhat unstructured - no formal agenda existed - but a lot of useful topics were covered.
Threaded interrupt handlers came out early in the discussion. This feature was merged into the mainline for the 2.6.30 kernel; it is useful in realtime situations because it allows interrupt handlers to be prioritized and scheduled like any other process. There is one part of the threaded interrupt code which remains outside of the mainline: the piece which forces all drivers to use threaded handlers. There are no plans to move that code into the mainline; instead, it's going to be a matter of persuasion to get driver writers to switch to the newer way of doing things.
Uptake in the mainline is small so far; few drivers are actually using this feature. That is beginning to change, though; the SCSI layer is one example. SCSI has always featured relatively heavyweight interrupt-handling code and work done in single-threaded workqueues. This code could move fairly naturally to process context; the SCSI developers are said to be evaluating a possible move toward threaded interrupt handlers in the near future. There have also been suggestions that the network stack might eventually move in that direction.
System management interrupts (SMIs) are a very different sort of problem. These interrupts happen at a very low level in the hardware and are handled by the BIOS code. They often perform hardware monitoring tasks, from simple thermal monitoring to far more complex operations not normally associated with BIOS-level software. SMIs are almost entirely invisible to the operating system and are generally not subject to control at that level, but they are visible in some important ways: they monopolize anything between one CPU and all CPUs in the system for a measurable period of time, and they can change important parameters like the system clock rate. SMIs on some types of hardware can run for surprisingly long periods; one vendor sells systems where an SMI for managing ECC memory runs for 200µs every three minutes. That is long enough to play havoc with any latency deadlines that the operating system is trying to meet.
Dealing with the SMI problem is a challenge. Some hardware allows SMIs to be disabled, but it's never clear what the consequences of doing so might be; if the CPU melts into a puddle of silicon, the resulting latencies will be even worse than before. Sharing information about SMI problems can be hard because many of the people working in this area are working under non-disclosure agreements with the hardware vendors; this is unfortunate, because some vendors have done a far better job of avoiding SMI-related latencies than others. There is a tool now (hwlat_detector) which can measure SMI latency, so we should start seeing more publicly-posted information on this issue. And, with luck, vendors will start to deal with the problem.
Not all hardware latency is caused by SMIs; hypervisors, too, can be a significant source of latency problems.
A related issue is hardware changes imposed by SMI handlers. If the BIOS determines that the system is overheating, it may respond by slowing the clock rate or lowering the processor voltage. On a throughput-oriented system, that may well be the right thing to do. When latencies are important, though, slowing the processor could be a mistake - it could cause applications to miss their deadlines. A better response might be to simply shut down some processors while keeping others at full speed. What is really needed here is a way to get this information to user space so that policy decisions can be made there.
Testing is always an issue in this kind of software development; how do the developers know that they are really making things better? There are various test suites out there (RTMB, for example), but there is no complete and integrated test suite. There was some talk of trying to move more of the realtime testing code into the Linux Test Project, but LTP is a huge body of code. So the realtime tests might remain on their own, but it would be nice, at least, to standardize test options and output formats to help with the automation of testing. XML output from test programs is favored by some, but it is fair to say that XML is not universally loved in this crowd.
The big kernel lock is a perennial outstanding issue for realtime development for a couple of reasons. One is that, despite having been pushed out of much of the core code, the BKL can still create long latencies. The other is that elimination of the BKL would appear to be part of the price for an eventual merge of sleeping spinlocks into the mainline kernel. The ability to preempt code running under the BKL was removed in 2.6.26; this change was directly motivated by a performance regression caused by the semaphore rewrite, but it was also seen as a way to help inspire BKL-removal efforts by those who care about latencies.
Much of the hard work in getting rid of the BKL has been done; one big outstanding piece is the conversion of reiserfs being done by Frederic Weisbecker. After that, what's left is a lot of grunt work: figuring out what (if anything) is protected by a lock_kernel() call and putting in proper locking. The "tip" tree has a branch (rt/kill-the-bkl) where this work can be coordinated and collected.
Signal delivery is still not an entirely solved problem. Actually, signals are always a problem, for implementers and users alike. In the realtime context, signal delivery has some specific latency issues. Signal delivery to thread groups involves an O(n) algorithm to determine which specific thread to target; getting through this code can create excessive latencies. There are also some locks in the delivery path which interfere with the delivery of signals in realtime interrupt context.
Everybody agrees that the proper solution is to avoid signals in applications whenever possible. For example, timerfd() can be used for timer events. But everybody also agrees that applications will continue to use signals, so they have to be made to work somehow. The probable solution is to remove much of the work from the immediate signal delivery path. Signal delivery would just enqueue the information and set a bit in the task structure; the real work would then be done in the context of the receiving process. That work might still be expensive, but it would at least fall to the process which is actually using signals instead of imposing latencies on random parts of the system.
A side discussion on best practices for efficient realtime application development yielded a few basic recommendations. The best API to use, it turns out, is the basic pthread interface; it has been well optimized over time. SYSV IPC is best avoided. Cpusets work better than the affinity mechanism for CPU isolation. In general, developers should realize that getting the best performance out of a realtime system will require a certain amount of manual tuning effort. Realtime Linux allows the prioritization of things like interrupt handlers, but the hard work of figuring out what those priorities should be can only be done by developers or administrators. It was acknowledged that the interfaces provided to administrators currently are not entirely easy to use; it can be hard to identify interrupt threads, for example. Red Hat's tuna tool can help in this regard, but more needs to be done.
Scalability was a common theme at the meeting. As a general rule, realtime development has not been focused specifically on scalability issues. But there is interest in running realtime applications on larger systems, and that is bringing out problems. The realtime kernel tends to run into scalability problems before the mainline kernel does; it was described as an early warning system which highlights issues that the mainline will be dealing with five years from now. So realtime will tend to scale more poorly than mainline, but fixing realtime's problems will eventually benefit mainline users as well.
Darren Hart presented a couple of charts containing the results of some work by John Stultz showing the impact of running the realtime kernel on a 24-processor system. When running in anything other than uniprocessor mode, the realtime kernel imposes a roughly 50% throughput penalty on a suitably pathological workload - a severe price. Interestingly, if the locking changes from the realtime kernel are removed while leaving all of the other changes, most of the performance loss goes away. This has led Darren to wonder if there should be a hybrid option available for situations where hard latency requirements are not present.

In other situations, the realtime kernel generally shows performance degradation starting with eight CPUs, with sixteen showing unacceptable overhead.
As it happens, nobody really understands where the performance cost of realtime locking comes from. It could be in the sleeping spinlocks, but there is also a lot of suspicion directed at reader-writer locks. In the mainline kernel, rwlocks allow multiple readers to run in parallel; in the realtime kernel, instead, only one reader runs at a time. That change is necessary to make priority inheritance work; priority inheritance in the presence of multiple readers is a difficult problem. One obvious conclusion that comes from this observation is that, perhaps, rwlocks should not implement priority inheritance. There is resistance to that idea, though; priority inheritance is important in situations where the highest-priority process should always run as quickly as possible.
The alternative to changing rwlocks is to simply stop using them whenever possible. The usual way to remove an rwlock is to replace it with a read-copy-update scheme. Switching to RCU will improve scalability, arguably at the cost of increasing complexity. But before embarking on any such effort, it is important to get a handle on how much of the problem really comes down to rwlocks. Some research will be done in the near future to better understand the source of the scalability problems.
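As a reminder of what such a conversion looks like, here is a generic sketch; the data structure and function names are invented, but the RCU and list primitives are the kernel's real ones.

    /* Invented example: a list formerly protected by an rwlock. */
    struct dev_entry {
        int              id;
        struct list_head list;
    };

    static LIST_HEAD(devices);
    static DEFINE_SPINLOCK(devices_lock);

    /* Reader: formerly read_lock(); now never blocks and never spins. */
    static bool dev_exists(int id)
    {
        struct dev_entry *d;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(d, &devices, list) {
            if (d->id == id) {
                found = true;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }

    /* Writer: still serialized by a spinlock, but it no longer excludes
     * readers; synchronize_rcu() waits out anyone still looking at d. */
    static void remove_dev(struct dev_entry *d)
    {
        spin_lock(&devices_lock);
        list_del_rcu(&d->list);
        spin_unlock(&devices_lock);
        synchronize_rcu();
        kfree(d);
    }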
Another problem is per-CPU variables, which work by disabling preemption while a specific variable is being used. Disabling preemption is anathema to the realtime developers, so per-CPU variables in the realtime tree are protected by sleeping locks instead. That increases overhead. The problem is especially acute in slab-level memory allocators, which make extensive use of per-CPU variables.
Solutions take a number of forms. There will eventually be a more realtime-friendly slab allocator, probably a variant of SLQB. Minimizing the use of per-CPU variables in general makes sense for realtime. There are also schemes involving the creation of multiple virtual "CPUs" so that even processes running on the same processor can have their own "per-CPU" variables. That decreases contention for those variables considerably at the cost of a slightly higher cache footprint.
Plain old locks can also be a problem; a run of dbench on a 16-processor system during the workshop showed a 90% reduction in throughput, with the processors sitting idle half the time. The problem in this case turns out to be dcache_lock, one of the last global spinlocks remaining in the kernel. The realtime tree feels the effects of this lock more strongly for a couple of reasons. One is that threads holding the lock can be preempted; that leads to longer lock hold times and more context switches. The other is that sleeping spinlocks are simply more complicated, especially in the contended slow path of the code. So the locking primitives themselves require more CPU time.
The solution to this particular problem can only be the elimination of the global dcache_lock. Nick Piggin has a patch set which does exactly that, but it has not yet been tested with the realtime tree.
Realtime makes life harder for the scheduler. On a normal system, the scheduler can optimize for overall system throughput. The constraints imposed by realtime, though, require the scheduler to respond much more aggressively to events. So context switches are higher and processes are much more likely to migrate between CPUs - better for bounded response times, but worse for throughput. By the time the system scales up to something relatively large - 128 CPUs, say - there does not seem to be any practical way to get consistently good decisions from the scheduler.
There is some interest in deadline-oriented schedulers. Adding an "earliest deadline first" or related scheduler could be useful for application developers, but nobody seems to feel that a deadline scheduler would scale better than the current code.
What all this means is that realtime applications running on that kind of system must be partitioned. When specific CPUs are set aside for specific processes, the scheduling problem gets simpler. Partitioning requires real work on the part of the administrator, but it seems unavoidable for larger systems.
It doesn't help that complete CPU isolation is still hard to accomplish on a Linux system. Certain sorts of operations, such as workqueue flushes, can spill into a processor which has been set aside for specific processes. In general, anything involving interrupts - both device interrupts and inter-processor interrupts - is a problem when one is trying to dedicate a CPU to a task. Steering device interrupts to a given processor is not that hard, though the management tools could use improvement. Inter-processor interrupts are currently harder to avoid; code generating IPIs needs to be reviewed and, when possible, modified to avoid interrupting processors which do not actually have work to do.
Integrating interrupt management into the current cpuset and control group
code would be useful for system administrators. That seems to be a harder
task; Paul Jackson, the original cpuset developer, was strongly opposed to
trying to include interrupt management there. There's a lack of good
abstractions for this kind of administration, though the generic IRQ layer
helps. The opinion at the meeting seemed to be that this was a solvable
problem; if it can be solved for the x86 architecture, the other
architectures will eventually follow.
Going to a fully tickless kernel is also an important step for full CPU isolation. Some work has recently been done in that direction, but much remains to be done.
Stable kernel ABI concerns made a surprising appearance. The "enterprise" Linux offerings from distributors generally include a promise that the internal kernel interface will not change. The realtime enterprise distributions have been an exception to this rule, though; the realtime code is simply in too much flux to make such a promise practical. This exemption has made life easier for developers working on that code, naturally; it also has made it possible for customers to get the newest code much more quickly. There are some concerns that, once the remaining realtime code is merged into the mainline, the same kernel ABI constraints may be imposed on realtime distributions. It is not clear that this needs to happen, though; realtime customers seem to be more interested in keeping up with newer technology and more willing to put up with large changes.
Future work was discussed briefly. Some of the things remaining to be done include:
- More SMP work, especially on NUMA systems.
- A realtime idle loop. There is the usual tension there between
preserving the best response time and minimizing power consumption.
- Supporting hardware-assisted operations - things like onboard
cryptographic acceleration hardware.
- Elimination of the timer tick.
- Synchronization of clock events across CPUs. Clock synchronization is always a challenging task. In this case, it's complicated by the fact that a certain amount of clock skew can actually be advantageous on an SMP system. If clock events are strictly synchronized, processors will be trying to do things at the same time and lock contention will increase.
A near-future issue is spinlock naming. Merging the sleeping spinlock code requires a way to distinguish between traditional, spinning locks and the newer type of lock which might sleep on a realtime system. The best solution, in theory, is to rename sleeping locks to something like lock_t, but that would be a huge change affecting many thousands of files. So the realtime developers have been contemplating a new name for non-sleeping locks instead. There are far fewer of these locks, so renaming them to something like atomic_spinlock would be much less disruptive.
There was some talk of the best names for "atomic spinlocks"; they could be "core locks," "little kernel locks," or "dread locks." What really came out of the discussion, though, is that there was a fair amount of confusion regarding the two types of locks even in this group, which understands them better than anybody else. That suggests that some extra care should go into the naming, with the goal of making the locking semantics clear and discouraging the use of non-sleeping locks. If the semantics of spinlock_t change, there is a good argument that the name should also change. That supports the idea of the massive lock renaming, regardless of how disruptive it might be.
Whether such a change would be accepted is an open question, though. For now, both the small renaming and the massive renaming will be prepared for review. The issue may then be taken to the kernel summit in October for a final decision.
Tools for realtime developers came up a couple of times. There are a number of tools for application optimization now, but they are scattered and not always easy to use. And, it is said, there needs to be a tool with a graphical interface or a lot of users simply will not take it seriously. The "perf" tool, part of the kernel's "performance events" subsystem, seems poised to grow into this role. It can handle many of the desired tasks - latency tracing, for example - now, and new features are being added. The "tuna" tool may be extended to provide a nicer interface to perf.
User-space tracepoints seem to be high on the list of desirable features for application developers. Best would be to integrate these tracepoints with ftrace somehow. Alternatively, user-space trace data could be collected separately and integrated with kernel trace data at postprocessing time. That leads to clock synchronization issues, though, which are never easy to solve.
The final part of the meeting became a series of informal discussions and hacking efforts. The participants universally saw it as a worthwhile gathering, with much learned by all. There are some obvious action items, including more testing to better understand scalability problems, increasing adoption of threaded interrupt handlers, solving the spinlock naming problem, improving tools, and more. Plenty of work for all to do. But your editor has been assured that the work will be done and merged in the next year - for real this time.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Networking
Security-related
Miscellaneous
Page editor: Jake Edge
Distributions
News and Editorials
Puppy Linux 4.3 and Woof
Puppy Linux 4.3.0 was released several weeks ago and with it came several interesting developments. First, this release saw the return of Barry Kauler, founder and project lead of Puppy Linux. Second, 4.3.0 includes some great new tools that have the potential to increase Puppy's usability by empowering developers and users with thousands of extra packages.
Puppy has received lots of updates this release. Besides a slight facelift, it features Linux kernel 2.6.30.5 with support for Ext4 and Aufs2, a new graphical CPU Scaling configuration, new Xdelta and Bcrypt GUIs, and several new light-weight apps. These include Aqualung for playing music, Pstreamvid for streaming video, Pcur cursor selector, and BootFlash USB flashdrive install wizard. But probably the most exciting update is the new Woof system builder.
Woof - New System Builder
Kauler introduced a new build system to replace Puppy Unleashed. Woof, the new Puppy builder, has the ability to include packages released for other distributions; this currently includes support for Ubuntu, Debian, Slackware, and Arch, with others possibly in the works. Woof allows you to create your own customized Puppy or 'puplet' using both native Puppy PET packages and binary packages from the distribution of choice. Once an external distribution is specified, Woof retains binary compatibility with that distribution, so any PET packages will be built with the same toolchain.
The process isn't difficult, although it may require a bit more manual work than some other distributions' remaster tools. While remastering Puppy to personal preferences would probably be the average user's goal in using Woof, its real purpose is to streamline the build process for Puppy developers. An added advantage, expressed by Kauler, is eliminating the need to host all the source packages used in Puppy construction, since Woof only deals with binary packages.
These are the basic steps I used to construct my own Puppy derivative:
- extract ftp://ibiblio.org/pub/linux/distributions/puppylinux/puppy-4.3/woof-20090917.tar.gz
- edit the file DISTRO_SPECS and at least change DISTRO_BINARY_COMPAT to Ubuntu, Debian, or whichever is your preference. You can also edit the ISO title and version number as well as several other options.
- execute 0setup, which downloads the package database files from Puppy and your distro of choice, Ubuntu in this case.
- execute 1download, which downloads lots of packages from Puppy and Ubuntu.
- One can edit the corresponding DISTRO_PKGS_SPECS file to add some of the featured packages by changing no to yes. Kauler stated that a package chooser script may be added in later incarnations for a bit more convenience. These spec files are a bit limited and adding other desired packages to the list doesn't seem to work.
- execute 2createpackages
- execute 3builddistro, which will build the files and iso. You will be asked several configuration questions, many for the kernel, some for the desktop appearance. Then it offers to burn the ISO to CD.
All in all, Woof works really well for what it's designed to do right now. Some more advanced options like the ability to choose any package available in repositories would be nice in future releases. Barry is currently working on a GUI frontend for Woof.
One advantage of this process is that, if your desired packages are listed in the PKGS_SPECS file, you can construct an installable, portable system to your preferences. But probably the greatest advantage is that the distribution used in constructing the new Puppy system is also added to the package manager of your new system. At that point you can add any software available from the repositories.
The Puppy Package Manager (PPM) has had some updates this release as well. Besides a bit of a facelift, the backend was updated to accommodate the new package repositories and their various formats.
Packages are categorized into subheadings, if desired, with headings such as Desktop, Utility, and Multimedia. The search is fast and accurate. 'Configure package manager' allows one to update repository databases and add or remove repositories from the package management system. However, one can only add another distribution's repositories if that distribution was used to build the underlying operating system. When that is done, you will see the extra repositories listed at the top of the package management window. They can be disabled or re-enabled from there, as well as through the 'Configure package manager' setting.
Security updates
Security is a touchy subject surrounding the Puppy distribution. Puppy runs in single-user (root) mode, and many question the security of that. Security updates have been a major point of contention between Puppy and some members of the community for some time. Puppy and the package manager don't address security updates specifically. One reviewer stated that the lack of security updates is the main reason she doesn't recommend Puppy to users; it is also the reason Puppy Linux is classified as a hobbyist distribution at DistroWatch.com.
Puppy releases come regularly and updated packages are a key reason, but in-between releases, security updates just seem to be ignored. Even the security pages at the Puppy Wiki have disappeared.
Conclusion
Puppy continues to be a handy and fun little distro for all sorts of purposes. Its small original tools work well, in particular for lower-resource machines and it includes an adequate collection of useful applications. Some of the applications found are mtPaint image manipulation application, Gxine media player, Abiword word processor, Homebank accounting software, Gnumeric spreadsheet application, Ayttm instant messenger, and the SeaMonkey suite.
Flash is included and works on YouTube.com and such, but my internet connection wasn't available automagically at boot as it was in previous releases. This release functioned well on recent whitebox machines and my HP Pavilion laptop, but my antique Dell was out of luck. My hopes of bringing a more up-to-date system to the old Dell were dashed by issues with both Xorg and Xvesa. Neither would display properly on the old NeoMagic video chip.
With Kauler back at the helm, Puppy Linux 5.0 is under heavy development. Rumors have it that the next release will either be based on Ubuntu or have an Ubuntu version available called Upuptu. All in all, it still impresses by offering all it does at only 115 MB.
New Releases
Gentoo Ten LiveDVD Testing
In honor of Gentoo's 10th birthday the project has announced a new live DVD. "We need YOU to test it on as many x86 and x86_64 machines as you can and post bugs."
Moblin v2.0, the Moblin Garage, and Moblin v2.1
The Moblin steering committee has announced three new developments within the Moblin project: the official project release of Moblin v2.0 for Intel Atom processor-based netbooks, a preview of the Moblin Garage and the Moblin Application Installer, and a community preview release of Moblin v2.1 for Intel Atom processor-based netbooks and nettops, intended for early development.
Distribution News
Debian GNU/Linux
Linux-RT for Debian
Pengutronix has started providing the Realtime Preemption Linux kernel packages for the Debian distribution. "The Debian packets provided now consist of the kernels in exactly the state as they have been released by the maintainers. This makes it possible that users can direct their bug reports directly into the upstream Realtime Preemption project, in order to improve the common code base."
(Overdue) bits from keyring-maint
Jonathan McDowell has an update on Debian's keyring maintainers. "I mentioned back in May that I'd started chasing DDs with both v3 and v4 keys in our keyrings about removing the v3 keys. Many people responded and confirmed it was ok to remove the v3 key immediately. A few stragglers wanted to hold off and check things out. And unfortunately a few failed to reply to repeated mails and I submitted them to the MIA team for investigation."
Fedora
MirrorManager automatic local mirror selection
Matt Domsch takes a look at MirrorManager in Fedora. "As you know, Internet routing uses BGP (Border Gateway Protocol), and Autonomous System Numbers (ASNs) to exchange IP prefixes (aa.bb.cc.dd/nn) and routing tables. By grabbing a copy of the global BGP table a few times a day, MM can know the ASN of an incoming client request, and Hosts in the MM database have grown two new fields: ASN and "ASN Clients?". MM then looks to see if there is a mirror with the same ASN as each client, and offers it up earlier in the list."
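The matching idea is simple enough to sketch in a few lines of Python. Everything below is invented for illustration (data, names, and lookup alike); the real MirrorManager ingests full BGP table dumps several times a day and handles far more cases:

```python
# Toy sketch of ASN-aware mirror selection; not MirrorManager code.
import ipaddress

# prefix -> origin ASN, as would be learned from a BGP table dump
PREFIX_TO_ASN = {
    ipaddress.ip_network("192.0.2.0/24"): 64500,
    ipaddress.ip_network("198.51.100.0/24"): 64501,
}

# mirror host -> the ASN it lives in (the new per-Host field)
MIRROR_ASN = {"mirror.example.edu": 64500, "mirror.example.net": 64511}

def asn_for(client_ip):
    """Find the ASN originating the client's prefix, if known."""
    addr = ipaddress.ip_address(client_ip)
    for prefix, asn in PREFIX_TO_ASN.items():
        if addr in prefix:
            return asn
    return None

def order_mirrors(client_ip):
    """Offer mirrors sharing the client's ASN before all others."""
    asn = asn_for(client_ip)
    return sorted(MIRROR_ASN, key=lambda mirror: MIRROR_ASN[mirror] != asn)

print(order_mirrors("192.0.2.42"))  # mirror.example.edu listed first
```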
New Distributions
Chakra
Chakra aims to be a fast, user-friendly and powerful Live CD and/or small distribution based on the K Desktop Environment (KDE) and on Arch Linux. Chakra features rolling releases, freshly cooked packages, unique GUI tools and a small, diverse community. The third alpha release of the Chakra LiveCD is available for testing.
Distribution Newsletters
DistroWatch Weekly, Issue 322
The DistroWatch Weekly for September 28, 2009 is out. "This week's issue is almost entirely dedicated to netbooks. First, we'll take a look at a Linux-based HP Mini 110 and its customised user interface called HP Mi. As part of the review we'll also investigate possible Linux alternatives to install on the netbook, including the latest alpha release of Ubuntu Netbook Remix 9.10. The news section then provides further netbook-related news as both Canonical and Mandriva announce products built around the new Moblin 2.0 user interface, while the Fedora community launches Fedora Mini, a custom distribution specifically built with netbooks in mind. But if netbooks are not your cup of tea, the news section also has some other distro news: Slackware releases official KDE 3.5 packages for its latest version 13.0, Debian developers launch two new alternative package management systems, and Ubuntu publishes a full development schedule for its first release of 2010 - version 10.04 "Lucid Lynx". All this and more in this week's issue of DistroWatch Weekly - happy reading!"
Fedora Weekly News 195
The Fedora Weekly News for September 27, 2009 is out. "Kicking off this week's issue in announcements, a new IRC channel for Fedora Activity Days, launched in time for the next upcoming FAD in Germany, and updates on feature freeze for Fedora 12 beta this week, along with other related updates. From the Fedora Planet, postings and views from Fedora contributors worldwide, and a collection of FAD EMEA related links. In marketing news, Fedora 12 talking points, Fedora Insight status and other current activities. In ambassadors, details of the upcoming Utah Open Source Conference, and activities Ambassadors can do for Fedora 12. The Quality Assurance beat this week brings us up to date on weekly meeting and Test Day activities, as well as Fedora 12 beta related work. This issue rounds out with news from the Art/Design team, providing detail on the mosaic polish for the Fedora 12 theme. That rounds out this week's issue of Fedora Weekly News, which we hope you enjoy!"
The Mint Newsletter - issue 94
This issue of the Mint Newsletter covers preparations for the next version of Mint (Helena), a new mintInstall, a new Mint KDE logo, and more.
Ubuntu Weekly Newsletter #161
The Ubuntu Weekly Newsletter for September 26, 2009 is out. "In this issue we cover: Ubuntu 9.10 beta approaching, Ubuntu 9.10 beta freeze in effect, Sponsorship deadline for UDS-Lucid approaching, Ubuntu Community Council Elections 2009, New LoCo Council member sought, New Ubuntu members, Ubuntu California is approved LoCo, Mark Shuttleworth: Don't give up the Linux Desktop, New Ubuntu Developers, LoCo News: New Mexico, Pennsylvania, Israel, and Florida, Launchpad 3.0, The Planet: Kenneth Wimer, Collin Pruitt, and Neil Jagdish Patel, Full Circle Magazine, Atlanta LinuxFest: Top 9 Ubuntu Highlights, Ubuntu User Magazine, and much, much more!"
Distribution reviews
Intel Ports Linux Netbook OS to Desktops (PCWorld)
PCWorld covers Intel's porting of Moblin to the desktop space. "Intel has expanded the scope of Linux-based Moblin by porting the OS from netbooks to mobile devices and desktops, where it could compete with Microsoft's Windows OS. The company introduced a beta version of Moblin 2.1 at the Intel Developer Forum being held in San Francisco. The new version of the OS now builds in capabilities like native touchscreen input and gesture support, new user interface features, and support for more hardware drivers. It also includes incremental upgrades that expand the usability of the OS."
Page editor: Rebecca Sobol
Development
LinuxCon: Building a secure IP telephony system
Free software implementing open Voice-over-IP (VoIP) standards like the Session Initiation Protocol (SIP) is already an alternative to closed and proprietary voice services, but relatively few people know that free software can also provide secure, end-to-end encrypted calling. On Wednesday afternoon at LinuxCon, David Sugar spoke about the work in this area being done in the GNU Telephony project's Secure Calling initiative. Sugar outlined the major pieces of secure voice communication, detailed the project's components, including GNU's implementation of ZRTP and the SIP Witch server, and discussed their usage in practical VoIP deployments.
GNU Telephony is an umbrella project that encompasses work on the (concurrently-developed) Bayonne and Bayonne 2 servers, SIP Witch, and a suite of libraries for implementing different parts of a VoIP stack: audio processing, Real-time Transport Protocol (RTP) media channels, and ZRTP encryption. The project also puts special emphasis on embedded applications with the GNU Telephony Open Embedded effort to port solutions to ARM hardware, and the uCommon library for lightweight C++ development.
Sugar started his talk with a brief history of the Secure Calling Initiative. Earnest interest in secure VoIP software dates back to 1999, he said, in reaction to the passage of the Communications Assistance for Law Enforcement Act (CALEA) and its mandates for government-accessible back doors in telephony equipment. Proprietary services like Skype may offer encryption, but without access to the source code it is impossible to know that no such back doors or simple security flaws exist.
A precondition to secure calling is a secure media path, but it took some time for a suitable, standards-based free software stack to evolve. The Secure RTP (SRTP) protocol was published in 2004, but it was not an end-to-end solution on its own, because it does not include secure cryptographic key exchange. The industry standard SDES exchanges both keys in the clear, Sugar noted, and public key authority systems rely on trusting third parties.
That situation changed in 2005, when Phil Zimmermann released the ZRTP key agreement protocol. ZRTP uses Diffie-Hellman key exchange between the two callers, but adds a "social key exchange" factor to prevent man-in-the-middle attacks. Each user hashes together their own public key with the other caller's public key; the result is a Short Authentication String (SAS) that the callers can exchange and verify verbally. Once the setup is established, the media stream is encrypted using standard SRTP.
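As a rough illustration of the concept, consider the toy sketch below. It is not the actual ZRTP derivation, which computes the SAS from the full key agreement; the alphabet shown is z-base-32, on which ZRTP's short-string rendering is based:

```python
import hashlib

def short_auth_string(my_pub: bytes, peer_pub: bytes) -> str:
    """Toy SAS: hash the two Diffie-Hellman public values in a canonical
    order and render the first 20 bits of the digest as four readable
    characters.  Real ZRTP derives the SAS differently; this only shows
    the flavor of the idea."""
    digest = hashlib.sha256(b"".join(sorted((my_pub, peer_pub)))).digest()
    alphabet = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32
    bits = int.from_bytes(digest[:3], "big") >> 4   # keep 20 bits
    return "".join(alphabet[(bits >> s) & 31] for s in (15, 10, 5, 0))

# Both callers compute the same four characters and read them aloud;
# a man in the middle would have produced two mismatched strings.
print(short_auth_string(b"alice-public-value", b"bob-public-value"))
```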
As with the original PGP, however, Zimmermann's implementation of ZRTP was not free software. In 2006, Werner Dittmann developed a ZRTP implementation as an extension to the GNU RTP stack ccRTP, thus immediately making it possible to use ZRTP with softphones already using ccRTP, such as the popular Twinkle client. Since then, a second GNU implementation project called ZRTP4J was developed to bring ZRTP support to Java applications, including SIP Communicator.
SIP Witch, call serving, and the CIA
Making end-to-end secure calling usable for the average user still required a call-registration and setup server, though. GNU Telephony wanted to avoid building a complete phone switch similar to Asterisk, said Sugar, because it wanted to separate call serving from the potentially patent-encumbered task of media encoding and decoding.
The result is SIP Witch, a gateway that negotiates call setup with ZRTP but is free from the compute-bound tasks of audio codec processing. Because call negotiation is separated from media processing, SIP Witch can set up secure calls without adding latency. Furthermore, once the secure call is established, SIP Witch hands the connection off to the clients, so it can handle potentially thousands of calls on modest hardware.
In contrast, Asterisk handles SIP registration, call setup, and codec negotiation, and it encodes and decodes audio, yet it still does not support ZRTP/SRTP. Other popular SIP registration servers, such as SIP Express Router (SER), similarly build in additional features, like load balancing and media relaying, that require more processing and can add latency.
Nevertheless, SIP Witch is designed to coexist and interoperate with other telephony servers, Sugar explained. SIP Witch can sit in front of Asterisk, intercepting ZRTP requests and directing them to an encrypted softphone while permitting unencrypted calls to pass through to the Asterisk server. Multiple SIP Witch servers can also operate together, directing calls to extensions on different nodes, which makes the system suitable for large, site-wide deployments.
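The routing idea can be sketched in a few lines. The names and data structures below are invented; this is not SIP Witch's actual implementation, which is written in C++:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    uri: str
    supports_zrtp: bool

# Invented registry and endpoints, standing in for SIP registrations.
REGISTRY = {
    "sip:alice@example.org": Endpoint("sip:alice@example.org", True),
    "sip:fax@example.org": Endpoint("sip:fax@example.org", False),
}

def route_invite(caller: Endpoint, target: str) -> str:
    """Connect ZRTP-capable peers directly, so the media stream (and its
    encryption) is end-to-end and the server drops out of the path;
    everything else is passed through to the PBX."""
    callee = REGISTRY.get(target)
    if callee and caller.supports_zrtp and callee.supports_zrtp:
        return f"direct media: {caller.uri} <-> {callee.uri}"
    return f"pass through to Asterisk: {caller.uri} -> {target}"

bob = Endpoint("sip:bob@example.org", True)
print(route_invite(bob, "sip:alice@example.org"))  # encrypted, peer-to-peer
print(route_invite(bob, "sip:fax@example.org"))    # handled by the PBX
```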
In a side note, Sugar told of one of the SIP Witch-plus-ZRTP solution's first deployments "in the wild." The callers were a world-famous pair of Latin American heads of state with well-known animosity towards the US government. Prior to SIP Witch, their phone conversations were regularly intercepted and played back in the news media. After SIP Witch, the interceptions appear to have stopped.
That use case might make some feel uncomfortable (depending on their nationality and politics), but Sugar stressed repeatedly that the purpose of secure calling is not to prevent lawful interceptions or block subpoenaed wiretaps; it is to prevent illegal surveillance. Court-authorized wiretaps can still be executed, he said, perhaps by installing logging or audio recording software on the caller's computer. Granting access to place such software is no different from granting access to an office to hide a bug in the ceiling, and it is subject to judicial oversight. End-to-end VoIP encryption just prevents unauthorized eavesdropping, something that is relatively easy against unencrypted Internet-based communication.
Developing applications and services with GNU Telephony
Sugar postulated many potential uses for SIP Witch beyond the dedicated home hacker's private line. Because of its modest CPU requirements, it is a good candidate not only for the typical private branch exchange (PBX) found in an office deployment, but for remotely hosted telephone services as well, including low-cost community telecenters. SIP Witch could even run on cloud computing services, Sugar added, providing a scalable, secure calling alternative to closed systems like Skype.
One of the Secure Calling Initiative's primary goals is to make secure telephony simple enough that non-technical users will use it regularly. Sugar said that the goal has not yet been achieved, but he is optimistic. Already, the SIP Witch and GNU ZRTP stack is simple enough that organizations and governments have set it up for site-wide usage, he said, and although it will get easier in the future, the quickest path forward for casual users may be on mobile devices instead. The Java-based ZRTP4J library is aimed at such portable use, and Sugar has been working on implementing ZRTP over GSM cellular radios.
Looking forward, Sugar spoke about Zimmermann's "PBX Enrollment" feature, an extension to the Asterisk server that allows it to perform ZRTP key- and SAS-exchanges. Again, though, Zimmermann's code is not available under the GPL, so it cannot be incorporated into the GPL-licensed version of Asterisk.
Finally, Sugar took questions from the audience, including several on the problem of extending secure calling to multi-party conference calls. Secure multi-party calling remains unsolved, he said. Conference calling involves mixing multiple audio streams, which means decrypting them. The current secure calling models involve point-to-point media streams designed to be secure against eavesdropping; the key exchange protocol does not allow for more than two parties to determine the "shared secret" session key that encrypts the audio channel.
One possible solution would involve separate secure channels between every pair of callers, with the audio mixed entirely on the client side, but the bandwidth required grows quadratically with the number of participants, since each client must send and receive a separate encrypted stream for every other caller. Nevertheless, Sugar said, there is interesting work being done in multi-party conferencing, including 3-D audio positioning, which gives each caller a virtual location by mixing the stereo signal accordingly. The result is a multi-party conversation that is considerably easier to follow than the all-speakers-from-one-point audio of conventional land-line conference calls.
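A quick back-of-the-envelope calculation shows why the full-mesh approach scales poorly:

```python
# In a fully-meshed, pairwise-encrypted conference, every caller keeps a
# separate secure channel to each other participant.
for n in (3, 5, 10):
    per_client = n - 1          # streams each client must send and receive
    total = n * (n - 1) // 2    # distinct encrypted channels overall
    print(f"{n:2d} callers: {per_client} streams per client, {total} channels in all")
```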
Secure calling with free software is easier today than ever before, but Sugar and GNU Telephony are not content to stop there. Sugar recently started work at Canonical and is working on making the ZRTP libraries and SIP Witch available for Debian and Ubuntu. They are expected to ship with Ubuntu 9.10, but Sugar also provides packages through his Personal Package Archive. With luck, the increased exposure through Ubuntu will encourage more people to try ZRTP-secured calls and, perhaps, eventually make them commonplace.
System Applications
Audio Projects
ncmpc 0.15 released
Version 0.15 of ncmpc, a client of the Music Player Daemon, has been announced. "Changes include an improved build, updated lyricwiki plugin, fixed bugs and a tweaked display."
Rockbox 3.4 released
Rockbox 3.4 - a replacement firmware for a number of digital media players - has been released. New features include a number of added codecs, a pitch detector plugin, a time-stretching feature, the ability to control music players on a PC from the Rockbox device, and more. See the release notes for details.
Database Software
PostgreSQL Weekly News
The September 27, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
SQLObject 0.10.8 released
Version 0.10.8 of SQLObject, an object-relational mapper, has been announced. "I'm pleased to announce version 0.10.8, a minor bugfix release of 0.10 branch of SQLObject."
SQLObject 0.11.2 released
Version 0.11.2 of SQLObject, an object-relational mapper, has been announced. "I'm pleased to announce version 0.11.2, a minor bugfix release of 0.11 branch of SQLObject."
Interoperability
Samba Team Blog #2
The Samba Team has published Blog #2. "The Team attended the Storage Network Industry Association plugfest last week. If you haven't been to one, a plugfest is a technical event where engineers from many different companies get together and participate in fixing bugs, working together and making our systems interoperate."
Web Site Development
Sneaky web server 0.1 announced
Version 0.1 of Sneaky web server has been announced. "A fast portable pure-python multithreaded experimental WSGI web server in 300 lines of code".
Tinyproxy version 1.6.5 is now available
Version 1.6.5 of Tinyproxy has been announced; it includes several bug fixes. "Tinyproxy is a light-weight HTTP proxy daemon for POSIX operating systems. It is distributed using the GNU GPL license version 2 or above. Designed from the ground up to be fast and yet small, it is an ideal solution for use cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable."
Desktop Applications
Audio Applications
libtheora 1.1 released
The libtheora 1.1 release has been announced. It looks like a fairly major step forward for the theora codec. "This release incorporates all the work we've been doing over the last year, and the encoder has been completely rewritten, although some of the code had its genesis way back in 2003. It also brings substantial performance and robustness improvements to the 1.0 decoder."
Data Visualization
matplotlib 0.99.1 released
Version 0.99.1 of matplotlib, a data visualization package, is out with several bug fixes. See the CHANGELOG file for details.
Veusz 1.5 released
Version 1.5 of Veusz has been announced; it includes new capabilities and bug fixes. "Veusz is a Qt4 based scientific plotting package. It is written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful."
Desktop Environments
FVWM 2.5.28 released
Version 2.5.28 of FVWM has been announced; it includes new features and bug fixes. See the change log for details. (Thanks to Christoph Fritz.)
Celebrating the release of GNOME 2.28
Version 2.28 of GNOME has been announced. "Today, the GNOME Project celebrates the release of GNOME 2.28, the latest version of the popular, multi-platform free desktop environment and of its developer platform. Released on schedule, to the day, GNOME 2.28 builds on top of a long series of successful six months releases to offer the best experience to users and developers."
Linux garden gets a new GNOME with version 2.28 (ars technica)
ars technica takes a look at the recently released GNOME 2.28. "The developers behind the open source GNOME desktop environment have announced the official release of version 2.28. This version brings a handful of noteworthy improvements such as a new Bluetooth configuration tool and user interface refinements in numerous applications. One of the most significant changes is the adoption of Apple's WebKit HTML rendering engine for GNOME's Epiphany Web browser."
GNOME Software Announcements
The following new GNOME software has been announced this week:
- Cheese 2.28.0.1 (documentation work)
- Clutter 1.0.6 (new features, bug fixes and documentation work)
- Ekiga 3.2.6 (bug fixes and code cleanup)
- F-Spot 0.6.1.3 (bug fixes and translation work)
- GLib 2.22.1 (bug fixes and translation work)
- Glom 1.12.0 (new features and bug fixes)
- GNOME Commander 1.2.8.2 (bug fixes)
- GNOME Nettool 2.28.0 (bug fixes and translation work)
- goobox 2.0.1 (bug fixes and translation work)
- gtkaml 0.2.8 (bug fixes and documentation work)
- gtk-engines 2.18.4 (bug fixes)
- gtksourceview 2.8.1 (unspecified)
- krb5-auth-dialog 0.13 (new features, bug fixes and translation work)
- Libgee 0.5.0 (new features, bug fixes and code cleanup)
- PyGobject 2.20.0 (new features)
- PyPoppler 0.12.1 (new features)
- Rygel 0.4 (new features, bug fixes and code cleanup)
- Rygel 0.4.1 (bug fixes)
- Sysprof 1.1.2 (new features)
- tracker 0.7.0 (new features and bug fixes)
- Vala 0.7.7 (new features and bug fixes)
- Vala Toys for gEdit 0.6.0 (new features and bug fixes)
KDE Software Announcements
The following new KDE software has been announced this week:
- ColorCode 0.3 (unspecified)
- Kipi-plugins 0.7.0 (unspecified)
- KTorrent 3.2.4 (bug fixes)
- Necromant's Mount Manager 0.2 (unspecified)
What I Did On My Summer Holiday (KDE.News)
KDE.News has a look at the 37 Google Summer of Code projects that were completed for various KDE programs. Screen shots and brief interviews with the students are included. "Much of the work done during these projects is already merged into trunk and will be available for the users with the KDE 4.4 release in January 2010."
Xorg Software Announcements
The following new Xorg software has been announced this week:
- intel-gpu-tools 1.0.2 (new features and bug fixes)
- libdrm 2.4.14 (new features, bug fixes and code cleanup)
- libXdmcp 1.0.3 (bug fixes and code cleanup)
- libXmu 1.0.5 (bug fixes and documentation work)
- pixman 0.16.2 (bug fixes)
- xbacklight 1.1.1 (new features, bug fixes and documentation work)
- xcursor-themes 1.0.2 (build fix and code cleanup)
- xf86-video-geode 2.11.6 (code reversion)
- xf86-video-intel 2.9.0 (new features and bug fixes)
- xinput 1.4.99.3 (new features and documentation work)
- xkeyboard-config 1.7 (new feature and bug fixes)
- xorg-server 1.6.4 (new features and bug fixes)
- xorg-server 1.6.99.903 (new features and bug fixes)
- xproto 7.0.16 (bug fixes and code cleanup)
Games
Albow 2.1 released
Version 2.1 of Albow has been announced; it adds some new capabilities. "Albow is a library for creating GUIs using PyGame that I have been developing over the course of several PyWeek competitions. I am documenting and releasing it as a separate package so that others may benefit from it, and so that it will be permissible for use in future PyGame entries."
Graphics
Inkscape 0.47pre3 is out
Version 0.47pre3 of the Inkscape vector graphics editor has been announced. "The presumably last prerelease of 0.47 is out. Please fetch the files, test and let us know about bugs you run into. Date of final version's release is currently estimated as two weeks away from now."
GUI Packages
FLTK 1.1.10rc2 is now out
Version 1.1.10 rc2 of FLTK has been announced. "1.1.10 *will* be the last 1.1 release. After releasing the 1.1.10 final version, no more STRs against 1.1 will be possible. I will not reopen 1.1. There will be no 1.1.11. Nope. None. Nix."
PyQt 4.6 released
Version 4.6 of PyQt has been announced. "The highlights of this release include: - alternate, more Pythonic, APIs have been implemented for QDate, QDateTime, QString, QTextStream, QTime, QUrl and QVariant. Applications may select a particular API. By default Python v3 uses the new versions and Python v2 uses the old versions. - Qt properties can be initialised, and signals connected using keyword arguments passed when creating an instance. Properties and signals can also be set using the QObject.pyqtConfigure() method."
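A minimal sketch of the new keyword-argument style (assuming PyQt4 4.6 is installed; the widget and property names are standard Qt, but the little program itself is ours):

```python
import sys
from PyQt4.QtGui import QApplication, QPushButton

app = QApplication(sys.argv)

# Properties ("text") are initialized and signals ("clicked") connected
# directly from constructor keyword arguments...
button = QPushButton(text="Quit", clicked=app.quit)

# ...or later, on any QObject, via pyqtConfigure().
button.pyqtConfigure(toolTip="Exit the demo")

button.show()
sys.exit(app.exec_())
```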
Interoperability
Wine 1.1.30 announced
Version 1.1.30 of Wine has been announced. Changes include: "- Support for OpenAL. - Many improvements in HTML and JavaScript support. - Many common controls fixes and improvements. - More Direct3D 10 work. - Better MAPI support. - Various bug fixes."
Miscellaneous
agenda2pdf 1.0 released
Version 1.0 of agenda2pdf has been announced. "This is a simple script which generates a book agenda file in PDF format, ready to be printed or loaded on an ebook reader. You can choose among different sections. Each section have pdf links to other parts of the agenda. I've created it for using with my iLiad eBook reader."
Languages and Tools
Caml
Caml Weekly News
The September 29, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Python
CodeInvestigator 0.16.0 released
Version 0.16.0 of CodeInvestigator is out; it includes a number of bug fixes. "CodeInvestigator is a tracing tool for Python programs. Running a program through CodeInvestigator creates a recording. Program flow, function calls, variable values and conditions are all stored for every line the program executes. The recording is then viewed with an interface consisting of the code. The code can be clicked: A clicked variable displays its value, a clicked loop displays its iterations."
Cython 0.11.3 released
Version 0.11.3 of Cython, a C extension language for Python, has been announced. "We are happy to announce the release of Cython 0.11.3, which is the accumulation of numerous bugfixes and other work since the beginning of the summer. Some new features include a cython freeze utility that allows one to compile several modules into a single executable (Mark Lodato) and the ability to enable profiling Cython code with Python profilers using the new cython.profile directive. We also had two successful summer of code projects, but neither is quite ready to be merged in at this time. This will probably be the last minor release before Cython 0.12."
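For example, a module can opt in to profiling with the new directive; here is a sketch in which the module and function are invented, while the directive comment is the one named in the announcement:

```python
# fib.pyx: a Cython module opting in to profiler support
# cython: profile=True

def fib(n):
    # Plain Python code; once compiled, calls to fib() should show up
    # in cProfile output like any other Python function.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

After compiling the module, something like cProfile.run("fib.fib(25)") should then see the Cython frames as ordinary Python calls.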
Distribute 0.6.2 released
Version 0.6.2 of Distribute, a Python packaging system, has been announced. "This release is the first release that is compatible with Python 3, kudos to Martin von Löwis, Lennart Regebro and Alex Grönholm and the ones I am missing, on this work !"
Jython 2.5.1 final is out
Version 2.5.1 of Jython, an implementation of Python in Java, has been announced. "Jython 2.5.1 fixes a number of bugs, including some major errors when using coroutines and when using relative imports, as well as a potential data loss bug when writing to files in append mode."
python-colormath 1.0.5 Released
Version 1.0.5 of python-colormath has been announced. "An error in the CIE2000 Delta E equation has been found and corrected, necessitating the immediate release of python-colormath 1.0.5. All users of the 1.x series are encouraged to upgrade to avoid this mathematical error."
python-daemon 1.5.1 released
Version 1.5.1 of python-daemon has been announced. "Since version 1.4.8 the following significant improvements have been made: * Raise specific errors on failures from the library, distinguishing different conditions better. * Write the PID file using correct OS locking and permissions. * Implement 'PIDLockFile' as subclass of 'lockfile.LinkFileLock'..."
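A minimal usage sketch, assuming the 1.5-series module layout (the paths and the work function are placeholders):

```python
import daemon
from daemon.pidlockfile import PIDLockFile  # module layout as of the 1.5 series

def serve():
    # Stand-in for the real long-running work.
    with open("/tmp/example-daemon.log", "a") as log:
        log.write("daemonized\n")

# DaemonContext detaches the process; the PID file is written with the
# locking and permission handling described in the announcement.
with daemon.DaemonContext(pidfile=PIDLockFile("/tmp/example-daemon.pid")):
    serve()
```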
python-fedex 1.0 Released
Version 1.0 of python-fedex has been announced. "This GPLv3 module is a very light wrapper around the excellent suds SOAP module and FedEx's Web Services WSDLs. The purpose of this module is to prepare the WSDL objects for the user to populate and manipulate as needed, as well as handling sending and light processing for common errors. python-fedex leaves the user to read FedEx's documentation to understand all of the fields exposed by python-fedex."
SIP 4.9 released
Version 4.9 of SIP, a Python bindings generator, has been announced. "The main focus of this release is to allow alternate, incompatible wrappings of classes and functions to be defined which can then be selected by an application at runtime. This allows application developers to manage the migration from an old, deprecated API to a new one."
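For PyQt users, the visible face of this mechanism is sip.setapi(), which must be called before the bindings are first imported; a short sketch (the class names come from the PyQt 4.6 announcement above):

```python
import sip

# Select the "version 2" wrappings before PyQt4 is first imported;
# afterward, QString values arrive as plain Python strings and QVariant
# wrapping gives way to native Python objects.
sip.setapi("QString", 2)
sip.setapi("QVariant", 2)

from PyQt4 import QtCore  # imported only after the API selection
print(QtCore.PYQT_VERSION_STR)
```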
Python-URL! - weekly Python news and links
The September 26, 2009 edition of the Python-URL! is online with a new collection of Python article links.
Build Tools
ControlTier 3.4.8 released
Version 3.4.8 of ControlTier has been announced. "ControlTier is a cross-platform build and deployment framework and toolkit. ControlTier coordinates service management activities across multiple nodes and application tiers. It supplements and replaces homegrown service management and deployment scripts with a well-defined set of lifecycle commands that abstract the details of various types of deployments."
Libraries
mds-utils 1.1.0 released
Version 1.1.0 of mds-utils has been announced. "It's a C++ library composed by different utilities. Amongst its features it contains classes to treat a FILE* as a C++ stream. It contains also some utilities for developing C++ Python extensions."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
Adobe CMap and AGLFN data now free software
Paul Wise reports that Adobe Character Map (CMap) has been released under the terms of the BSD license and Adobe Glyph List For New Fonts (AGLFN) data will soon follow. "Please note that while this means that modifications are technically allowed, they are still strongly discouraged for compatibility reasons. Adobe has assigned an emailable maintainer (currently Ken Lunde) for these files so there is no reason that modifications should not be done upstream."
Commercial announcements
MIPS Joins the Open Handset Alliance
MIPS Technologies has joined the Open Handset Alliance. "MIPS Technologies, Inc., a leading provider of industry-standard processor architectures and cores, announced it has joined the Open Handset Alliance(tm), a group of more than 45 technology and mobile companies working to offer consumers a richer, less expensive, and better mobile experience. The Open Handset Alliance developed Android, the first complete, open and free mobile platform."
Red Hat reports second quarter results
Red Hat's financial results from the second quarter of 2009 have been published. "Total revenue for the quarter was $183.6 million, an increase of 12% from the year ago quarter. Subscription revenue for the quarter was $156.3 million, up 15% year-over-year. 'IT organizations continue to move ahead with purchases of high value solutions, and Red Hat is capitalizing on this demand as a result of our strong customer relationships and proven value proposition. These factors contributed to our better than expected total revenue in the second quarter, and drove annual subscription revenue growth of 15% for both the quarter and first half of fiscal year 2010.'"
Legal Announcements
Google shutting down independent Android image developers?
There are reports that Google has sent a cease-and-desist letter to CyanogenMod, perhaps the most active independent creator of alternative images for Android phones since JesusFreke left the scene. The issue would appear to be the packaging of Google's closed-source applications - things like maps, the market application, gmail, etc. That is all stuff that Android phone owners already are licensed to run. If some sort of understanding is not reached, this action could have the effect of significantly chilling outside development for Android phones. Or perhaps it will just motivate the development of free alternatives for those few applications.
New Books
Designing Social Interfaces--New from O'Reilly
O'Reilly has published the book Designing Social Interfaces by Christian Crumlish and Erin Malone.
Learning Python, Python Pocket Reference 4th Ed.
O'Reilly has published the books Learning Python and Python Pocket Reference 4th Ed. by Mark Lutz.
Resources
CE Linux Forum Newsletter
The September, 2009 edition of the CE Linux Forum Newsletter is out. "In this month's CE Linux Forum newsletter: * ELC Europe Program Announced, Registration Open * SMACK white paper published * CELF BOF and Plenary Meeting announced * Japan Linux Symposium coming".
Is Linux Code Quality Improving? (internetnews.com)
internetnews.com analyzes the latest Coverity Scan report on open-source software. "Coverity has seen an overall 16 percent reduction in the defect density found in the projects it has scanned over the last three years. Yet while the defect density has declined, the most recent Coverity Scan Open Source Report notes that the most common defect types are holding steady. For the last two years, the most common defect type reported by Coverity in its open source scan is something known as a 'NULL pointer dereference'."
Updegrove: Further Reflections on the CodePlex Foundation: The Glass Half Full
Linux Foundation lawyer Andy Updegrove has posted a new article about the CodePlex Foundation. "Two weeks ago, I wrote a critical analysis of the governance structure of the CodePlex Foundation, a new open source-focused foundation launched by Microsoft. But what about the business premise for the Foundation itself? Let's say that CodePlex does restructure in such a way as to create a trusted, safe place for work to be done to support the open source software development model. Is there a need for such an organization, and if so, what needs could it help meet?"
Calls for Presentations
Black Hat DC call for papers
A call for papers has gone out for Black Hat DC. "It will be held February 2-3, 2010 at the Hyatt Regency Crystal City in D.C. The CFP closes December 1, 2009."
Upcoming Events
ACM Conference on Computer and Communications Security
The ACM Conference on Computer and Communications Security will take place on November 9-13 in Chicago, IL. "Featuring 58 technical papers, on Applied Cryptography, Attacks, RFID, Privacy, Anonymization, Formal Techniques, Cloud Security, Security of Mobile Services, Security for Embedded and Mobile Devices, Systems and Networks Security, Software Security, Designing Secure Systems, Malware and Bots topics. The program also includes 5 tutorials, 12 workshops, and poster/demo session."
Embedded Linux Conference Europe Program Announced
The Embedded Linux Conference Europe has announced its program for the event, which is being held in Grenoble, France, October 15 and 16. Highlights include keynotes from Jon Masters on porting Linux and Philippe Gerum on the state of realtime Linux. "Authors of some of the most useful and important resource books in the Linux industry will be there, as well as developers and experts from companies like: Philips, Sony, ST Microelectronics, Free Electrons, Mentor Graphics, MontaVista, Pengutronix, and Wind River. [...] The conference will host over 40 sessions, including presentations, Birds-of-a-Feather sessions, keynotes and tutorials." Click below for the full announcement.
NLUUG Autumn Conference - The Open Web (KDEDot)
KDE.News has announced the NLUUG Autumn Conference. "On October 29 the dutch NLUUG will organise a conference about 'The Open Web'. In 18 talks and one keynote we hope to give you the best from a wide range of topics. Things you can expect are cool stuff you can do with HTML5, integrating geoinformation in applications with Geoclue, comet, the social desktop (integrating information from web services and all contacts into applications and your desktop) and much more."
PyPy Sprint announced
The next PyPy Sprint will be held in Düsseldorf, Germany on November 6-13. "At the sprint we intend to work on the JIT generator in PyPy and on applying it to PyPy Python interpreter. The precise work that will be done is not fixed, as we don't know in which state the JIT will be in November."
XMMS2 Conference 2010
The XMMS2 Conference 2010 has been announced. "It will be held in Malmö, Sweden. Purple Scout have allowed us to use their offices to host the conference. We have big screen projector, nice beer fridges, lots of sofas and several different types of Rock Band, so it will fit us perfectly! Date is not 100% set in stone, so we would like some input for that."
Events: October 8, 2009 to December 7, 2009
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
October 7–9 | Jornadas Regionales de Software Libre | Santiago, Chile
October 8–10 | Utah Open Source Conference | Salt Lake City, Utah, USA
October 9–11 | Maemo Summit 2009 | Amsterdam, The Netherlands
October 10–12 | Gnome Boston Summit | Cambridge, MA, USA
October 10 | OSDN Conference 2009 | Kiev, Ukraine
October 12–14 | Qt Developer Days | Munich, Germany
October 15–16 | Embedded Linux Conference Europe 2009 | Grenoble, France
October 16–17 | Pycon Poland 2009 | Ustron, Poland
October 16–18 | Pg Conference West 09 | Seattle, WA, USA
October 16–18 | German Ubuntu conference | Göttingen, Germany
October 18–20 | 2009 Kernel Summit | Tokyo, Japan
October 19–22 | ZendCon 2009 | San Jose, CA, USA
October 21–23 | Japan Linux Symposium | Tokyo, Japan
October 22–24 | Décimo Encuentro Linux 2009 | Valparaiso, Chile
October 23–24 | Ontario GNU Linux Fest | Toronto, Ontario, Canada
October 23–24 | PGCon Brazil 2009 | Sao Paulo, Brazil
October 24–25 | PyTexas | Fort Worth, TX, USA
October 24–25 | FOSS.my 2009 | Kuala Lumpur, Malaysia
October 24 | Florida Linux Show 2009 | Orlando, Florida, USA
October 24 | LUG Radio Live | Wolverhampton, UK
October 25 | Linux Outlaws and Ubuntu UK Podcast OggCamp | Wolverhampton, UK
October 26–28 | Techno Forensics and Digital Investigations Conference | Gaithersburg, MD, USA
October 26–28 | GitTogether '09 | Mountain View, CA, USA
October 26–28 | Pacific Northwest Software Quality Conference | Portland, OR, USA
October 27–30 | Linux-Kongress 2009 | Dresden, Germany
October 28–30 | Hack.lu 2009 | Luxembourg
October 28–30 | no:sql(east) | Atlanta, USA
October 29 | NLUUG autumn conference: The Open Web | Ede, The Netherlands
October 30–November 1 | YAPC::Brasil 2009 | Rio de Janeiro, Brazil
October 31 | Linux theme day with ubuntu install party | Ede, Netherlands
November 1–6 | 23rd Large Installation System Administration Conference | Baltimore, MD, USA
November 2–6 | ApacheCon 2009 | Oakland, CA, USA
November 2–6 | Ubuntu Open Week | Internet
November 3–6 | OpenOffice.org Conference | Orvieto, Italy
November 4–5 | Linux World NL | Utrecht, The Netherlands
November 5 | Government Open Source Conference | Washington, DC, USA
November 6–8 | WineConf 2009 | Enschede, Netherlands
November 6–10 | CHASE 2009 | Lahore, Pakistan
November 6–7 | PGDay.EU 2009 | Paris, France
November 7–8 | OpenFest 2009 - Biggest FOSS conference in Bulgaria | Sofia, Bulgaria
November 7–8 | OpenRheinRuhr | Bottrop, Germany
November 7–8 | Kiwi PyCon 2009 | Christchurch, New Zealand
November 9–13 | ACM CCS 2009 | Chicago, IL, USA
November 10–11 | Linux Foundation End User Summit | Jersey City, New Jersey
November 12–13 | European Conference on Computer Network Defence | Milan, Italy
November 13–15 | Free Society Conference and Nordic Summit | Göteborg, Sweden
November 14 | pyArkansas | Conway, AR, USA
November 16–19 | Web 2.0 Expo | New York, NY, USA
November 16–20 | INTEROP | New York, NY, USA
November 16–20 | Ubuntu Developer Summit for Lucid Lynx | Dallas, TX, USA
November 17–20 | DeepSec IDSC | Vienna, Austria
November 19–22 | Piksel 09 | Bergen, Norway
November 19–21 | Firebird Conference 2009 | Munich, Germany
November 19–20 | CONFIdence 2009 | Warsaw, Poland
November 20–21 | PostgreSQL Conference 2009 Japan | Tokyo, Japan
November 21 | Baltic Perl Workshop 2009 | Riga, Latvia
November 25–27 | Open Source Developers Conference 2009 | Brisbane, Australia
November 27–29 | Ninux Day 2009 | Rome, Italy
December 1–5 | FOSS.IN/2009 | Bangalore, India
December 4 | Italian PostgreSQL Day 2009 | Pisa, Tuscany, Italy
December 5–7 | Fedora Users and Developers Conference | Toronto, Canada
If your event does not appear here, please tell us about it.
Web sites
First KDialogue Is Now Open (KDEDot)
KDE.News has announced the launch of the KDialogue site. "Today, the KDE Community Forums, in collaboration with "People Behind KDE", have launched a new initiative to give the community an opportunity to get to know each other a bit closer: KDialogue. KDialogue is, in short, a way to ask one of the community members about their personal and KDE related life. At fixed intervals, a KDE contributor will be asked to participate in a dialog."
Audio and Video programs
Two recent Ardour podcasts
Paul Davis, creator of the Ardour digital audio workstation project, is featured in two podcasts. "FLOSS Weekly and Open Source Musician recently each did a long podcast with Paul about Ardour and all things related. The questions and overall direction of each one are different, so if you have time to spare to listen to them, check both of them out."
Page editor: Forrest Cook