LWN.net Weekly Edition for May 14, 2009
On GNOME and its Foundation: an interview with Luis Villa
LWN recently posted a brief article on the GNOME Foundation's plea for support to help it get through a difficult year. Some of the comments on that news questioned the role of the foundation and its executive director. In response, the Foundation offered to make a board member - Luis Villa - available for an interview. Luis quickly answered our questions, despite being in the middle of final exams at the time; some people, it seems, will do anything to get out of studying. The result is an interesting view into the state of the GNOME project and where it is heading.

LWN: Could you tell us about your involvement with GNOME and the board?
What does the GNOME board do?
On the support side, we take a look at what our community and corporate partners are working on, and try to match people, projects, and resources. The biggest part of that, historically, has been getting everyone together at GUADEC. In the past few years we've been trying to expand that - we've done more events and hackfests; we've helped out with marketing; we've started giving grants for certain kinds of hacking (primarily a11y [accessibility]); and we've tried to make resources available to spur work on GNOME Mobile and other subprojects.
On the stewardship side, the Foundation owns the GNOME trademark, controls GNOME funds, and generally manages other resources (technically we own several servers, for example, though in practice they all live in other people's colos.) And technically most GNOME teams (like the release team) report to the board, though in practice we have a very, very light hand on the tiller.
One thing we don't do, very explicitly, is technical leadership. That comes from the community.
With all this under the Foundation's purview, the board ends up making a number of small decisions that matter to GNOME, and in practice, we do a lot of the work of the Foundation as well.
The GNOME Foundation recently posted a budget and announced that, if funding is not found from somewhere, the foundation would have to cut either the executive director position or the activities budget. In your opinion, how dire is the budget forecast, and how did this situation come to be?
How it came to be is fairly straightforward. After we cut our last director's salary from the budget, we ran a large surplus for several years. It was hard for us as an essentially all-volunteer organization to actually spend this money - organizing events and doing coordination is really time-consuming, and frankly isn't something that we (as hackers) are terribly great at even if it were our full-time job. At the same time, we felt there was a need there for more events, resources, etc., and there seemed to be a willingness on the part of our corporate partners to invest even more if we could give them a way to do it.
So last year the board felt that it was time to expand. We grew our investments in things like hackfests. We also decided to hire a new ED who could help us do more for our developer community and for our users, and help us grow financially. We knew that this extra salary and extra spending would put us in the red for a few years. But we thought that this was a classic 'spend money to make money' situation - we thought the investment in events and in Stormy would allow us to reach more sponsors and would bring more value to our existing sponsors.
Our timing, obviously, couldn't have been worse - we hired Stormy in July, just as the recession began to break. So the investment hasn't paid off like we thought it would. We have increased the number of sponsors we've got, and many of our existing sponsors have increased their level of investment, so it hasn't been all bad, but definitely not enough. And obviously under the economic circumstances it isn't going to get any easier. Hence the message to our membership you referred to.
Stormy has been the executive director since last July. Can you summarize what she has done for the Foundation since then? Why does the Foundation need an executive director?
We're seeing lots of the former and some of the latter already with Stormy, and I fully expect to see more of it. I won't bore your readers with the full list, but among other things she's helped us expand our fundraising, helped organize events (inc. GUADEC and hackfests), improved communications with our advisory board, helped restart our marketing group, dealt with some legal questions, helped broker a deal to upgrade our bugzilla, and worked on a plan to hire a sysadmin. So I think our initial decision to make this investment and take the risk was the right one. Of course, whether it makes sense long-term is still an open question - we will have to balance our budget eventually.
Some commenters on LWN have suggested Stormy's first responsibility should be to raise enough money to pay for her own existence. Does the GNOME board see things that way?
In the past, you've expressed concerns that a poorly-handled GNOME 3 initiative could encounter the same difficulties as KDE 4. How do you feel about where the GNOME 3 effort is going?
I think GNOME 3 ran the same risk as KDE 4 when we were focusing on gtk 3 as the driver behind GNOME 3. But we're focusing now on what users are going to see - on the new Shell, and on Zeitgeist. I don't think either of those are perfect, by any stretch, but I think they have at least the potential to offer a really compelling answer to the question of 'why should I use this?' The KDE team, by the way, is moving in that direction as well - I think their social desktop work, for example, has the potential to offer a very compelling story for users. If I were them, once that is mature and well-integrated I'd go ahead and call that KDE 5. Whether GNOME or KDE, that kind of user-focused, problem-solving feature is way more important than what version of the toolkit you build on.
The recent discussion of the one-slider GNOME volume control has brought back charges that the GNOME project values simplicity over giving control to the user. Is that your view of the GNOME project? Why do you think GNOME continues to have that reputation?
The long, and more serious answer is, well, long. There are a couple aspects of our philosophy that cause this problem:
(1) One aspect of our philosophy is that we always prefer to fix underlying problems instead of papering them over in the UI. As someone put it c. 2001, 'many options in a lot of our tools are really a switch that means "work around this bug."' Our philosophy is that you should fix the bug instead of adding the option. As a result, some of our software, particularly when it is very new, can be a real pain if it turns out you were relying on those bugs or on workarounds for those bugs.
Network Manager was like that for a long time - it worked on the majority of hardware and use cases, but certainly not all of it, so people kept screaming for new options. But the developers stuck with it, introducing new features only when they were sure they could do it as automagically as possible, and fixing bugs at lower levels instead of hacking around them at the UI level. And the entire Linux platform - for GNOME users and for non-GNOME users - is better now because we've forced wireless drivers to fix their bugs instead of providing workarounds in the UI. As a result, we've now got a tool that is reliable for virtually everyone and simple to use. Still not perfect, but I think comparable in ease-of-use and power with anything on any OS. I think the volume control will eventually be the same way, though admittedly it seems rough enough that I'm not sure I would have shipped it quite yet if it were my call.
(2) Another aspect of our philosophy is that options have a cost. For developers, they have a cost in QA; they have a cost in debugging; they have a cost in maintenance. Everyone who has done QA in free software has piles of stories about the horrors of debugging something because all the options weren't set just right. So we think that overall we make more software, and better software, by focusing in this way. More importantly, for users, options have a cognitive cost. It takes time and mental effort to figure these things out; time and effort that could be better spent doing the things you use a computer for - working on projects; talking with your friends; or whatever. You or I, who are experts and have used Linux as part of our day job every day for over a decade now, don't notice this cost. But for people who view Linux as a means to an end - getting their other work done - these costs are present every time they try to mess with the system. Again, why does my girlfriend want to see 8 volume switches when she goes to play her music? She just wants one, just like she just wanted her networking to work - and now it does.
(3) Finally, we believe that you can't make software that pleases everyone. You can make software that pleases experts, but most of the time non-experts hate that software. (Office, for example, was like this for a long time.) We're unabashedly trying to make software that works well for average users and not experts. We hope, obviously, that experts will use it, like it, and help us make it even better. (For example, you could help us work on a better plugin infrastructure so that we could move more options into plugins, like Firefox does ;) But if you like spending hours tweaking things so that you feel like you have more 'control', then yeah - it might be better for everyone if we just agree to disagree.
Obviously, I think these are all reasonable and important parts of our software philosophy; I think it means we make better software. If everyone understood them, we would still have some disagreements, but the disagreements would be made on more substantive grounds, with better understanding of the tradeoffs involved. We'd really want to see people criticize us on solid grounds - like, did we switch to the new volume control too early? how can we enable experts in ways that don't have big costs? - rather than on what we think of as fairly unreasonable grounds like 'I want my switches back.' For those who do want to understand this philosophy better, I'd recommend reading chapter five of the 37 Signals book 'Getting Real' - I don't agree with all of it, but that's the best reference I can think of for how we feel about features.
Is there anything else you'd like to tell LWN's readers?
Past that... I'm sure I'll think of something about an hour after the article goes up ;)
Your hour starts now :). Thanks to Luis for taking the time to answer our questions in such depth.
Open fonts at Libre Graphics Meeting 2009
École Polytechnique in Montreal played host to the fourth annual Libre Graphics Meeting (LGM) May 6 through 9, gathering around 100 developers and users of free graphics software from across the globe to collaborate, discuss, and learn. One of the biggest topics of the week was free and open fonts: their licensing, design, and integration with the free software desktop. In just a few short months, the release of Firefox 3.5 will push the issue into the forefront courtesy of Web Fonts, and the free software community aims to be ready.
![Dave Crossland](https://static.lwn.net/images/lgm2009-1_sm.jpg)
Dave Crossland and Nicholas Spalinger of the Open Font Library (OFLB) project each delivered a talk about OFLB (Crossland on the project's web site relaunch, and Spalinger on the challenges it faces moving forward), but the importance of free-as-in-freedom fonts permeated into several other talks as well. Developer Pierre Marchand demonstrated changes in an upcoming revision of his FontMatrix application, and the World Wide Web Consortium's (W3C) Chris Lilley spoke about Web Fonts and other developments in CSS3.
Additionally, the "users" represented at LGM included graphic artists, but also professionals deeply invested in free font support for open source software — including XeTeX creator and Mozilla's font specialist Jonathan Kew, Brussels-based design agency Open Source Publishing, and Kaveh Bazargan, whose company uses free software to handle typesetting and file conversion for major academic publishing houses like the Institute of Physics and Nature.
A free font and free software primer
As with software, the main front in the battle over free fonts is licensing. Historically, digital type foundries like Adobe and Monotype have sold proprietary fonts to graphic design houses and publishers under very restrictive licensing terms that prohibit all redistribution. Freely redistributable fonts have existed for years, but licensing them in a free software context can be complicated, too.
When the font is used solely to produce printed output, licensing is not a problem, but when the font must be embedded inside another digital file (such as a PDF), incompatibilities arise because fonts contain executable code (such as hinting, which algorithmically adjusts the width and height of glyph strokes to align with the pixel grid of the display device to optimize sharpness) in addition to the glyphs themselves. Including the font inside another document that contains executable code — such as PDF or PostScript — makes the resulting document a derivative work of the font.
A "font exception clause" for the GPL was written to allow font designers to license their creations under GPL-compatible terms without activating the GPL for all documents embedding the font. That solution did not catch on with type designers for a number of reasons, including the naming conventions of the type design world — where derivative fonts customarily do not reuse the upstream font's name to avoid confusion. Nonprofit linguistics organization SIL International created the simpler, font-specific Open Font License (OFL) to address designers' concerns while permitting redistribution, modification, and extension. The Open Font Library project was started to foster the creation and distribution of high-quality free fonts under the OFL.
OFLB has grown steadily since its inception, presently hosting around 100 fonts, but the project anticipates a sea change when Firefox 3.5 is publicly released this spring. Firefox 3.5 will add support for Web Fonts via the @font-face CSS rule, which allows a web page to specify text display using any font accessible using an HTTP URI. Before @font-face, the only fonts available for selection through CSS were the ten "core fonts for the Web" from Microsoft: Andale Mono, Arial, Comic Sans, Courier New, Georgia, Impact, Times New Roman, Trebuchet MS, Verdana, and the always popular Webdings.
Because commercial type foundries by and large still object to redistribution of their products — even for display purposes only — the advent of @font-face marks a tremendous opportunity for OFLB and free fonts in general.
OFLB gets a redesigned site
Crossland previewed OFLB's newly visually- and technologically-revamped web site. Donations paid for a professional redesign to appeal to graphic designers regardless of their interest in free software principles, and the new site runs on the ccHost content management system developed by Creative Commons.
The OFLB site will allow type designers to upload their fonts for public consumption; users will search and download them, and can re-upload "remixes" of the originals. Font "remixes" are expected to center around filling in missing glyphs, allowing the OFLB community to flesh out support for non-Latin alphabets, but remixes that make aesthetic changes to the original are also supported. In keeping with the OFL, remixes and originals will be cross-linked to each other, but remixes will have to choose a distinct name.
The new site will foster WebFont usage by allowing direct linking to its resources in @font-face directives. Each font's page contains the required CSS code snippet for simple copy-and-pasting into a page or template. OFLB has also worked to get its online library directly integrated into the font editing application FontForge. Crossland noted that although proprietary web page design software like Dreamweaver is popular with graphic designers, no such GUI tool is common for free software users, who tend to create sites with content management systems (CMS). The project is interested in integrating OFLB support into open source CMSes such as WordPress or Drupal that support theming, but nothing is in the works yet.
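For readers who have not seen one, a snippet of the kind OFLB would provide looks roughly like this (the font name and URL here are illustrative placeholders, not an actual OFLB font):

```css
/* Hypothetical example of an @font-face rule of the sort an OFLB
   font page might offer for copy-and-paste; "Example Sans" and the
   URL are made up for illustration. */
@font-face {
  font-family: "Example Sans";
  src: url("https://openfontlibrary.example/fonts/example-sans.ttf")
       format("truetype");
}

body {
  font-family: "Example Sans", sans-serif;
}
```

Any browser with @font-face support, such as Firefox 3.5, will fetch the font file over HTTP and use it to render the page, with the generic `sans-serif` fallback covering older browsers.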
![Pierre Marchand](https://static.lwn.net/images/lgm2009-2_sm.jpg)
Between talks, discussion turned to the possibility of integrating features from Marchand's FontMatrix into the OFLB site. FontMatrix is a tool for maintaining large collections of fonts, selectively activating only those needed so as to conserve memory and make selection easier within design applications, but Marchand has added more and more diagnostic features to the program with each revision. The new version of FontMatrix he demonstrated can explore font metadata in depth, allowing searching through font collections based on such facets as language support, style, weight, license, and creator. The OFLB site could re-use some of that code to empower visitors to search its font collection in ways more powerful than today's tag-based browsing.
Growing the free font tent
![Collaboration Lab](https://static.lwn.net/images/lgm2009-4_sm.jpg)
Spalinger's OFLB talk focused on the challenges the project faces, including the possibility that users will attempt to upload fonts to the site that they do not own, such as proprietary fonts from commercial foundries. The project is debating how best to manage the site to ensure that only properly attributed, OFL-licensed work is submitted. Lilley observed that it may not be the project's legal responsibility to police the site, but only to respond appropriately when a type designer registers a complaint. Crossland concurred with that sentiment, but added that the project also wants to establish a bright line between its service, which aims to provide a designer-friendly, high-quality collection, and the scores of low-quality "free font" sites that garner little credibility or trust because of their policies.
Crossland added that one possibility would be to approach commercial foundries and offer to perform font fingerprinting on their products using FontMatrix's tools, then alert the foundries if a possible match was uploaded. Kew thought this approach unlikely to succeed, suggesting instead that it was better to do the reverse: make a public feed available of the fingerprints of the OFLB fonts, then respond to questions and concerns of the foundries if they detect a problem.
Other concerns include proposals for font file formats that include DRM — such as Microsoft's Embedded OpenType — and how best to encourage font designers to collaboratively extend OFLB fonts (such as adding new alphabets) without creating a glut of remixes for each source font that are never merged back into the upstream original.
Conclusion
Back in April, Mark Pilgrim famously ranted at the foundries for their stubbornness and refusal to acknowledge the importance of WebFonts. Crossland referenced Pilgrim's comments in his talk, observing that the ability of @font-face to disrupt the legacy foundries' business model was a golden opportunity for OFLB and, by extension, free software. The foundries think that @font-face will cannibalize sales, but the end users who see the type displayed via @font-face were never the foundries' customers to begin with. The graphic designers are the customers, and graphic designers love fonts. If the foundries offer them nothing for use in WebFonts, OFLB may well be their only option.
Other LGM sessions over the four-day event featured updates from major open source graphics and design applications like Scribus, Inkscape, and Gimp, research and technical demonstrations, and debates on critical issues such as usability, the rise of non-free web applications, and combining free software with profitability. All of the conference presentations and Q&A sessions were recorded by Bazargan, and are now available online in multiple video formats.
NLUUG: The bright future of Linux filesystems
As the maintainer for the ext4 file system, Ted Ts'o was the perfect speaker to open the recent NLUUG Spring Conference with the theme "File systems and storage". In his keynote at the conference in the Netherlands, he placed into context some developments and changes in file system and storage technologies.
His central question was: why has there been a flowering of new file systems showing up in Linux in the last 18 months? New file systems that have recently become available in the mainline kernel include ext4, btrfs, and UBIFS. The next Linux kernel release, 2.6.30, adds three new file systems: Nilfs, Pohmelfs, and exofs (formerly known as osdfs). Ts'o said that "it's now a fairly exciting time for file systems", and he added that this is partly thanks to Sun: "Sun woke up the field with their file system ZFS and they should deserve credit for it. Before the appearance of ZFS, the development of file systems virtually stood still for decades." At the moment, the Linux kernel tree lists 65 file systems, although most of them are optimized for a specific task and are not much used. Ts'o sees this as an opportunity for developers to experiment and innovate.
Of course, the development of all these file systems doesn't come out of the blue. They are driven by new developments in storage technology, such as the advent of solid state drives (SSDs), data integrity fields, and 4K sectors. SSDs especially have changed a lot in the storage stack: "The shift from relatively slow hard disks to fast SSDs means that many assumptions in the storage stack don't hold anymore." Even though Ts'o does not expect SSDs to replace hard disks completely, he sees the shift as an interesting opportunity: "This spurs a lot of development, as people are finally talking about changing storage interfaces."
One change that is happening now is the shift from 512-byte physical sectors to 4K in hard drives. The abstraction of 512-byte sector sizes has been here for decades, and it's not easy to change, as the transition affects a lot of subsystems that don't accept a 4K sector size currently. For example, the partitioning system and the bootloader require changes because they both rely on the fact that partitions start from the 63rd sector of the drive, which is misaligned with the 4K sector boundary. A proposed solution is to align 512-byte logical sectors in a way that the first logical sector starts from the second octant (512 bytes) of the physical first 4K sector. However, Microsoft Windows spoils the party because it starts the partition table at a 1M boundary, which is incompatible with this "odd-aligned scheme". According to Ts'o, this is one of the reasons why storage vendors like to talk to open source projects: they want to move forward instead of holding on to legacy solutions. It remains to be seen whether Windows will join the party.
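The alignment arithmetic behind that incompatibility is easy to check for oneself. A sketch (not kernel code) that verifies why the legacy sector-63 start is misaligned while a 1 MiB start is not:

```python
# Quick check of the 4K-alignment arithmetic described above.
# A partition starting at logical sector 63 of a drive with 512-byte
# logical sectors begins at byte offset 63 * 512 = 32256, which is not
# a multiple of 4096; a partition starting at a 1 MiB boundary
# (logical sector 2048) is naturally 4K-aligned.

LOGICAL_SECTOR = 512    # bytes per logical sector
PHYSICAL_SECTOR = 4096  # bytes per new-style physical sector

def is_4k_aligned(start_sector: int) -> bool:
    """True if a partition starting at this 512-byte logical sector
    lands exactly on a 4K physical-sector boundary."""
    return (start_sector * LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0

print(is_4k_aligned(63))    # legacy DOS-style partition start: False
print(is_4k_aligned(2048))  # 1 MiB boundary, 2048 * 512 bytes: True
```

A misaligned partition means that every 4K file-system block straddles two physical sectors, turning each write into a read-modify-write cycle on the drive.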
Another change that Ts'o deems important is object-based storage. Instead of presenting the abstraction of an array of blocks, addressed by their index in the array (as traditional storage systems do), an object store presents the abstraction of a collection of objects, addressed by a unique id. If the operating system uses object-based storage, it stores an object with an id, without having to know low-level details such as the sector or cylinder of the block on the hard drive. When the operating system wants to read the object later, it only has to know the object's id. Ts'o sees many advantages in this approach: "With object-based storage, the operating system can push more intelligence into the hard disk, which is better placed anyway to make intelligent decisions and improve performance."
Ts'o also notes that abstractions such as disks, RAID, logical volume management, and file systems are blending into each other more and more. "Maybe those different interfaces don't make sense anymore? ZFS figured this out very well by building all those interfaces under the umbrella of the file system, and btrfs will do something similar." But he warns that this doesn't mean that people should settle on ZFS or btrfs: "I hope that developers will keep exploring abstractions to find the right interfaces." Ts'o also expressed his hope that the license incompatibility between ZFS (CDDL) and Linux (GPL) would get fixed.
As a typical example of the proliferation of specialized file systems, Jörn Engel talked at the NLUUG conference about LogFS, his scalable file system for flash devices. Because most current file systems are designed for use on rotating drives, and because flash-based storage has some quirks, Engel decided to design a file system explicitly for flash. He started with a fast filesystem (FFS) style design and adjusted a lot of the algorithms to work better with flash. For example, for copy-on-write, FFS rewrites blocks in place after the copy. Because flash storage cannot be simply overwritten, a flash block must be erased and rewritten in two separate steps, a requirement which can cause serious performance problems. Engel's solution was to use a log-structured design instead. Another issue was that the journal is written often to the storage. Because there are limits to the number of times a block of flash memory can be erased and rewritten reliably, Engel's solution is to move the journal from time to time.
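The log-structured idea can be illustrated with a toy model (this is not LogFS code, just a sketch of the principle): new versions of data are appended rather than rewritten in place, and an index tracks the latest location of each logical block.

```python
# Toy illustration of why a log-structured design suits flash:
# instead of erasing and rewriting a block in place (slow and
# wear-inducing on flash), every write appends to the log, and an
# index records where the current version of each block lives.
# Stale copies remain on the "media" until garbage collection.

class LogStructuredStore:
    def __init__(self):
        self.log = []    # append-only sequence of (block_no, data)
        self.index = {}  # logical block number -> position in the log

    def write(self, block_no: int, data: bytes) -> None:
        self.log.append((block_no, data))  # append, never overwrite
        self.index[block_no] = len(self.log) - 1

    def read(self, block_no: int) -> bytes:
        return self.log[self.index[block_no]][1]

store = LogStructuredStore()
store.write(7, b"v1")
store.write(7, b"v2")  # the update appends; the old copy awaits GC
print(store.read(7))   # b'v2'
print(len(store.log))  # 2 -- both versions still occupy log space
```

Moving the journal around, as Engel does, applies the same logic to the file system's own metadata, so that no single group of erase blocks wears out faster than the rest.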
Engel said that LogFS is almost ready for use. He is still chasing one hard-to-replicate bug, but, after that, he plans to submit the code for inclusion in the Linux kernel tree. LogFS should be better than JFFS2 on larger devices, because JFFS2 stores no filesystem directory tree on the device. This means that JFFS2 has to perform a time- and memory-consuming scan when it mounts the file system, building the directory tree at that time. Putting the tree on the device, as LogFS does, reduces mount time and memory requirements.
At the NLUUG Spring Conference, a lot of recent developments were discussed, not only regarding file systems, as Ts'o showed, but also higher in the storage stack. Michael Adam, for example, stressed that Samba, which started as a free re-implementation of Microsoft's SMB/CIFS networking protocol, now allows for setting up a clustered CIFS server, a feature that current Microsoft servers do not offer.

The NLUUG Spring Conference was an interesting event thanks to the breadth of the topics presented. On the one hand, there were introductory talks about the possibilities of ZFS, the virtual filesystem libferris, and practical experiences with WebDAV. On the other hand, visitors could get first-hand and highly specific information about the future direction of projects like DRBD, device-mapper, and LogFS. In this way, the conference had something for everyone: it gave a broad overview of the current state of the art in file systems and storage, while providing enough technical detail for those interested. At least your author came home with a better understanding of file systems and storage in the Linux ecosystem.
Security
Random numbers for ASLR
Two weeks ago on this page, we looked at two problems in the Linux implementation of address space layout randomization (ASLR). At the time, neither had been addressed by kernel hackers, but, since then, both have been. One was a rather trivial fix, but the other, regarding random number generation (RNG), led to a bit of a contentious thread on linux-kernel.
There is always some tension between performance and security, and that is part of why there was disagreement about how to fix a clearly faulty RNG used for ASLR. As noted in our earlier article, recently reported research had shown that the address space of a process could be reconstructed because of the poor RNG used to generate the values. Part of the early discussion of the problem, along with the patch originally proposed by Matt Mackall, occurred on the private security@kernel.org mailing list. It surfaced on linux-kernel when Linus Torvalds asked for opinions on Mackall's fix:
Quite frankly, the way "get_random_bytes()" works now (it does a _full_ sha thing every time), I think it's insane overkill. But I do have to admit that our current "get_random_int()" is insane _underkill_.
I'd like to improve the latter without going to [quite] the extreme that matt's patch did.
Mackall's patch used get_random_bytes()—a kernel-internal source of random numbers which is equivalent to user space reading from /dev/urandom—to define a get_random_u32(). That function had a much better name as well as better random numbers. But, as Torvalds pointed out, it does a lot more work. Mackall believed that was a reasonable starting point:
Torvalds came up with a patch that improved the randomness of get_random_int() by keeping the calculated hash data between invocations, using that and a bit of noise from the system (pid + jiffies) to feed back into the half_md4_transform() function. Though it still uses the key that changes every five minutes, there is general agreement that this increased the strength of the RNG without impacting performance.
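The structure of that design can be sketched in user space. The following is an illustration of the idea only, with standard-library stand-ins: the kernel uses half_md4_transform() with a secret key rekeyed every five minutes and jiffies, not hashlib and a monotonic clock.

```python
# User-space sketch of the feedback design described above: hash
# state persists between invocations, a little per-call noise (pid
# plus a timestamp standing in for jiffies) is mixed in, and the
# result is fed back through the hash. Illustrative only -- the
# kernel's actual code uses half_md4_transform() and a periodically
# rekeyed secret, not SHA-256.
import hashlib
import os
import time

_state = b"\x00" * 32  # persists across calls, like the kernel's hash buffer

def get_random_int() -> int:
    global _state
    noise = os.getpid().to_bytes(4, "little") + \
            time.monotonic_ns().to_bytes(8, "little")
    # Feed the previous state back in along with fresh noise,
    # so outputs cannot be reconstructed from a single observation.
    _state = hashlib.sha256(_state + noise).digest()
    return int.from_bytes(_state[:4], "little")

print(get_random_int())
```

The key property is the feedback: even if an attacker learns one output, predicting the next requires knowing the accumulated hidden state, not just the visible noise inputs.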
But, half_md4_transform() does a reduced round version of the MD4 hash. MD4 itself is essentially "broken" in that collisions can be trivially generated, so a reduced round version is weaker still. That concerns Mackall:
Instead of using the MD4 variant or his original patch, Mackall suggested using SHA1 as a compromise of sorts. He did some benchmarks on the two and found that the SHA1 variant was about half as fast as the MD4 (.660us vs. .326us), which seemed like an acceptable tradeoff to him. Ingo Molnar disagreed, noting that it came to roughly 1% of the performance of fork() (which uses get_random_int() for the starting stack_canary value). He also wanted to know what threat model necessitated using SHA1. Mackall replied:
In fact, it's been known for over a decade that reduced-round MD4 such as ours is *not one way* and that preimages (aka our hidden state) can be found in less than an hour on a *mid-90s era PC*:
Torvalds, though, is unimpressed by Mackall's "only twice as long" argument: "In the kernel, we tend to never even talk about how many _times_ slower something is. We talk about cycles or small percentages". In his typical fashion, Torvalds also points out another flaw he sees in Mackall's argument:
In other words, YOUR WHOLE ARGUMENT IS TOTALLY INSANE. You talk about "cryptographically secure hashes" for some 8-bit value. Listen to yourself. At that point, any cryptographer will just ridicule you. There's no point in trying to break the randomness, because you'll be much better off just trying a lot of different values.
In the end, Torvalds's code (with a small addition by Molnar) went into the kernel without an "Acked-by" tag from Mackall, the random driver maintainer. While the uses of get_random_int() in today's kernel may not require that much in the way of randomness, it does seem wrong, at some level, to have a function of that name which doesn't really perform its job. But this change clearly alleviates much, perhaps all, of the ASLR problem that prompted it. That can only be a good thing.
New vulnerabilities
gnutls: multiple vulnerabilities
Package(s): gnutls
CVE #(s): CVE-2009-1415 CVE-2009-1416
Created: May 11, 2009    Updated: May 25, 2009

Description: From the CVE entries:

CVE-2009-1415: lib/pk-libgcrypt.c in libgnutls in GnuTLS before 2.6.6 does not properly handle invalid DSA signatures, which allows remote attackers to cause a denial of service (application crash) and possibly have unspecified other impact via a malformed DSA key that triggers a (1) free of an uninitialized pointer or (2) double free.

CVE-2009-1416: lib/gnutls_pk.c in libgnutls in GnuTLS 2.5.0 through 2.6.5 generates RSA keys stored in DSA structures, instead of the intended DSA keys, which might allow remote attackers to spoof signatures on certificates or have unspecified other impact by leveraging an invalid DSA key.
Alerts: |
|
kernel: denial of service
Package(s): linux-2.6
CVE #(s): CVE-2009-1336
Created: May 7, 2009
Updated: July 2, 2009
Description: The 2.6 kernel has a denial of service vulnerability. The National Vulnerability Database entry states: fs/nfs/client.c in the Linux kernel before 2.6.23 does not properly initialize a certain structure member that stores the maximum NFS filename length, which allows local users to cause a denial of service (OOPS) via a long filename, related to the encode_lookup function.
kernel: information disclosure
Package(s): kernel
CVE #(s): CVE-2009-0787
Created: May 7, 2009
Updated: May 13, 2009
Description: The kernel has an information disclosure vulnerability. The National Vulnerability Database entry states: The ecryptfs_write_metadata_to_contents function in the eCryptfs functionality in the Linux kernel 2.6.28 before 2.6.28.9 uses an incorrect size when writing kernel memory to an eCryptfs file header, which triggers an out-of-bounds read and allows local users to obtain portions of kernel memory.
ldns: buffer overflow
Package(s): ldns
CVE #(s): CVE-2009-1086
Created: May 7, 2009
Updated: May 13, 2009
Description: ldns has a heap-based buffer overflow vulnerability. From the Debian alert: Stefan Kaltenbrunner discovered that ldns, a library and set of utilities to facilitate DNS programming, did not correctly implement a buffer boundary check in its RR DNS record parser. This weakness could enable overflow of a heap buffer if a maliciously-crafted record is parsed, potentially allowing the execution of arbitrary code. The scope of compromise will vary with the context in which ldns is used, and could present either a local or remote attack vector.
libmodplug: buffer overflow
Package(s): libmodplug
CVE #(s): CVE-2009-1513
Created: May 8, 2009
Updated: December 4, 2009
Description: From the CVE entry: Buffer overflow in the PATinst function in src/load_pat.cpp in libmodplug before 0.8.7 allows user-assisted remote attackers to cause a denial of service and possibly execute arbitrary code via a long instrument name.
pango: denial of service
Package(s): pango1.0
CVE #(s): CVE-2009-1194
Created: May 8, 2009
Updated: February 16, 2010
Description: From the Ubuntu advisory: Will Drewry discovered that Pango incorrectly handled rendering text with long glyphstrings. If a user were tricked into displaying specially crafted data with applications linked against Pango, such as Firefox, an attacker could cause a denial of service or execute arbitrary code with privileges of the user invoking the program.
quagga: denial of service
Package(s): quagga
CVE #(s): CVE-2009-1572
Created: May 11, 2009
Updated: July 3, 2009
Description: From the Mandriva advisory: The BGP daemon (bgpd) in Quagga 0.99.11 and earlier allows remote attackers to cause a denial of service (crash) via an AS path containing ASN elements whose string representation is longer than expected, which triggers an assert error (CVE-2009-1572).
squirrelmail: multiple vulnerabilities
Package(s): squirrelmail
CVE #(s): CVE-2009-1578 CVE-2009-1579 CVE-2009-1580 CVE-2009-1581
Created: May 13, 2009
Updated: January 14, 2010
Description: The SquirrelMail 1.4.18 release fixes a number of vulnerabilities: "including a couple XSS exploits, a session fixation issue, and an obscure but dangerous server-side code execution hole."
zsh: buffer overflow
Package(s): zsh
CVE #(s): CVE-2009-1214 CVE-2009-1215
Created: May 7, 2009
Updated: May 13, 2009
Description: Zsh has a buffer overflow vulnerability. From the Mandriva alert: A stack-based buffer overflow was found in the zsh command interpreter. An attacker could use this flaw to cause a denial of service (zsh crash), when providing a specially-crafted string as input to the zsh shell.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.30-rc5, released by Linus on May 8. "Driver updates (SCSI being the bulk of it, but there are input layer, networking, DRI and MD changes too). Arch updates (mostly ARM "davinci" support, but some x86 and even alpha). And various random stuff (fairly big cifs update, but some smaller ocfs2 and xfs updates, and a fair amount of small one-liners all over)." See the long-format changelog for all the details.
The current stable 2.6 kernel is 2.6.29.3, released with a long list of fixes on May 8. 2.6.27.23 was released at the same time; as promised, updates for the 2.6.28 kernel have ended.
Kernel development news
Quotes of the week
In brief
Editor's note: it's no secret that far more happens on the kernel mailing lists than can ever be reported on this page. As a result, interesting discussions and developments often slip by without a mention here. This article is the beginning of an experimental attempt to improve that situation. The idea is to briefly mention important topics which have not, yet, been developed into a full Kernel Page article. Some items will be followups from previous discussions; others may foreshadow full articles to come.

The "In brief" article will probably not appear every week. But, if it works out, it should become a semi-regular feature filling out LWN's kernel coverage. Comments are welcome.
reflink(): the proposed reflink() system call was covered last week. Since then, there have been some followup postings. reflink() v2, posted on May 7, maintained the reflink-as-snapshot semantics. When asked about that decision, Joel Becker responded "reflink() is a snapshotting call, not a kitchen sink." It seemed like there was to be no comfort for those wanting reflink-as-copy semantics.
reflink() v4, posted on the 11th, changed that tune somewhat. In this version, a process which either (1) owns the target file, or (2) has sufficient capabilities will create a link which copies the original security information - reflink-as-snapshot, essentially. A process lacking ownership and privilege, but having read access to the target file, will get a reflink with "new file" security information - reflink-as-copy. The idea is to do the right thing in all situations, but some developers are now concerned about a system call which has different semantics for processes running as root. This conversation has a while to go yet.
devtmpfs was also covered last week. This patch, too, has been reposted; the resulting conversation, again, looks to go on for a while. The return of devfs was always going to be controversial; the first version, after all, inspired flame wars for years before being merged. The devtmpfs developers feel that they need this feature to provide distributions which boot quickly and reliably in a number of situations; others think that there are better solutions to the problem. There is no consensus on merging this code at this time, but it is worth noting that the discussion has slowly shifted away from general opposition and toward fixing problems with the code.
Wakelocks are back, but now the facility has been rebranded suspend block. The core idea is the same: it allows code in kernel or user space to keep the system from suspending for a brief period of time. The user-space API has changed; there is now a /dev/suspend_blocker device which provides a couple of ioctl() calls. Closing the device releases the block, eliminating a potential problem with the wakelock API where a failed process could leave a block in place indefinitely.
There has been relatively little discussion of the new code; either everybody is happy with it now, or nobody has really noticed the new posting yet.
Doctor, it HZ. Much of the kernel is now tickless and equipped with high-resolution timers. So, says Alok Kataria, there is really no need to run x86 systems with a 1ms clock tick anymore. Running with HZ=1000 measurably slows the execution of a CPU-bound loop. So why not lower it?
There are problems with a lower HZ value, though, many of which have, at their source, the same problem which makes HZ=1000 more expensive: the kernel is still not truly tickless. Yes, the periodic clock interrupt is turned off when the processor is idle. But, when the CPU is busy, the clock ticks away as usual. Making the system fully tickless is a harder job than just making the idle state tickless; among other things, it pretty much requires doing away with the jiffies variable and all that depends on it. But, until that happens, lowering HZ will have costs of its own.
Wu Fengguang has been trying for a while to extend /proc/kpageflags; his patch adds a great deal of information about the usage of memory in the system. One might think that adding more useful information would be uncontroversial, but Ingo Molnar continues to oppose its inclusion. Ingo does not like the interface or the fact that it lives in /proc; his preferred solution looks more like an extension to ftrace. More thought toward the creation of uniform instrumentation interfaces is probably a good idea, but the current /proc/kpageflags interface has proved useful. It's also an established kernel ABI, so it's not going away anytime soon. But whether /proc/kpageflags will be extended further remains to be seen.
Seccomp and sandboxing
Back in 2005, Andrea Arcangeli, mostly known for memory management work in those days, wandered into the security field with the "secure computing" (or "seccomp") feature. Seccomp was meant to support a side business of his which would enable owners of Linux systems to rent out their CPUs to people doing serious processing work. Allowing strangers to run arbitrary code is something that people tend to be nervous about; they require some pretty strong assurance that this code will not have general access to their systems.

Seccomp solves this problem by putting a strict sandbox around processes running code from others. A process running in seccomp mode is severely limited in what it can do; there are only four system calls - read(), write(), exit(), and sigreturn() - available to it. Attempts to call any other system call result in immediate termination of the process. The idea is that a control process could obtain the code to be run and load it into memory. After setting up its file descriptors appropriately, this process would call:
prctl(PR_SET_SECCOMP, 1);
to enable seccomp mode. Once straitjacketed in this way, it would jump into the guest code, knowing that no real harm could be done. The guest code can run in the CPU and communicate over the file descriptors given to it, but it has no other access to the system.
Andrea's CPUShare never quite took off, but seccomp remained in the kernel. Last February, when a security hole was found in the seccomp code, Linus wondered whether it was being used at all. It seems likely that there were, in fact, no users at that time, but there was one significant prospective user: Google.
Google is not looking to use seccomp to create a distributed computing network; one assumes that, by now, they have developed other solutions to that problem. Instead, Google is looking for secure ways to run plugins in its Chrome browser. The Chrome sandbox is described this way:
It seems that the Google developers thought that seccomp would make a good platform on which to create a "finished implementation" for Linux. Google developer Markus Gutschke said:
The downside is that the sandbox'd code needs to delegate execution of most of its system calls to a monitor process. This is slow and rather awkward. Although due to the magic of clone(), (almost) all system calls can in fact be serialized, sent to the monitor process, have their arguments safely inspected, and then executed on behalf of the sandbox'd process. Details are tedious but we believe they are solvable with current kernel APIs.
There is, however, the little problem that sandboxed code can usefully (and safely) invoke more than the four allowed system calls. That limitation can be worked around ("tedious details"), but performance suffers. What the Chrome developers would like is a more flexible way of specifying which system calls can be run directly by code inside the sandbox.
One suggestion that came out was to add a new "mode" to seccomp. The API was designed with the idea that different applications might have different security requirements; it includes a "mode" value which specifies the restrictions that should be put in place. Only the original mode has ever been implemented, but others can certainly be added. Creating a new mode which allowed the initiating process to specify which system calls would be allowed would make the facility more useful for situations like the Chrome sandbox.
Adam Langley (also of Google) has posted a patch which does just that. The new "mode 2" implementation accepts a bitmask describing which system calls are accessible. If one of those is prctl(), then the sandboxed code can further restrict its own system calls (but it cannot restore access to system calls which have been denied). All told, it looks like a reasonable solution which could make life easier for sandbox developers.
That said, this code may never be merged because the discussion has since moved on to other possibilities. Ingo Molnar, who has been arguing for the use of the ftrace framework in a number of situations, thinks that ftrace is a perfect fit for the Chrome sandbox problem as well. He might be right, but only for a version of ftrace which is not, yet, generally available.
Using ftrace for sandboxing may seem a little strange; a tracing framework is supposed to report on what is happening while perturbing the situation as little as possible. But ftrace has a couple of tools which may be useful in this situation. The system call tracer is there now, making it easy to hook into every system call made by a given process. In addition, the current development tree (perhaps destined for 2.6.31) includes an event filter mechanism which can be used to filter out events based on an arbitrary boolean expression. By using ftrace's event filters, the sandbox could go beyond just restricting system calls; it could also place limits on the arguments to those system calls. An example supplied by Ingo looks like this:
{ "sys_read",         "fd == 0" },
{ "sys_write",        "fd == 1" },
{ "sys_sigreturn",    "1" },
{ "sys_gettimeofday", "tz == NULL" },
These expressions implement something similar to mode 1 seccomp. But, additionally, read() is limited to the standard input and write() to the standard output. The sandboxed process is also allowed to call gettimeofday(), but it is not given access to the time zone information.
The expressions can be arbitrarily complex. They are also claimed to be very fast; Ingo claims that they are quicker than the evaluation of security module hooks. And, if straight system call filtering is not enough, arbitrary tracepoints can be placed elsewhere. All told, it does seem like a fairly general mechanism for restricting what a given process can do.
The problem cannot really be seen as solved yet, though. The event tracing code is very new and mostly unused so far. It is out of the mainline still, meaning that it could easily be a year or so until it shows up in kernels shipped by distributions. The code allowing this mechanism to be used to control execution is yet to be written. So Chrome will not have a sandbox based on anything other than mode 1 seccomp for some time (though the Chrome developers are also evaluating using SELinux for this purpose).
Beyond that, there are some real doubts about whether system call interception is the right way to sandbox a process. There are well-known difficulties with trying to verify parameters if they are stored in user space; a hostile process can attempt to change them between the execution of security checks and the actual use of the data. There are also interesting interactions between system calls and multiple ways to do a number of things, all of which can lead to a leaky sandbox. All of this has led James Morris to complain:
Ingo is not worried, though; he notes that the ability to place arbitrary tracepoints allows filtering at any spot, not just at system call entry. So the problems associated with system call interception are not necessarily an issue with the ftrace-based scheme. Beyond that, this is a specific sort of security problem:
This has the look of a discussion which will take some time to play out. There is sure to be opposition to turning the event filtering code into another in-kernel security policy language. It may turn out that the simple seccomp extension is more generally palatable. Or something completely different could come along. What is clear is that the sandboxing problem is hard; many smart people have tried to implement it in a number of different ways with varying levels of success. There is no assurance that the solution will be easier this time around.
TuxOnIce: in from the cold?
As flamewars go, the recent linux-kernel thread about TuxOnIce was pretty tame. Likely weary of heated discussions in the past, the participants mostly swore off the flames with a bid to work together on Linux hibernation (i.e. suspend to disk). But, there still seems to be an impediment to that collaboration. The long out-of-tree history for TuxOnIce, combined with lead developer Nigel Cunningham's inability or unwillingness to work with the community means that TuxOnIce could have a bumpy road into the kernel—if it ever gets there at all.
TuxOnIce, formerly known as suspend2 and swsusp2, is a longstanding out-of-tree solution for hibernation. It has an enthusiastic user community along with some features not available in swsusp, which is the current mainline hibernation code. Some of the advantages claimed by TuxOnIce are support for multiple swap devices or regular files as the suspend image destination, better performance via compressed images and other techniques, saving nearly all of the contents of memory including caches, etc. But its vocal users say that the biggest advantage is that TuxOnIce just works for many—some of whom cannot get the current mainline mechanisms to work.
Much of the recent mainline hibernation work, generally done by Rafael Wysocki and Pavel Machek, has focused on uswsusp, which moves the bulk of the suspend work to user space. So, the kernel already contains two mechanisms for doing hibernation, leaving no real chance for a third to be added.
There are clear disagreements about how much and which parts should be in the kernel versus in user space. Machek seems to think that nearly all of the task can be handled in user space, while Cunningham is in favor of the advantages—performance and being able to take advantage of in-kernel interfaces—of an all kernel approach. Wysocki is somewhere in the middle, outlining some of the advantages he sees in the in-kernel solution:
A bigger disconnect, though, is how to proceed. Cunningham would like to see TuxOnIce merged whole as a parallel alternative to swsusp, with an eye to eventually replacing and removing swsusp. Machek and Wysocki are not terribly interested in replacing swsusp; they would rather see incremental improvements—many coming from the TuxOnIce code—proposed and merged. On the one hand, Cunningham has an entire subsystem that he would like to see merged, while the swsusp folks have a subsystem—used by most distributions for hibernation—to maintain.
Cunningham recently posted an RFC for merging TuxOnIce "with a view to seeking to get it merged, perhaps in 2.6.31 or .32 (depending upon what needs work before it can be merged) and the willingness of those who matter". That was met with a somewhat heated reply by Machek. But Wysocki was quick to step in to try to avoid the flames:
After Cunningham agreed, the discussion turned to how to work together, which is where it seems to have hit an impasse. Wysocki and Cunningham, at least, see some clear advantages in the TuxOnIce code, but, contrary to Cunningham's wishes, having it merged wholesale is likely not in the cards. Cunningham describes his plan as follows:
Not surprisingly, Wysocki and Machek see things differently. Machek is not opposed to bringing some of TuxOnIce into the mainline: "If we are talking about improving mainline to allow tuxonice functionality... then yes, that sounds reasonable." Wysocki lays out an alternative plan that is much more in keeping with traditional kernel development strategies:
Which unfortunately I don't agree with.
I think we can get _one_ implementation out of the three, presumably keeping the user space interface that will keep the current s2disk binaries happy, by merging TuxOnIce code _gradually_. No "all at once" approach, please.
And by "merging" I mean _exactly_ that. Not adding new code and throwing away the old one.
But, as Cunningham continues pushing for help in getting TuxOnIce merged alongside swsusp, Wysocki points out that it requires a great deal of review to get a huge (10,000+ lines of code) set of patches accepted: "That would take lot of work and we'd also have to ask many other busy people to do a lot of work for us". Cunningham seems to be under the misapprehension that kernel hackers will be willing to merge a subsystem that duplicates another without a clear overriding reason. Easing what he sees as a necessary transition from swsusp to TuxOnIce is not likely to be that compelling.
It is clearly frustrating for Cunningham to have a working solution but be unable to get it into the kernel. But it is a direct result of working out of the tree and then trying to present a solution when the kernel has gone in a different direction. It is a common mistake that folks make when dealing with the kernel community. Ray Lee provides a nice answer to Cunningham's frustrations, which points to IBM's device mapper contribution that suffered from a similar reaction. Lee notes that Wysocki has offered extremely valuable assistance:
This way, the external TuxOnIce patch set shrinks and shrinks, until it's eventually gone, with all functionality merged into the kernel in one form or another.
Is your code better than uswsusp? Almost certainly. This isn't about that. This is about making your code better than what it is today, by going through the existing review-and-merge process.
At one point, Cunningham pointed to the SL*B memory allocators as an example of parallel implementations that are all available in the mainline. Various folks responded that memory allocators are fairly self-contained, unlike TuxOnIce. Furthermore, as Pekka Enberg notes: "Yes, so please don't make the same mistake we did. Once you have multiple implementations in the kernel, it's extremely hard to get rid of them."
There has been a bit of discussion about the technical aspects of the TuxOnIce patch, mostly centering on the way that it frees up memory to allow enough space to create a suspend image, while still adding the contents of that memory to the suspend image. By relying on existing kernel behavior, which is not necessarily guaranteed for the future, TuxOnIce can save nearly all of the memory contents, whereas swsusp dumps caches and the like to create enough memory to build the suspend image. That means that performance after a resume operation may be impacted as those caches are refilled. Overall, though, the main focus of the discussion has been the way forward; so far, there has been little progress on that front.
This is not the first time that TuxOnIce has gotten to this point. In its earlier guise as swsusp2, Cunningham made several attempts to get it into the mainline. In March of 2004, Andrew Morton asked that it be broken down into smaller, more easily digested, chunks. The same thing happened again near the end of 2004 when Cunningham proposed adding swsusp2 in one big code ball. It doesn't end there, either; between then and now the same request has been made again. At this point, one might guess that Cunningham simply isn't willing to do things that way.
There is a real danger that the TuxOnIce features that its users like could be lost—or remain out-of-tree—if something doesn't give. Either Cunningham has to recognize that the only plausible way to get TuxOnIce into the kernel is via the normal kernel development path, or someone else has to pick it up and start that process themselves. With no one (other than Cunningham) pushing for its inclusion, there simply is no other way for it to get into the mainline.
Which I/O controller is the fairest of them all?
An I/O controller is a system component intended to arbitrate access to block storage devices; it should ensure that different groups of processes get specific levels of access according to a policy defined by the system administrator. In other words, it prevents I/O-intensive processes from hogging the disk. This feature can be useful on just about any kind of system which experiences disk contention; it becomes a necessity on systems running a number of virtualized (or containerized) guests. At the moment, Linux lacks an I/O controller in the mainline kernel. There is, however, no shortage of options out there. This article will look at some of the I/O controller projects currently pushing for inclusion into the mainline.
For the purposes of this discussion, it may be helpful to refer to your editor's bad artwork, as seen on the right, for a simplistic look at how block I/O happens in a Linux system. At the top, we have several sources of I/O activity. Some requests come from the virtual memory layer, which is cleaning out dirty pages and trying to make room for new allocations. Others come from filesystem code, and others yet will originate directly from user space. It's worth noting that only user-space requests are handled in the context of the originating process; that creates complications that we'll get back to. Regardless of the source, I/O requests eventually find themselves at the block layer, represented by the large blue box in the diagram.
Within the block layer, I/O requests may first be handled by one or more virtual block drivers. These include the device mapper code, the MD RAID layer, etc. Eventually a (perhaps modified) request heads toward a physical device, but first it goes into the I/O scheduler, which tries to optimize I/O activity according to a policy of its own. The I/O scheduler works to minimize seeks on rotating storage, but it may also implement I/O priorities or other policy-related features. When it deems that the time is right, the I/O scheduler passes requests to the physical block driver, which eventually causes them to be executed by the hardware.
All of this is relevant because it is possible to hook an I/O controller into any level of this diagram - and the various controller developers have done exactly that. There are advantages and disadvantages to doing things at each layer, as we will see.
dm-ioband
The dm-ioband patch by Ryo Tsuruta (and others) operates at the virtual block driver layer. It implements a new device mapper target (called "ioband") which prioritizes requests passing through. The policy is a simple proportional weighting system; requests are divided up into groups, each of which gets bandwidth according to the weight assigned by the system administrator. Groups can be determined by user ID, group ID, process ID, or process group. Administration is done with the dmsetup tool.
dm-ioband works by assigning a pile of "tokens" to each group. If I/O traffic is low, the controller just stays out of the way. Once traffic gets high enough, though, it will charge each group for every I/O request on its way through. Once a group runs out of tokens, its I/O will be put onto a list where it will languish, unloved, while other groups continue to have their requests serviced. Once all groups which are actively generating I/O have exhausted their tokens, everybody gets a new set and the process starts anew.
The basic dm-ioband code has a couple of interesting limitations. One is that it does not use the control group mechanism, as would normally be expected for a resource controller. It also has a real problem with I/O operations initiated asynchronously by the kernel. In many cases - perhaps the majority of cases - I/O requests are created by kernel subsystems (memory management, for example) which are trying to free up resources and which are not executing in the context of any specific process. These requests do not have a readily-accessible return label saying who they belong to, so dm-ioband does not know how to account for them. So they run under the radar, substantially reducing the value of the whole I/O controller exercise.
The good news is that there's a solution to both problems in the form of the blkio-cgroup patch, also by Ryo. This patch interfaces between dm-ioband and the control group mechanism, allowing bandwidth control to be applied to arbitrary control groups. Unlike some other solutions, dm-ioband still does not use control groups for bandwidth control policy; control groups are really only used to define the groups of processes to operate on.
The other feature added by blkio-cgroup is a mechanism by which the owner of arbitrary I/O requests can be identified. To this end, it adds some fields to the array of page_cgroup structures in the kernel. This array is maintained by the memory usage controller subsystem; one can think of struct page_cgroup as a bunch of extra stuff added into struct page. Unlike the latter, though, struct page_cgroup is normally not used in the kernel's memory management hot paths, and it's generally out of sight, so people tend not to notice when it grows. But, there is one struct page_cgroup for every page of memory in the system, so this is a large array.
This array already has the means to identify the owner for any given page in the system. Or, at least, it will identify an owner; there's no real attempt to track multiple owners of shared pages. The blkio-cgroup patch adds some fields to this array to make it easy to identify which control group is associated with a given page. Given that, and given that block I/O requests include the address of the memory pages involved, it is not too hard to look up a control group to associate with each request. Modules like dm-ioband can then use this information to control the bandwidth used by all requests, not just those initiated directly from user space.
The advantages of dm-ioband include device-mapper integration (for those who use the device mapper), and a relatively small and well-contained code base - at least until blkio-cgroup is added into the mix. On the other hand, one must use the device mapper to use dm-ioband, and the scheduling decisions made there are unlikely to help the lower-level I/O scheduler implement its policy correctly. Finally, dm-ioband does not provide any sort of quality-of-service guarantees; it simply ensures that each group gets something close to a given percentage of the available I/O bandwidth.
io-throttle
The io-throttle patches by Andrea Righi take a different approach. This controller uses the control group mechanism from the outset, so all of the policy parameters are set via the control group virtual filesystem. The main parameter for each control group is the maximum bandwidth that group can consume; thus, io-throttle enforces absolute bandwidth numbers, rather than dividing up the available bandwidth proportionally as is done with dm-ioband. (Incidentally, both controllers can also place limits on the number of I/O operations rather than bandwidth). There is a "watermark" value; it sets a level of utilization below which throttling will not be performed. Each control group has its own watermark, so it is possible to specify that some groups are throttled before others.
Each control group is associated with a specific block device. If the administrator wants to set identical policies for three different devices, three control groups must still be created. But this approach does make it possible to set different policies for different devices.
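The administrative interaction described above goes through the control group virtual filesystem. The sketch below shows the general shape of that interaction; note that the attribute name (blockio.bandwidth-max) and the "device limit" value format are hypothetical stand-ins for illustration, not the actual io-throttle interface, and the cgroup mount point is a parameter so the mechanics can be seen without a patched kernel.

```python
import os

def set_bandwidth_limit(cgroup_root, group, device, bytes_per_sec):
    """Create a control group and write a per-device bandwidth cap.

    "blockio.bandwidth-max" and the "<device> <limit>" format are
    hypothetical stand-ins for the io-throttle interface; only the
    cgroup-filesystem mechanics are the point here.
    """
    group_dir = os.path.join(cgroup_root, group)
    os.makedirs(group_dir, exist_ok=True)
    with open(os.path.join(group_dir, "blockio.bandwidth-max"), "w") as f:
        f.write("%s %d\n" % (device, bytes_per_sec))

# On a real system cgroup_root would be a mounted cgroup hierarchy
# (e.g. /cgroup/blockio); any writable directory shows the shape.
```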
One of the more interesting design decisions with io-throttle is its placement in the I/O structure: it operates at the top, where I/O requests are initiated. This approach necessitates the placement of calls to cgroup_io_throttle() wherever block I/O requests might be created. So they show up in various parts of the memory management subsystem, in the filesystem readahead and writeback code, in the asynchronous I/O layer, and, of course, in the main block layer I/O submission code. This makes the io-throttle patch a bit more invasive than some others.
There is an advantage to doing throttling at this level, though: it allows io-throttle to slow down I/O by simply causing the submitting process to sleep for a while; this is generally preferable to filling memory with queued BIO structures. Sleeping is not always possible - it's considered poor form in large parts of the virtual memory subsystem, for example - so io-throttle still has to queue I/O requests at times.
The io-throttle code does not provide true quality of service, but it gets a little closer. If the system administrator does not over-subscribe the block device, then each group should be able to get the amount of bandwidth which has been allocated to it. This controller handles the problem of asynchronously-generated I/O requests in the same way dm-ioband does: it uses the blkio-cgroup code.
The advantages of the io-throttle approach include relatively simple code and the ability to throttle I/O by causing processes to sleep. On the down side, operating at the I/O creation level means that hooks must be placed into a number of kernel subsystems - and maintained over time. Throttling I/O at this level may also interfere with I/O priority policies implemented at the I/O scheduler level.
io-controller
Both dm-ioband and io-throttle suffer from a significant problem: they can defeat the policies (such as I/O priority) being implemented by the I/O scheduler. Given that a bandwidth control module is, for all practical purposes, an I/O scheduler in its own right, one might think that it would make sense to do bandwidth control at the I/O scheduler level. The io-controller patches by Vivek Goyal do just that.
Io-controller provides a conceptually simple, control-group-based mechanism. Each control group is given a weight which determines its access to I/O bandwidth. Control groups are not bound to specific devices in io-controller, so the same weights apply for access to every device in the system. Once a process has been placed within a control group, it will have bandwidth allocated out of that group's weight, with no further intervention needed - at least, for any block device which uses one of the standard I/O schedulers.
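The "no further intervention" model can be sketched in the same cgroup-filesystem terms. Here "io.weight" is a hypothetical attribute name standing in for whatever io-controller actually exposes, while "tasks" is the standard cgroup-v1 file for placing a process into a group; again the mount point is parameterized so the idea is visible without the patch applied.

```python
import os

def place_process(cgroup_root, group, weight, pid):
    """Give a control group an I/O weight and move a process into it.

    "io.weight" is a hypothetical attribute name for illustration;
    "tasks" is the usual cgroup file for assigning processes.
    """
    group_dir = os.path.join(cgroup_root, group)
    os.makedirs(group_dir, exist_ok=True)
    with open(os.path.join(group_dir, "io.weight"), "w") as f:
        f.write("%d\n" % weight)
    with open(os.path.join(group_dir, "tasks"), "a") as f:
        f.write("%d\n" % pid)

# Once the PID is in the group, all of its I/O - on every device -
# is scheduled out of the group's weight, with no per-device setup.
```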
The io-controller code has been designed to work with all of the mainline I/O schedulers: CFQ, Deadline, Anticipatory, and no-op. Making that work requires significant changes to those schedulers; they all need to have a hierarchical, fair-scheduling mechanism to implement the bandwidth allocation policy. The CFQ scheduler already has a single level of fair scheduling, but the io-controller code needs a second level. Essentially, one level implements the current CFQ fair queuing algorithm - including I/O priorities - while the other applies the group bandwidth limits. What this means is that bandwidth limits can be applied in a way which does not distort the other I/O scheduling decisions made by CFQ. The other I/O schedulers lack multiple queues (even at a single level), so the io-controller patch needs to add them.
Vivek's patch starts by stripping the current multi-queue code out of CFQ, adding multiple levels to it, and making it part of the generic elevator code. That allows all of the I/O schedulers to make use of it with (relatively) little code churn. The CFQ code shrinks considerably, but the other schedulers do not grow much. Vivek, too, solves the asynchronous request problem with the blkio-cgroup code.
This approach has the clear advantage of performing bandwidth throttling in ways consistent with the other policies implemented by the I/O scheduler. It is well contained, in that it does not require the placement of hooks in other parts of the kernel, and it does not require the use of the device mapper. On the other hand, it is by far the largest of the bandwidth controller patches, it cannot implement different policies for different devices, and it doesn't yet work reliably with all I/O schedulers.
Choosing one
The proliferation of bandwidth controllers has been seen as a problem for at least the last year. There is no interest in merging multiple controllers, so, at some point, it will become necessary to pick one of them to put into the mainline. It has been hoped that the various developers involved would get together and settle on one implementation, but that has not yet happened, leading Andrew Morton to proclaim recently:
Seriously, how are we to resolve this? We could lock me in a room and come back in 15 days, but there's no reason to believe that I'd emerge with the best answer.
At the Storage and Filesystem Workshop in April, the storage track participants appear to have been leaning heavily toward a solution at the I/O scheduler level - and, thus, io-controller. The cynical among us might be tempted to point out that Vivek was in the room, while the developers of the competing offerings were not. But such people should also ask why an I/O scheduling problem should be solved at any other level.
In any case, the developers of dm-ioband and io-throttle have not stopped their work since this workshop was held, and the wider kernel community has not yet made a decision in this area. So the picture remains only slightly less murky than before. About the only clear area of consensus would appear to be the use of blkio-cgroup for the tracking of asynchronously-generated requests. For the rest, the locked-room solution may yet prove necessary.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
Looking forward to Fedora 11
The Fedora 11 Preview release became available April 28th, right on schedule. There is one release candidate planned (sort of) before the final version of Fedora 11 "Leonidas" (slogan "Reign") which is due on May 26, 2009. Let's take a look at some of the highlights in this release.
The release notes tout several major features, including automatic font and mime-type installation and Intel, ATI and Nvidia kernel mode setting. Also, the Nouveau drivers are now the default for Nvidia chipsets. A new fingerprint feature makes fingerprint readers easy to use. The IBus input method system has been rewritten in C and is the new default input method for Asian languages. It should be noted that not everyone is happy with the new unified volume control.
Some other features in this release include the Virt Improved Console, which allows the virtual guest console to default to at least 1024x768 resolution out of the box. MinGW, the Windows cross-compiler, makes its debut. Also, the ext4 filesystem becomes the default for new installations, although the boot partition will still need to be something more old-fashioned, like ext3. Btrfs will also be available for testing.
The feature list shows that all the planned features are complete and all packages have been rebuilt with GCC 4.4. Fedora 11 targets a 20-second startup time and improved power management.
In the package management space, RPM has been updated to v4.7 and the presto plugin for yum adds support for downloading deltarpms. Deltarpms may also be used to generate new packages.
Besides MinGW there are some new and improved development tools. Archer is a gdb development branch focusing on better C++ support which also includes Python scripting capabilities. Linux Tools, OProfile, and Valgrind integration have been added to the Eclipse IDE profiling tools. NetBeans IDE 6.5 is a significant update over v6.1 in F10. Python 2.6 is now the default for Python programmers.
TigerVNC (Virtual Network Computing) is now the default for both client and server. We've already mentioned the Virt Improved Console with better screen resolution for clients. Also, for new installations of F11 you'll have the ability to use other interface devices in the virtual guest, such as a USB tablet.
Available desktops include Xfce 4.6, KDE 4.2 and GNOME 2.26. There are plenty of interesting applications too. The ABRT (Automatic Bug Reporting Tool) helps non-power users with bug reporting. OpenChange provides native access to Microsoft Exchange. Also shipped are Thunderbird 3, and Firefox 3.1.
There are some security improvements as well. The System Security Services Daemon provides a set of daemons to manage access to remote directories and authentication mechanisms. sVirt integrates SELinux with the Fedora virtualization stack to allow Mandatory Access Control (MAC) security to be applied to guest virtual machines. Support for hashes stronger than MD5 and SHA-1 will be available. DBusPolicy has been tweaked to increase the security settings of DBus. DNSSEC (DNS SECurity) provides a mechanism to prove the integrity and authenticity of DNS data.
Fedora 11 supports PowerPC, pSeries and Cell Broadband Engine machines and Sony PlayStation 3, in addition to x86 and x86_64. 32-bit x86 systems will be built for i586 by default. The plan was to install an x86_64 kernel on compatible hardware, even if a 32-bit system is installed. That feature didn't make it in though, so all 32-bit installs will have a 32-bit kernel, regardless of the underlying hardware. The PAE kernel will be used on 32-bit hardware, where appropriate.
For those who want a minimal install, try the text-mode installation option. This streamlined install now omits the more complicated steps that were previously part of the process, and provides you with an uncluttered and straightforward experience. Package selection is now automated in text mode, providing just enough to ensure that the system is operational at the end of the installation process, ready to install updates and new packages.
There are some changes in some of the spins (specialized Fedora installs). Mathematics has freefem++, a finite element analysis package which has been updated to 3.0. In Chemistry there is gabedit, a GUI for a number of computational chemistry packages. The Fedora Electronic Lab (FEL) is Fedora's high-end hardware design and simulation platform. The Perl modules included for F11 bring a new methodology for FEL and extend vhdl and verilog support.
Amateur radio operators also have their own spin, including version 3.10 of fldigi, a digital modem program. Version 1.9 of xfhell includes some improvements in handling the PTT line and additional flexibility in adjusting window sizes, as well as some bug fixes. soundmodem is now back in Fedora 11. soundmodem 0.10 provides a way to use your sound card as a modem for digital applications such as AX.25. HamFax 0.54 is new in F11 as is wxapt, a console application for decoding and saving weather images transmitted in the APT format of NOAA and METEOR satellites.
See the Custom Spins wiki site for more available spins.
Whether your needs are large or small, general or specialized, Fedora 11 brings the latest applications together in one great distribution.
Distribution News
Fedora
Frields: Is this your stop?
Fedora project leader Paul Frields has a blog posting about the upcoming Fedora repository switch for Fedora 11. In it, he points to some helpful advice from Jesse Keating on how to remain on rawhide or move to Fedora 11. "Some people are trying a test release, liking it, and wanting to get off the Rawhide train for Fedora 11. Others want to stay on the train past the junction and be around for the inevitable jumble of falling luggage and bruised elbows when the floodgates open with all-new development heralding the Fedora 12 development cycle. In either case, you'll want to be aware of how the junction works."
Deltarpms back in for Fedora 11
The on again, off again status of deltarpms in Fedora 11 has changed again. As reported on Josh Boyer's blog, various technical problems have been dealt with and deltarpms will be available for Fedora 11. "Oh, yeah. That's right. What you see there is indeed deltarpms for the first Fedora 11 updates push. So Paul, you can un-edit your blog post now because we should be ready to go for Fedora 11 GA. We'll probably still have a few hiccups here and there, but the infrastructure is now in place."
The Fedora Directory Server project is now called 389
The Fedora Directory Server Project is now called "389". "We're still in the process of rebranding, re-skinning the web site, etc. In the coming weeks you will see new packages with the 389 branding. Everything else is the same - the team, our mission, only the name has changed. We apologize if this change is disconcerting to some of you, we thank you for your support, and we hope to continue to make the 389 project a success."
Fedora Board Public IRC meeting, 2009-05-05
Click below for a summary of the May 5, 2009 public meeting of the Fedora Advisory Board. Topics include export restrictions and PPC as a primary or secondary architecture.
Slackware Linux
Slackware has KDE 4.2.3 and new xz compression
The Slackware-current changelog for May 8, 2009 mentions the availability of KDE 4.2.3. It also marks a departure from gzip. "Hello folks! This batch of updates includes the newly released KDE 4.2.3, but more noticeably it marks the first departure from the use of gzip for compressing Slackware packages. Instead, we will be using xz, based on the LZMA compression algorithm. xz offers better compression than even bzip2, but still offers good extraction performance (about 3 times better than bzip2 and not much slower than gzip in our testing). Since support for bzip2 has long been requested, support for bzip2 and the original lzma format has also been added (why not?), but this is purely in the interest of completeness -- we think most people will probably want to use either the original .tgz or the new .txz compression wrappers."
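The trade-off Slackware describes can be seen from Python's standard library, which wraps the same algorithm families (zlib/gzip, bzip2, and LZMA, the algorithm behind xz). The snippet below only illustrates the size comparison on redundant sample data; ratios and speeds on real package trees will of course differ.

```python
import bz2
import gzip
import lzma

# Redundant sample data; package payloads also compress well.
data = b"Slackware package payloads tend to be highly redundant. " * 2000

sizes = {
    "gzip": len(gzip.compress(data)),
    "bzip2": len(bz2.compress(data)),
    "xz": len(lzma.compress(data)),
}

# All three round-trip losslessly; xz typically wins on size.
assert lzma.decompress(lzma.compress(data)) == data
print(sizes)
```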
New Distributions
Kongoni GNU Linux
Kongoni GNU Linux takes its name from the Shona word for Gnu (also known as a Wildebeest). The name represents the spirit and history of Kongoni, a GNU/Linux operating system of African origin. Kongoni 1.12.2-alpha, based on Slackware 12.2 with significant inspiration from the BSD-Unix systems, has been released.
This release includes the first versions of several Kongoni-specific tools, including P.I.G. (Ports Installation GUI), which provides a simple graphical tool for installing, managing, and even creating software ports, and K.I.S.S. (Kongoni Instant Setup System), which provides a simple and highly extensible interface for common configuration tasks. The installer has had several notable improvements, making it not only easier to use but also more stable and flexible, with support added for installing on ReiserFS file-systems (ext4 support is planned for a future release).
Distribution Newsletters
DistroWatch Weekly, Issue 302
The DistroWatch Weekly for May 11, 2009 is out. "With mobile computing being the next operating system battleground, it's hardly surprising that many industry players are focusing on these increasingly popular devices. One of the most promising among them, Moblin, has been through some major changes recently, both in terms of ownership and development goals. Read our feature story for the roundup of its recent past and probable future to learn more about the project. In the news section, Debian ditches the GNU C Library in favour of the more flexible Embedded GLIBC, Fedora finalises all features for the upcoming Leonidas release which includes delta support for RPMs, Slackware switches to packages compressed with LZMA compression mechanism, and the Ubuntu community looks to create yet another derivative based on the LXDE. Finally, don't miss our tips and trick section which provides a step-by-step guide of upgrading a stable Mandriva Linux 2009.1 to the latest Cooker, Mandriva's bleeding-edge development branch. Happy reading!"
Fedora Weekly News #175
The Fedora Weekly News for the week ending May 10, 2009 is out. "In a small sample of this information-packed issue Announcements points to the "Fedora 11 Bug Blocker Review Day", PlanetFedora explores the relationship between cooking popcorn and releasing software, Ambassadors reports that Fedora is a star not only in Trenton,NJ but also in Jaipur, India. QualityAssurance covers the proposal to drop the production of Alpha releases by Fedora 12 and the "Fedora Bug Workflow". Developments quivers with "Presto A-Go-Go!" Translation takes a look at the "Long Release Notes". Artwork examines "Banners, Posters and T-shirts". The WebComic crowns Leonidas. SecurityAdvisories is short and sweet. Virtualization reports on "Experimental Dom0 on Fedora 11"."
The Mint Newsletter - issue 83
This issue of the Mint Newsletter covers the release of Linux Mint 7 "Gloria" RC1 and more.
openSUSE Weekly News, issue 71
This issue of the openSUSE Weekly News covers openSUSE Community Week, Jan-Simon Möller: GSoC Introduction openSUSE @ ARM, Katarina Machalkova: Secret AutoYaST feature :), Jigish Gohil: openSUSE in Education, Linux Magazine: KDE 4.3: First Widget for Social Desktop, and more.
Ubuntu Weekly Newsletter #141
The Ubuntu Weekly Newsletter for May 9, 2009 is out. "In this issue we cover: Monthly BugSquad Meeting: May 12th, Jaunty Jackalope Release parties, What's a build score, then?, byobu 2.0 released, In The Press and Blogosphere, Ubuntu Podcast #27, Meeting Summaries of Technical Board and Ubuntu Server Teams, and much, much more!"
Newsletters and articles of interest
Ubuntu is the Linux Usability Leader (LinuxPlanet)
Bruce Byfield discusses the usability of Ubuntu on LinuxPlanet. "With the first release in 2004, Ubuntu established itself as one of the most user-friendly GNU/Linux distributions available. Since then, each release has reaffirmed this reputation, although recent versions have coasted a little. However, with the supposedly improved notifications system in the recently-released Jaunty Jackalope (aka 9.04), Ubuntu unintentionally raises a new issue in usability -- that is, whether a distribution can or should set the usability agenda by itself?"
The Perfect Server - Mandriva 2009.1 Free (x86_64) [ISPConfig 2]
HowtoForge sets up a server using Mandriva's 2009 Spring edition. "This tutorial shows how to set up a Mandriva 2009.1 Free (x86_64) server that offers all services needed by ISPs and hosters: Apache web server (SSL-capable), Postfix mail server with SMTP-AUTH and TLS, BIND DNS server, Proftpd FTP server, MySQL server, Dovecot POP3/IMAP, Quota, Firewall, etc. In the end you should have a system that works reliably, and if you like you can install the free webhosting control panel ISPConfig 2 (i.e., ISPConfig runs on it out of the box). This tutorial is written for the 64-bit version of Mandriva 2009.1."
Interviews
Interview with Kubuntu developer Jonathan Thomas (Kubuntu-de.org)
Kubuntu-de.org has an interview with Kubuntu developer Jonathan Thomas. "kubuntu-de.org: We'll come back to becoming a MOTU later, now it is time to talk about Kubuntu 9.04 the "Jaunty Jackalope". How was the release cycle? Have there been special problems? Jonathan: I'd describe this release cycle as intense. We've been able to include a lot of great, updated software this cycle that include some neat features and polishes existing features at the same time. Unfortunately, this cycle the graphics drivers for Intel video cards have been a bit more problematic than in the past. This is about the biggest problem I've seen with Kubuntu 9.04 so far, and I'd recommend that users who have Intel cards testdrive the live cd for a bit before deciding whether to upgrade or not. Fortunately, the nVidia drivers have gotten better, with performance being far more acceptable in KDE4 and Firefox."
Distribution reviews
What to expect from Fedora 11 (/bin/bash)
The weblog /bin/bash has a review of Fedora 11. "This release has got me more excited than 10. The features as the wiki says it "dwarfs any other release". It looks very promising and the future for Fedora seems brighter. It is definitely a brilliant milestone after 10 releases."
Page editor: Rebecca Sobol
Development
The KDE Social Desktop's first appearance
The KDE Social Desktop was first proposed in Frank Karlitschek's keynote speech at the 2008 Akademy in Belgium. However, it has only recently received widespread attention with the announcement of its first step: a desktop plasmoid scheduled for inclusion in KDE 4.3. The resulting publicity has left as many online commenters praising the concept as criticizing it, with both sides having the potential to improve subsequent development on the project.
The KDE Social Desktop should not be confused with the countless other efforts on every platform to integrate general social network tools more tightly into the desktop. Nor should it be confused with the semantic desktop, the ongoing efforts to add a data layer based on Nepomuk to the KDE desktop for annotating and tracking information. Instead, the Social Desktop is specifically an effort to bring the advantages of the KDE community to the desktop.
As described in the PDF of Karlitschek's slideshow from the 2008 Akademy, the Social Desktop began with his observation that the KDE project has a long history of community-oriented sites such as the now-discontinued KDE-Look.org and the ongoing KDE-Apps.org. He particularly emphasized his own highly-successful meta-portal openDesktop.org, which now receives some 60 million page impressions per month and boasts over 100,000 users.
Yet, despite this ability to create active communities, Karlitschek noted, KDE has only a fraction of the general desktop market. Arguing that this organized community is something that Windows lacks, he proposed increasing KDE's market share by making these communities accessible directly from the desktop with what he calls "Open Collaboration Services". When finished, these services will include such elements as links to developers in the About window of applications, and others to allow users to become fans of an application and follow its development. Similarly, online help could include links to people with specific problems, and hardware dialogs could include links to those with the same hardware. In the same way, to "welcome new users into the family," as Karlitschek put it, an applet could show nearby KDE users and events in order to help newbies join the community more easily.
The recently announced openDesktop plasmoid is a simple tool in itself, more a proof of concept than anything astonishingly different. It is a client for the Open Collaboration Services API being developed by Sebastian Kügler. It also draws as necessary upon a new geo-location engine that can use either a GPS device connected to the computer, or estimate location based on IP address.
You can download and compile the source code for the plasmoid and the engines it depends upon, but, if you are simply curious, you will learn almost as much with far less effort by watching the Flash video by Sebastian Kügler that was part of the announcement.
To use the plasmoid, you must register with openDesktop.org. It opens on your openDesktop.org user page, and other tabs show friends and nearby users who are currently logged into openDesktop.org, any of whom you can contact. In the setup, you can also decide whether to publish your location, or preserve your privacy by not revealing it. All in all, the plasmoid resembles nothing so much as a dedicated IRC application. However, as limited as it is, the plasmoid marks the accomplishment of the first stage of Karlitschek's plans.
In a comment that accompanies the announcement, Kügler wrote, "The goal is not to write a desktop Facebook client, and I'm not aware of anyone working on this right now. That doesn't mean that it wouldn't be welcome (it certainly is if that's what you want to work on). I'm aware that DigiKam does integrate with Facebook to upload photos."
However, Karlitschek emphasizes that the effort is intended to create a specifically KDE community. "Please see this only as a first step," he wrote, suggesting that the social desktop could eventually be used for "Free Software Events, Knowledge Base and user Support, document sharing, location-based features and more. We try to create something really new and innovative here."
The next step, if things go according to Karlitschek's original presentation, will be to encourage KDE sub-projects like Akonadi, the KDE Personal Information Manager, and Decibel, the desktop communications framework, to add support for the Social Desktop to their back ends. That will be followed by integration of the Social Desktop into specific KDE applications. However, if the slowness of integration for Nepomuk is any indication, these steps may take several releases to accomplish, assuming that they succeed at all. Probably, too, they will be as much a matter of diplomacy as of coding, as the Social Desktop developers persuade others of the usefulness of their project.
Reactions to the Social Desktop
At least half the reactions to the announcement of the Social Desktop are positive, but offer little except encouragement for the concept. For observers of the free desktop's evolution, the questions posed by the negative comments seem more thoughtful, regardless of their validity.
For example, when the announcement was linked on Slashdot, a poster called speedtux questioned the need for the project. "The 'social desktop' is already here," speedtux wrote. "It consists of web sites, site specific browsers, instant messenger apps, feed readers, desktop notification, and widgets. Some people also still use local mail, calendar, and address book apps. What is KDE trying to contribute to that? Even more heavy-weight local apps and new protocols? How are they going to keep up with the rapidly evolving set of protocols and features available through web apps? And why bother?"
In another comment on the Slashdot discussion, the same poster compared the project to the long-retired CDE graphical interface, arguing that "KDE is repeating the CDE mistake: instead of focusing on what people need right now and doing a really good job at it, KDE is trying to realize some long term pie-in-the-sky technical visions of its developers that no user asked for."
However, perhaps the most fruitful reaction appeared on the KDE Plasma-devel mailing list, where Richard Dale pointed out that at least some of the concepts being developed for the Social Desktop overlapped with ones that already existed in Nepomuk's semantic desktop. In particular, Dale suggested that a number of ontologies (that is, higher level organizational concepts for information) such as FOAF (Friend of a Friend) might be as applicable to the social desktop as to the semantic one.
To his credit, Karlitschek promptly welcomed the idea of working with the semantic desktop, a move that might ultimately reduce the amount of new code that the Social Desktop adds to KDE, as well as saving development time. Perhaps, too, it will prove easier to have both the Social Desktop and the semantic one accepted together, rather than separately.
Is the Social Desktop needed? Could it be what KDE needs to increase its market share? These questions cannot be answered yet, largely because the details of where the KDE Social Desktop will go from here are still unannounced.
However, it may be that the next stages of the project will be stronger for both the enthusiasm and the probing criticism. The enthusiasm may expand the concept into general social networking, while the criticism, if nothing else, will remind the project of the need to allow users to opt out and suggest further collaboration possibilities. In the end, the Social Desktop may be more successful because of its implementation in stages.
System Applications
Database Software
PostgreSQL Weekly News
The May 9, 2009 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.
SQLite release 3.6.14 announced
Release 3.6.14 of the SQLite DBMS has been announced. "Changes associated with this release include the following: * Added the optional asynchronous VFS module. * Enhanced the query optimizer so that virtual tables are able to make use of OR and IN operators in the WHERE clause. * Speed improvements in the btree and pager layers. * Added the SQLITE_HAVE_ISNAN compile-time option which will cause the isnan() function from the standard math library to be used instead of SQLite's own home-brew NaN checker. * Countless minor bug fixes, documentation improvements, new and improved test cases, and code simplifications and cleanups."
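The optimizer item above concerns OR and IN operators in WHERE clauses against virtual tables. As a plain illustration of that query shape (using an ordinary table via Python's bundled sqlite3 module, since a real virtual table requires a module implementation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, genre TEXT)")
conn.executemany("INSERT INTO tracks (genre) VALUES (?)",
                 [("jazz",), ("rock",), ("folk",), ("metal",)])

# OR and IN in the WHERE clause - the shape of query that 3.6.14
# teaches the optimizer to push down into virtual tables.
rows = conn.execute(
    "SELECT id FROM tracks WHERE genre IN ('jazz', 'folk') OR id = 2"
    " ORDER BY id"
).fetchall()
print(rows)  # -> [(1,), (2,), (3,)]
```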
SQLObject 0.9.10 announced
Version 0.9.10 of SQLObject, an object-relational database mapper, has been announced. "I'm pleased to announce version 0.9.10, a minor bugfix release of 0.9 branch of SQLObject."
SQLObject 0.10.5 announced
Version 0.10.5 of SQLObject, an object-relational database mapper, has been announced. "I'm pleased to announce version 0.10.5, a minor bugfix release of 0.10 branch of SQLObject."
sqlparse 0.1.1 released
Version 0.1.1 of sqlparse has been announced; it includes bug fixes and other improvements. "sqlparse is a non-validating SQL parser module for Python. The module provides functions for splitting, formatting and parsing SQL statements. Please file bug reports and feature request on the issue tracker."
Filesystem Utilities
TestDisk and PhotoRec 6.11.3 announced
Version 6.11.3 of TestDisk and PhotoRec, a set of disk recovery utilities, has been announced. "This new stable release version fixes - the EXIF parser used by PhotoRec when Jpeg and Tiff files are found - TestDisk EFI GPT partition backup".
Interoperability
SyncEvolution releases version 0.9 beta 1
SyncEvolution, a personal information management synchronization tool, has released the first beta of version 0.9. Fairly big changes are afoot as it moves to the Synthesis SyncML engine (recently released under the LGPL v2.1) and has been adopted by the Moblin project. Lead developer Patrick Ohly describes these and other changes in the release announcement. "The goal is to continue with SyncEvolution and Synthesis not just as open source, but also as open projects, with as much communication on public channels as possible. This is just getting started, so bear with us (and kindly remind us!) while we figure out how to do this properly." Click below for the full announcement.
Mail Software
Pyzor 0.5 released
Version 0.5 of Pyzor, a collaborative, networked system to detect and block spam using identifying digests of messages, has been announced. "With this release, we have aimed to resolve all the outstanding reported bugs and incorporate submitted patches (many of which are also from some time ago). The hope is that this, along with the recent improvements to the public Pyzor server, revitalises the Pyzor project."
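The "identifying digest" idea can be illustrated in a few lines: normalize a message body so that trivial variations disappear, then hash it, so that many copies of the same spam map to one digest that a server can count reports against. This is a simplified stand-in for illustration, not Pyzor's actual digest algorithm, which selects particular lines of the body and applies its own normalization rules.

```python
import hashlib
import re

def message_digest(body):
    """Collapse whitespace and case, then hash - a toy version of the
    digest idea; real Pyzor normalizes and samples the body differently."""
    normalized = re.sub(r"\s+", " ", body).strip().lower()
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

# Two copies of the same spam with cosmetic differences collapse
# to the same digest, which is what makes server-side counting work.
a = message_digest("BUY   NOW!!\nLimited offer")
b = message_digest("buy now!! limited offer")
assert a == b
```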
Web Site Development
Django 1.1 status update
The Django web platform project has posted a version 1.1 status update: "It's well past time for a quick update on the status of Django 1.1: Anyone who's been following our development process and can read a calendar will probably have noticed that we've missed our originally-targeted ship date of April 13th. So we're now about a month behind. The reason for the schedule slip is pretty typical for most software projects: we reached the target date with a number of bugs still open. Putting out buggy code on time simply isn't an option, so we've been working to get these final issues closed before we ship any code."
Miscellaneous
Jopr 2.2 has been released
Version 2.2 of Jopr has been announced; it adds some new functionality. "Jopr is a management platform for everything from the OS level load and network metrics through common databases to application servers and projects. The system includes support for monitoring and/or managing Apache httpd, Apache Tomcat, JBoss Application Server, PostgreSQL, and other popular open source projects. Jopr runs on PostgreSQL 8.2.4+ and Oracle as backend databases and is written in Java."
Python process utility (psutil) 0.1.2 released
Version 0.1.2 of the Python process utility (psutil) has been announced; a number of new capabilities have been added. "psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager. It currently supports Linux, OS X, FreeBSD and Windows."
Zenoss: 2.4 is Now Available (SourceForge)
Version 2.4 of Zenoss has been announced. "Zenoss Core is an enterprise network and systems management application written in Python/Zope. Zenoss provides an integrated product for monitoring availability, performance, events and configuration across layers and across platforms. We are proud to announce the release of Zenoss 2.4. The latest Zenoss version was developed in conjunction with our community of more than 40,000 members who provided product input, monitoring extensions, patches and beta testing."
Desktop Applications
Audio Applications
Audacious 2.0-beta1 released
Version 2.0-beta1 of Audacious has been announced. "Audacious is an advanced audio player. It is free, lightweight, based on GTK2, runs on Linux and many other *nix platforms and is focused on audio quality and supporting a wide range of audio codecs. Its advanced audio playback engine is considerably more powerful than GStreamer. Audacious is a fork of Beep Media Player (BMP), which itself forked from XMMS."
Desktop Environments
GNOME 2.27.1 released
Version 2.27.1 of the GNOME desktop has been announced. "Today marks the beginning of our trip towards 2.28, with the first development release of this cycle. It's also our first release after our git migration and it seems we survived, yay!"
Libglade officially deprecated in favor of GtkBuilder
libglade has been deprecated. "The GNOME Release team has officially deprecated libglade in favor of GtkBuilder. Some reasons: * GtkBuilder is actively maintained. * GtkBuilder can create non-widgets (like treemodels). * It's one less library."
GNOME Software Announcements
The following new GNOME software has been announced this week:
- GParted 0.4.5 (new features, bug fixes and translation work)
- librep 0.17.4 (bug fixes and documentation work)
- PyGoocanvas 0.14.1 (bug fixes and git work)
- Tomboy 0.14.2 (bug fixes, documentation and translation work)
- Vala 0.7.2 (new features and bug fixes)
KDE 4.3 Beta 1 released
Version 4.3 Beta 1 of KDE has been announced. "Highlights of KDE 4.3 are... - Integration of many new technologies, such as PolicyKit and Geolocation services - New Window animation effects, a more usable Run Command popup and many new and improved addons in Plasma - Many bugfixes and improvements across all applications and more integration of features coming with the KDE 4 platform"
Qt moves to a public repository
The Qt toolkit now lives in a public, git-based repository. "Launching a public repository is a big milestone for us in Qt Software, as it allows us to work closer with contributors, strengthens the link to the community, and gives that warm and fuzzy feeling of working with open source. Granted, our releases have been open source, but our development model has not." Among other things, the requirement for written copyright assignments has been eliminated.
KDE Software Announcements
The following new KDE software has been announced this week:
- Audex 0.71b1 (new features, bug fixes and translation work)
- cb2Bib 1.2.3 (new features and bug fixes)
- Kipi-Plugins 0.3.0 (unspecified)
- KPhotoAlbum 4.0 (KDE 4 port)
- Kraft 0.32 (new features)
- libkdcraw 0.1.9 (unspecified)
- luckyBackup 0.3 (proposed for KDE integration)
- Minimum Profit 5.1.2 (new features and bug fixes)
- Qsynth 0.3.4 (new features, bug fixes and translation work)
- QTrans 0.2.1.8 (new feature)
- rkward 0.5.0d (bug fixes)
- servicemenu-encfs 0.1 (initial release)
- SMILE 0.9.8 (bug fixes)
- Yakuake 2.9.5 (new features and bug fixes)
Xorg Software Announcements
The following new Xorg software has been announced this week:
- xf86-input-vmmouse 12.6.4 (new features, bug fixes and documentation work)
- xf86-video-cirrus 1.3.0 (new features, bug fixes and documentation work)
- xf86-video-geode 2.11.2 (build fixes and bug fixes)
- xf86-video-intel 2.7.1 (bug fixes)
- xinput 1.4.2 (new features and bug fixes)
- xorg-server 1.6.1.901 (new features and bug fixes)
Desktop Publishing
Inforama: Document Automation System 1.2 released (SourceForge)
Version 1.2 of Inforama has been announced. "Document templates, generation and distribution. Create letter templates using OpenOffice and import existing Acrobat forms. Merge data to produce high quality PDF documents and automatically email, print and view. Inforama is a Java based Document Automation system which allows document templates to be created quickly and easily using OpenOffice."
Electronics
Covered: Stable release 0.7.1 now available (SourceForge)
Version 0.7.1 of Covered has been announced. "Covered is a Verilog code coverage utility using VCD/LXT dumpfiles (or VPI simulation interface) and the design to generate line, toggle, memory, combinational logic, FSM state/arc and assertion coverage report metrics viewable via GUI or ASCII format. See package notes for details."
Whirlygig GPL Hardware RNG
The Whirlygig random number generator project has been launched. "Whirlygig is a USB 1.1 device that contains a fast, high quality hardware random number generator. Via a Linux driver, each whirlygig you connect makes available an additional 7Mbits of high quality randomness a second, or 750-850KBytes/sec sustained using the standard /dev/hw_random API. Current status: Prototype is fully working, waiting for PCBs to be fabricated."
Games
Svencoop 4.06 Update
Version 4.1 of SvenCoop, a Half-Life Modification, has been announced. "Team SvenCoop is happy to announce the latest version of Svencoop has been released to the public; Upon being alerted of some exploit / crash issues, the v4.1 patch was pushed even further ahead and is now known as v4.05."
Interoperability
Wine 1.1.21 announced
Version 1.1.21 of Wine has been announced. Changes include: "- Beginnings of shader model 4 support. - Support for copying/pasting images from X11 applications. - A number of GDIPlus improvements. - Various listview fixes. - 64-bit support in winemaker. - Support for building on Mac OS X Snow Leopard. - Various bug fixes."
Mail Clients
SquirrelMail 1.4.18 released
Version 1.4.18 of SquirrelMail has been announced. "The most notable changes for this version are several security fixes, including a couple of XSS exploits, a session fixation issue, and an obscure but dangerous server-side code execution hole. However, this version also includes three new languages and more than a few enhancements to things such as the filters plugin, the address book system and other things under the hood."
Medical Applications
GNUmed 0.4.4 released (LinuxMedNews)
LinuxMedNews reports on the release of the GNUmed version 0.4.4 electronic medical record system. "GNUmed EMR for medical offices has been updated to version 0.4.4. Fixes include reenabled path sanity check that fell off when fixing Windows and a fix that makes recent notes in SOAP plugin copy-able for pasting. A new Live-CD has been released as well."
Office Suites
OpenOffice.org 3.1 released
The OpenOffice.org 3.1 release is out. "The biggest single change (half a million lines of code!) and the most visible is the major revamp of OpenOffice.org on-screen graphics. Techies call it anti-aliasing - users just appreciate how much crisper graphics are on screen." There's a lot more; see the OOo 3.1 new features page for the list.
OpenOffice.org Newsletter
The April, 2009 edition of the OpenOffice.org Newsletter is out with the latest OO.o office suite articles and events.
Languages and Tools
Caml
Caml Weekly News
The May 12, 2009 edition of the Caml Weekly News is out with new articles about the Caml language.
Java
JUnique: 1.0.1 (SourceForge)
Version 1.0.1 of JUnique has been announced. "The JUnique library can be used to prevent a user to run at the same time more instances of the same Java application. JUnique implements locks and communication channels shared between all the JVM instances launched by the same user."
PHP
PHP 5.3.0RC2 announced
Version 5.3.0RC2 of PHP has been announced. "This RC focuses on bug fixes and stability improvements, and we hope only minimal changes are required for the next candidate (RC3). Expect an RC3 in 2-3 weeks, although there will not be major changes so now is a good time to start the final testing of PHP 5.3.0 before it gets released, in order to find possible incompatibilities with your project."
Python
CodeInvestigator 0.11.2 released
Version 0.11.2 of CodeInvestigator, a tracing tool for Python programs, has been announced. "There was a major bug introduced in Python 2.6 that affected everyone on 2.6: Comment lines crashed the generate process. Input entry was taking too long in some cases. Special characters were not stored with the default [ASCII] encoding."
ftputil 2.4.1 released
Version 2.4.1 of ftputil has been announced; this release adds a number of bug fixes. "ftputil is a high-level FTP client library for the Python programming language. ftputil implements a virtual file system for accessing FTP servers, that is, it can generate file-like objects for remote files. The library supports many functions similar to those in the os, os.path and shutil modules. ftputil has convenience functions for conditional uploads and downloads, and handles FTP clients and servers in different timezones."
Jython 2.5.0 Release Candidate 1 is out
Release Candidate 1 of Jython 2.5.0, a Java implementation of Python, has been released. "It contains bug fixes and polish since the last beta. One especially nice bit of polish is that JLine (http://jline.sourceforge.net) is enabled by default now, and so using up and down arrows should work out of the box. If no major bugs are found this release will get re-labeled and released as the production version of 2.5.0." Release Candidate 2 of Jython 2.5.0 was released a short time later to address a bug.
Python 3.1 beta 1 released
Version 3.1 beta 1 of Python has been announced. "Python 3.1 focuses on the stabilization and optimization of features and changes Python 3.0 introduced. For example, the new I/O system has been rewritten in C for speed. File system APIs that use unicode strings now handle paths with undecodable bytes in them. Other features include an ordered dictionary implementation and support for ttk Tile in Tkinter."
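The ordered dictionary mentioned in the announcement is the new collections.OrderedDict class. A minimal sketch of its behavior, using only the standard library, shows how it differs from a plain dict of the time:

```python
from collections import OrderedDict

# A regular dict in Python 3.1 makes no guarantees about iteration
# order; OrderedDict remembers the order in which keys were first
# inserted.
d = OrderedDict()
d["banana"] = 3
d["apple"] = 1
d["pear"] = 2

# Keys come back in insertion order, not in sorted or hash order.
assert list(d.keys()) == ["banana", "apple", "pear"]

# Re-assigning an existing key keeps its original position...
d["apple"] = 5
assert list(d.keys()) == ["banana", "apple", "pear"]

# ...while deleting and re-inserting a key moves it to the end.
del d["banana"]
d["banana"] = 3
assert list(d.keys()) == ["apple", "pear", "banana"]
```

Equality between two OrderedDict instances is also order-sensitive, which makes the type useful anywhere the sequence of entries carries meaning (configuration files, serialization, and so on).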
SfePy 2009.2 released
Version 2009.2 of SfePy has been announced. "SfePy (simple finite elements in Python) is a software, distributed under the BSD license, for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages."
Tcl/Tk
Tcl-URL! - weekly Tcl news and links (May 6)
The May 6, 2009 edition of the Tcl-URL! is online with new Tcl/Tk articles and resources.
XML
pyxser 1.0R announced
Version 1.0R of pyxser has been announced. "I'm pleased to announce pyxser-1.0R, a Python-Object to XML serializer and deserializer. This package is completely written in C and licensed under LGPLv3."
Libraries
libsynthesis SyncML data synchronization engine released
Synthesis AG has announced the release of the SyncML synchronization engine under the LGPL (v2+) license. "libsynthesis is a complete implementation of the SyncML DS standard, including advanced functionality like filtering, suspend & resume, flexible data formats and much more. Unlike other SyncML libraries, it not only abstracts the SyncML protocol, but also provides converters for all data formats used in SyncML and allows direct mapping to database tables such as SQLite3." The Moblin project has already picked up the code for its distribution.
Test Suites
Linux Desktop Testing Project 1.6.0 released
Version 1.6.0 of the Linux Desktop Testing Project (LDTP) has been released. "This release features a number of important breakthroughs in LDTP as well as in the field of Test Automation. This release note covers a brief introduction on LDTP followed by the list of new features and major bug fixes which makes this new version of LDTP the best of the breed. Useful references have been included at the end of this article for those who wish to hack / use LDTP."
Version Control
GIT 1.6.3 released
Version 1.6.3 of the GIT distributed version control system has been announced. "With the next major release, "git push" into a branch that is currently checked out will be refused by default. You can choose what should happen upon such a push by setting the configuration variable receive.denyCurrentBranch in the receiving repository. To ease the transition plan, the receiving repository of such a push running this release will issue a big warning when the configuration variable is missing."
GIT 1.6.3.1 released
Version 1.6.3.1 of the GIT distributed version control system has been announced. "Embarrassingly, 1.6.3 has a rather grave regression when you switch to a new branch while you have some changes added to the index. A commit you make from that index will record a wrong tree. Please consider this a hotfix and do not use vanilla 1.6.3."
monotone 0.44 released
Version 0.44 of the monotone distributed version control system has been announced. "This is a maintenance release which fixes a couple of bugs and regressions from 0.43 and earlier versions."
qgit 2.3 released
Version 2.3 of qgit has been announced. "QGit is a git history viewer with a good bunch of related features. See handbook (press F1 key), for a detailed list of features and how to use them. This is mainly a maintenance release with some nice novelties such as a new graph look. You can browse through shortlog with contributors credits directly from http://git.kernel.org/?p=qgit/qgit4.git;a=summary"
Page editor: Forrest Cook
Linux in the news
Recommended Reading
Trademarks: The Hidden Menace (The Washington Post)
In an article syndicated from PC World, The Washington Post looks at trademarks for free software. While the article is a bit over the top, it does make several good points about the conflict between freedom and trademarks. It also ignores a legitimate threat that trademarks can reduce: trojaned versions of popular software distributed as the original. "To be fair, at the time of the creation of the Enterprise distro, Red Hat also created the Fedora project to encourage the creation of an entirely unrestricted Linux distro. Novell did the same with the openSUSE project. But I can't help feeling that this was a way of paying-off the community -- throwing meat to the wolves, so they don't bite. With community projects to chew-on, people are less likely to bring-up troubling trademarking or redistribution issues. It seems to have worked too."
Trade Shows and Conferences
Linux Audio Update (Linux Journal)
Dave Phillips continues his outstanding coverage of Linux audio in an article over at Linux Journal. On tap in this column is a quick look at the Linux Audio Conference, held last month in Parma, Italy, as well as updates and new features in multiple audio and audio-related software projects. "This conference is a key "meeting of the minds" for Linux audio developers and users. Represented projects included Csound, the Q programming language, Pure Data (Pd), netjack and many others. Former conferences have been characterized by an abundance of fine conversation, music, food and wine, and reports indicate that organizer Fons Adriaensen maintained the tradition in excellent style. Judging from the quality of the papers I've read and the videos I've watched, LAC2009 appears to have been another successful event."
Companies
Enea Launches Embedded Linux jump start solution (SOA World)
SOA World reports on Enea's new Embedded Linux Project Framework. "Enea, one of the industry's leading suppliers of embedded Linux expertise, announces an innovative offering to jump start development of embedded Linux projects where performance and unique requirements are of central importance. The Enea Embedded Linux Project Framework (ELPF) is an entirely new approach that offers the core components, tools and services that are common to virtually all Embedded Linux projects in a single, one-stop package. Additional packages and components that are not widely used are available as required." LinuxDevices analyzes ELPF in more detail.
Interviews
Interview with Greg Dekoenigsberg - Red Hat Community Architect (LinuxQuestions)
LinuxQuestions talks with Greg Dekoenigsberg. "Shortly after we announced the split between Red Hat Enterprise Linux and Fedora, the job of "community manager" came open. At the time, it was largely an evangelist role, but I saw a lot of opportunities in the job. At the time, we'd made a promise -- to give the community a larger role in the development of Fedora -- that I thought we hadn't yet fulfilled. So I took the job, with the goal of helping Red Hat keep its promise to make Fedora a truly community-driven distribution."
Interview with Edward Hervey about the PiTiVI video editor (GnomeDesktop)
GnomeDesktop talks with Edward Hervey about the PiTiVi video editor. "There are many goals for PiTiVi, but I'd say the fundamental goal from which all other goals derive is to be a video editor framework without any limitations (unlike all other editors that have got very specific limitations to what they can do or support). Getting rid of the limitations of formats, devices, filters,... we can support is brought to us through the use of the GStreamer multimedia framework. All other editors have hardcoded this, whereas we can for example be proud in being the only Free editor not tied to any patent-encumbered libraries!"
The smallest unit of freedom: a Fellow (Fellowship of FSFE)
Stian Rødven Eide has interviewed Timo Jyrinki for the Fellowship of Free Software Foundation Europe. "In addition to being the friendly media face of Wikipedia Finland, the team contact for Ubuntu Finland and founder of local advocacy project Vapaa Suomi (Libre Finland), Timo Jyrinki has been involved as an active developer and translator for a wide range of Free Software organisations such as FSFE, Debian, GNOME and Openmoko. He has worked on computer graphics for much of his life, with a particular interest in human-computer interaction, and spends a lot of his current time making improvements to embedded systems. I had a lovely interview with Timo, in which he shared his thoughts on user interfaces, the Free Software situation in Finland and how businesses should let the community lead."
Resources
A Few Facts As Antidote Against Microsoft's anti-ODF FUD Campaign (Groklaw)
Groklaw has some facts about ODF (Open Document Format). "The best antidote against FUD is facts. FUD only works when people don't know any better. So, given some recent anti-ODF FUD in the air, I thought it would be useful to provide some facts. First, I'd like to show you who voted Yes to approve OpenDocument v1.1 as an OASIS Standard in January of 2007. ODF v1.2 is already being adopted by some now, of course, as development has continued, but Microsoft chose to stick with v1.1, so let's do the same. I think you'll find the list dispositive as to who is sincere in this picture. Next time you read some criticism of ODF, then, you can just take a look at the list and ask yourself what it tells you."
Reviews
Canonical aims for the cloud with new Ubuntu One (ars technica)
Ars technica tries out the Ubuntu One beta service. "At the current stage of development, the primary feature of Ubuntu One is file synchronization. The client software creates an Ubuntu One folder in the user's home directory and will keep the contents of this folder synchronized across multiple computers. The software will detect when files are modified on the local filesystem and will upload the changes to the Web service, which will then propagate the data to the rest of the user's computers."
Page editor: Forrest Cook
Announcements
Non-Commercial announcements
Call to contribution for the '2020 FLOSS roadmap'
The 2020 FLOSS roadmap is a collaborative roadmap with predictions on how the free / libre / open source software ecosystem could evolve over the next ten years. The first version was published last December, but the call for contributions for version 2 is currently open.
The FSF's new free software activist internship program
The Free Software Foundation has announced a new internship program. "The program provides opportunities for participants to work closely with FSF staff members for twelve-week terms in core areas of the FSF's work, including campaign and community organizing, free software licensing, systems and network administration, GNU project support, and web development." Applications for the first set of (unpaid) positions are due by May 25.
Gnash version 9.0 summer project seeks donations
The Open Media Now organization is seeking donations for the Gnash summer v9 project. "The Gnash free flash player development team is setting its sights on getting to version 9.0 by the end of the summer and is launching a project later this month to ensure that they meet their goal. The project, known as the Gnash V9 Summer Bash will engage student interns to hammer through a number of ActionScript3 (AS3) Class Libraries that are critical to v9 and v10 functionality. The success of the project will result in Gnash compatibility with a number of high-demand websites -- including educational, major media, and other popular sites."
Linux Fund to raise money for Gnash OpenStreetMap support
The Linux Fund recently announced an effort to improve OpenStreetMap support in the Gnash flash player. "Linux Fund has expanded its partnership with Sandro Santilli of the Gnash media player team to bring OpenStreetMap editing support to this open source Flash player. This work will also improve YouTube compatibility and joins Linux Fund's existing effort to bring the Real Time Messaging Protocol support to Gnash. "I am excited about having Gnash fully support OpenStreetMap because together these projects can really demonstrate what amazing things can be done with free software." says Rob Savoye of the Open Media Now! Foundation."
Commercial announcements
Intel and Nokia announce open source telephony project (oFono)
Intel and Nokia are teaming up for yet another open source telephony effort: oFono. How this fits with Maemo is not exactly clear from the announcement, but the authors of the announcement are Marcel Holtmann, Intel Open Source Technology Center and Aki Niemi, Nokia Devices R&D, Maemo Software. "oFono.org is a place to bring developers together around designing an infrastructure for building mobile telephony (GSM/UMTS) applications. oFono.org is licensed under GPLv2, and it includes a high-level D-Bus API for use by telephony applications of any license. oFono.org also includes a low-level plug-in API for integrating with Open Source as well as third party telephony stacks, cellular modems and storage back-ends." Click below for the full announcement.
MontaVista announces MontaVista Linux 6
MontaVista has released the MontaVista Linux 6 Embedded Linux development environment. "By delivering Market Specific Distributions, combined with the new MontaVista Integration Platform, MontaVista gives commercial device developers unparalleled flexibility to design and deliver products uniquely tailored for their target market." Linux Devices takes a look at the support for new SoC devices in MontaVista Linux 6.
rPath announces a new CEO
rPath has announced the hiring of Mike Torto as its new CEO. "At rPath, we've been very focused on lining up the pieces to support the next stage of our growth. First, we laid out the future of application deployment and system maintenance; and we launched rBuilder 5, the solution for automating the packaging, deployment and maintenance of applications as complete and self-contained systems that are ready to run in any traditional, virtual or cloud-based environment. The last piece of the puzzle is a CEO to scale the business and accelerate our growth in enterprise markets. Mike Torto was a perfect fit for this role."
New Books
Audacity Compact published in English
The book Audacity Compact by Markus Priemer is now available in English as well as German.
Resources
FSFE Newsletter
The April, 2009 edition of the FSFE Newsletter is online with the latest Free Software Foundation Europe news. Topics include:
1. Second European Licensing and Legal Workshop for Free Software, 23-24 April
2. FSFE at WIPO's 3rd Session of the Committee on Development and Intellectual Property
3. FSFE amicus brief to European Patent Office on Software Patents
4. The Fellowship interviews: Myriam Schweingruber
5. FSFE welcomes Thomas Jensch, new intern for Zurich office
6. Fellowship vote for GA seats - the election is ongoing
7. Renewal of Fellowship services
8. PDFreaders.org enjoys continued success
9. FSFE invades Austria
R6xx/R7xx 3D programming guide released
Full support for ATI graphics chipsets has been long in coming, but it just got a little closer: AMD has now released the 3D programming guide for the R6xx and R7xx chipsets. It's available (in PDF format) from AMD or X.org.
Contests and Awards
Awards As Far As The Eye Can See (Linux Journal)
Linux Journal reports on the upcoming OSCON awards; nominations are open until May 22. "There are awards, and then there are awards. The Oscars may hold audiences captive for a night, but the Nobel Prize is an award forever. The Open Source community has its share of awards as well, and award season would appear to be upon us, as two of the largest have opened nominations. The O'Reilly Open Source Conference convening for the first time in its new home in the Bay Area will be home not just to one of the über-awards, but to both O'Reilly's own Google-O'Reilly Open Source Awards and the SourceForge.net Community Choice Awards."
KPhotoAlbum competition to create a showcase video (KDEDot)
KDE.News has announced a video competition. "Today the KPhotoAlbum team has launched a competition to create the coolest showcase video for the new KDE 4 version of KPhotoAlbum. Besides fame and glory, participants also have the chance to win $100 for the coolest video. Back in 2005 Jesper K. Pedersen (also known as blackie@kde.org to many people) created 4 small videos showing the major features of KPhotoAlbum. Back then recording such a video took a long time, and included among other things audio editing tools, fake VNC servers etc. Today, it is as simple as running recordmydesktop."
Upcoming Events
Black Hat - Like No Other Information Security Conference
The Black Hat USA Training event will be held in Las Vegas, Nevada on July 25-28, 2009, and the Black Hat Briefings USA will follow on July 29-30.
Get Ready for openSUSE Community Week
openSUSE Community Week starts soon. "The first openSUSE Community Week is just around the corner. May 11 through May 17 we'll be hosting live sessions in IRC to help grow the openSUSE Community. Community week is all about helping new contributors get started with openSUSE and getting existing contributors together to mentor new contributors, and working together on major projects. We'll be hosting a week of IRC tutorials, Q&A's, and jam sessions on a number of topics."
Announcing PHP TestFest 2009
The 2009 PHP TestFest has been announced; it takes place from April to June 2009. "TestFest is upon us once again. For those who don't know, this is the time of year where User Groups and individuals donate a little of their time and effort to increasing the test coverage of PHP."
Events: May 21, 2009 to July 20, 2009
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
May 19 - May 22 | PGCon PostgreSQL Conference | Ottawa, Canada |
May 19 - May 22 | php|tek 2009 | Chicago, IL, USA |
May 19 - May 21 | Where 2.0 Conference | San Jose, CA, USA |
May 19 - May 22 | SEaCURE.it | Villasimius, Italy |
May 21 | 7th WhyFLOSS Conference Madrid 09 | Madrid, Spain |
May 22 - May 23 | eLiberatica - The Benefits of Open Source and Free Technologies | Bucharest, Romania |
May 23 - May 24 | LayerOne Security Conference | Anaheim, CA, USA |
May 25 - May 29 | Ubuntu Developers Summit - Karmic Koala | Barcelona, Spain |
May 27 - May 28 | EUSecWest 2009 | London, UK |
May 28 | Canberra LUG Monthly meeting - May 2009 | Canberra, Australia |
May 29 - May 31 | Mozilla Maemo Mer Danish Weekend | Copenhagen, Denmark |
May 31 - June 3 | Techno Security 2009 | Myrtle Beach, SC, USA |
June 1 - June 5 | Python Bootcamp with Dave Beazley | Atlanta, GA, USA |
June 2 - June 4 | SOA in Healthcare Conference | Chicago, IL, USA |
June 3 - June 5 | LinuxDays 2009 | Geneva, Switzerland |
June 3 - June 4 | Nordic Meet on Nagios 2009 | Stockholm, Sweden |
June 6 | PgDay Junín 2009 | Buenos Aires, Argentina |
June 8 - June 12 | Ruby on Rails Bootcamp with Charles B. Quinn | Atlanta, GA, USA |
June 10 - June 11 | FreedomHEC Taipei | Taipei, Taiwan |
June 11 - June 12 | ShakaCon Security Conference | Honolulu, HI, USA |
June 12 - June 13 | III Conferenza Italiana sul Software Libero | Bologna, Italy |
June 12 - June 14 | Writing Open Source: The Conference | Owen Sound, Canada |
June 13 | SouthEast LinuxFest | Clemson, SC, USA |
June 14 - June 19 | 2009 USENIX Annual Technical Conference | San Diego, USA |
June 17 - June 19 | Open Source Bridge | Portland, OR, USA |
June 17 - June 19 | Conference on Cyber Warfare | Tallinn, Estonia |
June 20 - June 26 | Beginning iPhone for Commuters | New York, USA |
June 22 - June 24 | Velocity 2009 | San Jose, CA, USA |
June 22 - June 24 | YAPC|10 | Pittsburgh, PA, USA |
June 24 - June 27 | LinuxTag 2009 | Berlin, Germany |
June 24 - June 27 | 10th International Free Software Forum | Porto Alegre, Brazil |
June 26 - June 28 | Fedora Users and Developers Conference - Berlin | Berlin, Germany |
June 26 - June 30 | Hacker Space Festival 2009 | Seine, France |
June 28 - July 4 | EuroPython 2009 | Birmingham, UK |
June 29 - June 30 | Open Source China World 2009 | Beijing, China |
July 1 - July 3 | OSPERT 2009 | Dublin, Ireland |
July 1 - July 3 | ICOODB 2009 | Zurich, Switzerland |
July 2 - July 5 | ToorCamp 2009 | Moses Lake, WA, USA |
July 3 - July 11 | Gran Canaria Desktop Summit (GUADEC/Akademy) | Gran Canaria, Spain |
July 3 | PHP'n Rio 09 | Rio de Janeiro, Brazil |
July 4 | Open Tech 2009 | London, UK |
July 6 - July 10 | Python African Tour : Sénégal | Dakar, Sénégal |
July 7 - July 11 | Libre Software Meeting | Nantes, France |
July 13 - July 17 | (Montreal) Linux Symposium | Montreal, Canada |
July 15 - July 17 | Kernel Conference Australia 2009 | Brisbane, Queensland, Australia |
July 15 - July 16 | NIT Agartala FOSS and GNU/Linux fest | Agartala, India |
July 18 - July 19 | Community Leadership Summit | San Jose, CA, USA |
July 19 - July 20 | Open Video Conference | New York City, USA |
July 19 | pgDay San Jose | San Jose, CA, USA |
If your event does not appear here, please tell us about it.
Web sites
Linux.com relaunched
The Linux Foundation has announced the relaunch of Linux.com, which it acquired recently. "Linux.com is designed to mirror the Linux community process by hosting a collaborative framework where users and developers can connect and increase the collective Linux knowledge and resources for new and advanced users alike. The site is the central source for informed Linux information, software and documentation covering the server, desktop, mobile, and embedded areas."
Page editor: Forrest Cook