
LWN.net Weekly Edition for January 21, 2010

LCA: How to destroy your community

By Jonathan Corbet
January 18, 2010
Josh Berkus is well known as a PostgreSQL hacker, but, as it happens, he also picked up some valuable experience during his stint at "The Laboratory for the Destruction of Communities," otherwise known as Sun Microsystems. That experience has been distilled into a "patented ten-step method" on how to free a project of unwelcome community involvement. Josh's energetic linux.conf.au presentation on this topic was the first talk in the "business of open source" miniconf; it was well received by an enthusiastic crowd.

[Josh Berkus] If you are a corporate developer, you're likely to realize early on that free software development communities are a pain. They'll mess up your marketing schemes by, for example, taking the software into countries where you have no presence and no plans. They'll interfere with product roadmaps with unexpected innovation, adding features which you had not planned for the next few years - or, worse, features which were planned for a proprietary version. Free software communities are never satisfied; they are always trying to improve things. They tend to redefine partner and customer relationships, confusing your sales people beyond any help. And they bug you all the time: sending email, expecting you to attend conferences, and so on. Fortunately, there are ways to get rid of this community menace. All that's needed are the following ten steps.

1: Make the project depend as much as possible on difficult tools. He noted that most companies have no real trouble employing this technique, since it makes good use of the tools they have around anyway. Community-resistant projects should, for example, use weird build systems not found anywhere else. A proprietary version control system is mandatory. Even better are issue trackers with limited numbers of licenses, forcing everybody to use the same account. It's also important to set up an official web site which is down as often as it's up. It's not enough to have no web site at all; in such situations, the community has an irritating habit of creating sites of its own. But a flaky site can forestall the creation of those sites, ensuring that information is hard to find.

2: Encourage the presence of poisonous people and maximize the damage that they can create. There is a special technique to the management of these people which goes something like this:

  1. Take pains to argue with these people at length and to denounce them on the project lists.

  2. Eventually, they should be banned from the community by fiat; it's important to avoid any sort of community process here.

  3. The banned people will take their flames elsewhere. Follow them and continue to argue with them in those external sites.

  4. Eventually the community will complain about this behavior; respond by letting the poisonous people back in. Then go back to step 1 and do it all over again.

Properly managed, one effective poisonous person, according to Josh, can wipe out a community of hundreds.

3: Provide no documentation. There should be no useful information about the code, build methods, the patch submission process, the release process, or anything else. Then, when people ask for help, tell them to RTFM.

4: Project decisions should be made in closed-door meetings. An OK start is to have online meetings on very short notice, though, for best effect, they should be at a time which is inconvenient in the time zones where most community members are to be found. Better is to have meetings via conference call: that excludes about a third of the planet due to sleep requirements and, for extra value, also excludes a number of people who are at work but might have been able to participate in an online meeting. Best, though, is to hold meetings in person at the corporate headquarters.

5: Employ large amounts of legalese. Working with the project should involve complex contributor agreements, web site content licensing, non-disclosure agreements, trademark licenses, and so on. For full effect, these documents should all be changed without notice every couple of months or so.

6: The community liaison must be chosen carefully. The optimal choice is somebody reclusive - somebody who has no friends and really doesn't like people at all. Failing that, go with the busiest person on the staff - somebody with both development and management responsibilities, and who is already working at least 70 hours per week. It's important, in this case, to not remove any of this person's other responsibilities when adding the liaison duty. It can also be effective to go with somebody who is unfamiliar with the technology; get a Java person to be the liaison for a Perl-based project. Or, if all else fails, just leave the position vacant for months at a time.

7: Governance obfuscation. Community-averse corporations, Josh says, should learn from the United Nations and create lengthy, complicated processes. Keep the decision-making powers unclear; this is an effective way to turn contributors into poisonous people. Needless to say, the rules should be difficult or impossible to change.

[Josh Berkus] 8: Screw around with licensing. Community members tend to care a lot about licenses, so changing the licensing can be a good way to make them go elsewhere. Even better is to talk a lot about license changes without actually changing anything; that will drive away contributors who like the current license without attracting anybody who might like the alleged new license.

9: Do not allow anybody outside the company to have commit access, ever. There should be a rule (undocumented, of course) that only employees can have commit rights. Respond evasively to queries - "legal issues, we're working on it" is a good one. For especially strong effect, pick an employee who writes no code and make them a committer on the project.

10: Silence. Don't answer queries, don't say anything. A company which masters this technique may not need any of the others; it is the most effective community destroyer of them all.

Josh concluded by noting that he saw all of these techniques employed to great effect by Sun. But Sun is far from alone in this regard. Josh has been told by a veteran of the X Consortium that they, too, had made good use of all ten methods at one point or another. Community-destroying skills are widespread in the industry.

But what if you have a different kind of company, one which wants to encourage and build communities? Doing the opposite of all of the above clearly makes a lot of sense. But, Josh said, it all really comes down to trust. A relationship with a development community is like a marriage: one can spend years building it up, but one affair will kill the trust it is based on. Similarly, a company can lose half of its community in a weekend. Those who would avoid that fate must trust their communities and act in a way which will cause that trust to be returned.

Comments (57 posted)

Getting things done in Linux

January 20, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

If your New Year's Resolution includes something along the lines of "be better organized," choosing a task manager might be in order. Linux doesn't lack for task managers, but good ones are few and far between. To help LWN readers boost productivity, we've picked a few to look at.

Tasque

[Tasque]

Tasque has the advantage of being very simple. It also has the disadvantage of being very simple. It's a good task system for users who want a fairly quick and easy way to manage tasks without the overhead of a system like Getting Things Done (GTD).

Tasque is a very simple task list that syncs with several backends, or just with a local file. One of the main "selling points" of Tasque, aside from simplicity, is being able to use it with multiple data stores. Tasque works with Remember the Milk, the Evolution Data Server, local files, and others.

[Preferences]

Task entry and maintenance with Tasque is easy, and it integrates very well with the GNOME desktop. But Tasque seemed a bit laggy when used in conjunction with Remember the Milk, though it was very responsive when working with a local data file. Whether the lag was due to RTM or Tasque is unclear. However, over more than a week's time, Tasque was very nearly unusable each time we tried syncing with RTM.

Tasque is shipped by default in openSUSE for GNOME users, and is available for Fedora and in the Ubuntu Universe repository. Source is available via the Tasque pages on the GNOME Web site. It might be a bit too simple for some users, though, which brings us to Tracks.

Tracks: Doing Things Properly

Tracks has the advantage of being not only open source, but also Web-based and available anywhere. Tracks is another tool meant to implement David Allen's Getting Things Done methodology, though it's also suitable for users who don't subscribe to that particular methodology.

[Export options]

Tracks is Rails-based, and not packaged for most major distros. It is available as a BitNami Stack for users who don't have a Web server handy or don't feel like setting up a Rails environment from scratch on their desktop. The BitNami Tracks package features a GUI and command-line installer, and requires very little effort.

While Tracks is more heavyweight than Tasque, it's also much more full-featured and capable. Tracks allows users to not only track individual to-dos, but also projects. Since it takes a cue from GTD, it also provides users with a way to track contexts (such as things you do while at the computer, phone calls, errands) and projects via iCal, plain text files, and RSS feeds.

Tracks is sort of multi-user. That is, an instance of Tracks can support multiple users, but each user has her or his own set of actions. It's not suitable for group projects, as there's no way for one user to share projects with another or to assign actions to another user.

Tracks is ideal for productivity junkies who want to measure progress, or for consultants or employees who want or need to show clients or employers how they spend their time. Tracks produces enormous amounts of statistics: completed actions, the number of actions completed in the last 12 months, the days of the week on which actions are created and completed, and much more.

[Mobile interface]

Need to get at the data from a mobile phone? Tracks also provides a mobile interface via sitename.com/mobile/, without any additional configuration required. Finally, Tracks has an API, so users who have a bit of shell scripting, Ruby, or AppleScript under their belt can get data in and out of Tracks using more than the Web interface.
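
As a rough illustration of scripting against that API, the fragment below uses libcurl to pull the to-do list from a Tracks instance. The /todos.xml path, the host name, and the use of HTTP basic authentication are assumptions made for the sake of the example, not verified details of the Tracks API:

    /* Hypothetical sketch: fetch the to-do list from a Tracks instance.
     * The URL path and basic-auth scheme are assumptions, not documented
     * Tracks API details.  Build with: gcc fetch_todos.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        CURLcode res;

        if (!curl)
            return 1;
        /* Placeholder URL and credentials for a real Tracks install */
        curl_easy_setopt(curl, CURLOPT_URL, "http://tracks.example.com/todos.xml");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
        /* libcurl's default write callback prints the XML response to stdout */
        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }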

When running under the BitNami stack on the test machine, Tracks was as snappy as a desktop application — sometimes more so. The only glitch discovered was that Tracks would throw an Apache error when trying to see recurring tasks. Overall, Tracks seems a very solid application and is quite responsive.

KOrganizer

For KDE users, KOrganizer is the best of the lot. We didn't find any really compelling stand-alone task managers for KDE, but KOrganizer includes a really handy task manager alongside a journaling application and calendar.

[KOrganizer]

KOrganizer is a bit more heavyweight than the others, but very full-featured. To-Do entries can have starting dates and times, notes, attachments, reminders, and attendees. Users who need or want all of that information should definitely explore KOrganizer.

Or it can be much simpler. The stand-alone To-Do list lets users enter task entries as one-liners with no additional information. It's up to the user to choose the level of complexity. Tasks are displayed in the To-Do list view and, if they are assigned a completion date, in the calendar view.

[Todo entry]

KOrganizer is very complete for users who are looking for a desktop app and can live without a lot of import/export options. Users who are looking to sync with mobile devices or Web-based services might find KOrganizer a bit limited, but if those aren't on the list of criteria then it's a very handy solution.

Any Linux distro that packages KDE should have KOrganizer in the package repositories, if it's not already installed by default.

Getting Things GNOME

One of the newer task management applications available for Linux is Getting Things GNOME (GTG). As the name suggests, it's a GNOME-based implementation of Getting Things Done.

[GTG]

Despite the low version number (0.2 was only released in December), GTG is stable and relatively full-featured. Since GTG is new, users will need to either compile from source or look at something like Ubuntu's PPAs or the openSUSE Build Service, where recent releases are available.

Entering tasks is easy: all that's required is a one-liner. Users can opt to tag tasks and add notes, due dates, and so on — but all that's required is a one-line entry in the main window. Double-click on a task, or click on "New Task" in the GTG main window, and GTG provides a separate window where users can enter additional details about a task.

[GTG subtasks]

For management of larger projects, tasks can also have sub-tasks. So if there's a lot of Yak Shaving involved with getting something done, it's possible to track every step of the grooming. This will come in handy for users in a corporate environment.

GTG also supports plugins, and works with several other GNOME productivity tools, including Tomboy and the Hamster Applet for time tracking. At first, it looked like GTG lacked support for a notification icon, which seemed odd — but support for that is also provided through a plugin.

Overall, GTG is a fairly complete solution and should be sufficient for anyone who's looking to implement GTD on a GNOME desktop.

Summary

For lightweight task management, GTG heads the pack. It's light, fast, and provides just enough functionality that users can implement GTD or their own brand of task management. Users looking for a more complete solution will probably find Tracks the most attractive. It's very flexible and provides a very usable interface, despite being a Web-based application.

No matter what the preference, though, it should be possible to find a Linux-based task manager to fit the bill. This is one area that FLOSS tools have covered very well.

Comments (19 posted)

Disney and Sony release open source 3-D modeling utilities

January 20, 2010

This article was contributed by Nathan Willis

Not one, but two, Hollywood movie studios — Disney and Sony Pictures Imageworks — released open source software this past week. The film industry has had a mixed relationship with the free software movement in the past — on the one hand vehemently opposing it on issues like the Digital Millennium Copyright Act, but on the other making heavy use of free software to save money in visual effects render farms. This week's releases are indeed in the visual effects category: tools for 3-D modeling. Disney's project automates texture mapping, and Sony's is a new shading language. Both could benefit the open source community.

Ptex

Disney's Ptex is a library that simplifies the otherwise time-consuming task of mapping the surface of a 3-D model into two dimensions for the purpose of painting it with a "texture," or 2-D image skin. The traditional method unwraps the surface of the model into a flat image template called a UV map (the name refers to the U and V coordinate axes used on the map to distinguish them from the X, Y, and Z coordinate system of the original model). A UV map can be created automatically, but almost always requires manual tweaking by the 3-D artist in order to minimize awkward seams, overlaps, and other artifacts that are difficult to paint. Furthermore, complex models are typically split into multiple texture files, to simplify the resulting maps where the geometry itself is awkward, and to allow for higher resolution textures in important areas.

Ptex speeds up the process in several ways. First, it stores a separate texture for each face of the original 3-D model, eliminating the need for the user to painstakingly unwrap the model and manually adjust the UV map for optimization. Second, it compactly stores all of the per-face textures for the model in a single file, along with adjacency information, thus reducing the I/O load on the animation system. Third, it allows different resolutions for each face, and allows changing face resolutions, both eliminating the need to split a model into multiple textures and allowing the artist to make adjustments directly within the painting application. Finally, it uses the adjacency data to apply filters on the seams between adjoining faces — even those of different resolutions — resulting in a smooth, seamless final texture.
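
A tiny sketch may make the storage scheme more concrete. The structures below illustrate the kind of per-face bookkeeping described above; they are an invented example of the concept, not the actual Ptex file format or API:

    /* Conceptual sketch of per-face texture storage; not the real
     * Ptex format, just an illustration of the idea. */
    #include <stdint.h>

    struct face_texture {
        uint8_t  ulog2, vlog2;  /* per-face resolution: 2^ulog2 x 2^vlog2 texels */
        int32_t  adjfaces[4];   /* adjacent face IDs, -1 at a mesh boundary */
        uint8_t  adjedges[4];   /* which edge of each neighbor meets this face */
        float   *texels;        /* this face's texel data */
    };

    struct model_texture {
        uint32_t nfaces;            /* one entry per face of the 3-D model */
        struct face_texture *face;  /* all faces stored together in one file */
    };

The adjacency arrays are what allow the filtering step to blend across seams, even when two neighboring faces use different resolutions.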

Disney first used Ptex in the 2008 animated short Glago's Guest, and subsequently on "virtually every surface" of the feature-length film Bolt. A presentation available at the Ptex web site describes the savings in both computing time and artist workflow. Ptex reduced the number of I/O calls needed to render one model by a factor of 2,000, and cut the number of CPU cycles spent on another by a factor of 13.

Just as importantly, however, incorporating Ptex into the animation studio's modeling and painting applications speeds up the creative workflow of the team. The need to manually UV-map every model forced the 3-D modelers to think about how the model would be painted — which should not be their concern — and forced the painters to both learn and, periodically, adjust the UV maps.

The Ptex team has posted a video to YouTube demonstrating how artists interact with a Ptex texture. It is interesting viewing even for those unfamiliar with 3-D modeling and animation; the narrator points out several steps in the process where Ptex allows him to simply paint the model without concern for the underlying geometry, while an older system would necessitate stopping to change the UV map.

The Ptex source code is available under an MIT-style license through GitHub. The project's web site also includes documentation of the Ptex library API and a file format specification. A 2008 paper is also provided that goes into considerably more detail on the format and filtering. Of note, when the paper was written Ptex only supported one type of 3-D mesh, Catmull-Clark surfaces that use quadrilateral faces. Since then, Ptex has been extended to support additional mesh types, including non-quadrilateral Catmull-Clark meshes and loop subdivision surfaces.

Open Shading Language

Sony's Open Shading Language (OSL) is a language for writing shaders, the algorithms used to calculate high-quality simulated lighting effects in the final rendering stage of 3-D animation. Shaders can do everything from modeling how a surface reflects and scatters light to describing how light is transformed as it passes through a volume of space (such as fog). High-quality shaders can be of arbitrary complexity, thus the necessity of writing them in a programming language.

The most popular shading language is the RenderMan Shading Language (RSL) developed by Pixar. RSL is supported by a wide variety of free and proprietary tools, and several competing languages use the same basic C-like syntax as RSL.

The OSL language specification [PDF] is hosted at Google Code, and is distributed under the BSD license. OSL does away with several distinctions made in RSL and other shading languages, such as treating light sources as a distinct type of shader and transparency as a distinct property. Instead, OSL shaders calculate a "radiance closure," a symbolic representation of how the surface or volume in question affects light. A light source is merely a closure that emits a positive quantity of light, and a semi-transparent material is merely a closure that permits some quantity of light to pass through it.

Closures are also different from RSL-style shaders because, as an abstract representation of a surface or volume's behavior, they can be evaluated from any direction — RSL-style shaders can only be evaluated to a fixed value that represents the final color as calculated for a particular viewing angle. OSL permits closures to be saved for later evaluation and sampled or referenced without re-rendering. An OSL scene can be described as a connected network of closures that are "integrated" as needed by the renderer. Depending on the viewing angle and lighting for a particular rendered pixel, some closures may not be evaluated at all, and these dependencies are determined automatically.
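
A rough way to picture a radiance closure is as a small symbolic expression that is kept around unevaluated until the renderer samples it. The sketch below is an invented illustration of that idea, not Sony's implementation:

    /* Conceptual sketch: a closure as a symbolic, weighted list of
     * primitive terms, evaluated only when the renderer asks for a
     * value.  An invented illustration; not Sony's code. */
    enum term_type { DIFFUSE, EMISSION, TRANSPARENT };

    struct closure {
        enum term_type type;
        float weight[3];        /* RGB weight of this term */
        struct closure *next;   /* terms sum into a network */
    };

    /* Sample the closure for one channel and incident angle, on demand.
     * A light source is just a closure with an EMISSION term; a
     * semi-transparent material is just one with a TRANSPARENT term. */
    float eval_closure(const struct closure *c, int channel, float cos_in)
    {
        float sum = 0.0f;
        for (; c; c = c->next) {
            switch (c->type) {
            case DIFFUSE:     sum += c->weight[channel] * cos_in; break;
            case EMISSION:    sum += c->weight[channel];          break;
            case TRANSPARENT: /* light passes through; the integrator
                                 follows the ray instead */        break;
            }
        }
        return sum;
    }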

The practical upshot of OSL is that shader writers can focus more on modeling the physical properties of a surface or volume, and not explicitly concern themselves with implementation details imposed by other, more structured shading languages. The trade-off is that OSL requires significant changes to the renderer to support its significantly different approach. So far, no such renderer is available outside Sony.

The implementation currently available from the project provides a handful of sample OSL shaders, a compiler for translating OSL shaders into an intermediate bytecode form, a library that interprets OSL shaders, and a sample program called testshade that can execute a shader to produce a single test image.

The OSL Introduction page warns visitors about the pre-production state of the code, but notes that Sony has implemented OSL in its production workflow and is beginning to use it for real projects. OSL is currently slower than the shaders Sony already uses, and the team has identified some missing functionality that needs to be added to the language.

Nevertheless, the OSL team is confident that it will meet its production goals, and promises to roll out updates to the public code in the coming weeks and months. Further out, it is interested in building alternate back-ends that could translate OSL shaders into code executable on GPUs, and experimenting with the LLVM compiler as an alternative to the present bytecode approach.

Open source integration

Considering the freshness of both projects, it is no surprise that neither has a clear future with existing open source 3-D tools like Blender. Both have attracted considerable attention in the community, though, in discussions at sites like Blenderartists.org and BlenderNation. The reaction has been positive, as is usually the case with a new open source utility.

As one might expect, artists who use Blender seem more excited about the prospect of utilizing Ptex, since it frees them from the decidedly un-artistic step of UV-mapping. Less clear is what to make of OSL, which still has a ways to go before it can be evaluated in real-world circumstances. Sony has demonstrated a good track record with its open source projects, however — it released open source tools for database management, voxel storage, and Python string handling in 2009, and has continued to update its code.

Open source developers who only remember the Hollywood movie studios for their legal assaults on DVD decryption and BitTorrent technology would do well to investigate the moves made by the animation studios. However upper management feels, the animators and tool developers there clearly share common ground with the community, and letting that go to waste would certainly be a mistake.

Comments (5 posted)

Page editor: Jonathan Corbet

Security

BackTrack 4: the security professional's toolbox

January 20, 2010

This article was contributed by Koen Vervloesem

After a beta period of almost a year, the developers of BackTrack have released the long-awaited successor to version 3. This specialized Linux distribution keeps its focus on security tools for penetration testers and security professionals, but also expands in a new direction: forensic investigations. It comes as a live distribution that is also installable to a hard drive, and provides hundreds of open source security tools in a categorized menu hierarchy.

While previous releases were based on the Slackware-derived SLAX, BackTrack 4 (code name "pwnsauce") is based on Ubuntu 8.10 ("Intrepid Ibex"). However, this is not a typical Ubuntu spin-off with a pre-chosen package set and some eye candy glued on top: many of the tools have received a custom configuration or patches to accommodate the needs of security professionals. Therefore, the developers have set up their own package repositories for updates. Under the hood lies a 2.6.30 kernel with a variety of patched wireless drivers to "enhance wireless injection attacks" as well as some older wireless drivers for stability.

BackTrack 4 can be downloaded as a 1.5 GB ISO file or as a 2 GB VMware image. Actually, the ISO file is all you need in most circumstances: it can be burned to a DVD, written to a USB stick with tools such as Unetbootin, or launched as a virtual machine in VirtualBox, VMware, Xen, KVM, and so forth. Instead of using it as a live system, BackTrack 4 can now also be installed from within the live environment, thanks to Ubuntu's Ubiquity installer. The project's website lists tutorials for a couple of installation types, including an installation to hard disk, a dual boot installation, or a persistent installation on a USB stick.

Working with BackTrack

[frame buffer console]

After choosing the default option in the GRUB menu, BackTrack starts with a stylish frame buffer console. One can start working right away on the command line, or fire up a graphical desktop environment with startx. This presents the user with a KDE 3 desktop which has some nice tweaks. For example, there is a Run box embedded in the panel at the bottom, which allows applications to be run without invoking a terminal first. However, some of the tweaks are annoying. For example, the KDE desktop welcomes the user with a very loud startup tune and many system sounds are set at an equally loud level. Also keep in mind that, for the sake of security, networking is disabled by default, so the user has to fire it up manually with a /etc/init.d/networking start command.

The purpose of BackTrack is to present a collection of hundreds of open source security tools. It would be beyond the scope of this article to list them all. Luckily, all these tools are well organized in different submenus [BackTrack menu] of the "Backtrack" menu: "Information Gathering", "Network Mapping", "Vulnerability Identification", "Web Application Analysis", "Radio Network Analysis", "Penetration", "Privilege Escalation", "Maintaining Access", "Digital Forensics", "Reverse Engineering", "Voice Over IP", and "Miscellaneous". Each submenu is further subdivided into subcategories. Most of the tools are command line utilities, but a nice feature is that the menu items open a terminal window with the relevant tool showing its usage info (e.g. with the --help option).

The start menu also has some general menus like "Internet", "Graphics", "Multimedia", "System", "Utilities", etc. containing "normal" programs. The nice thing about it is that even some of these programs have a custom configuration. For example, Firefox is configured with the NoScript extension, protecting the penetration tester against malicious JavaScript on hacker websites he probably visits, the Tamper Data extension to view and modify HTTP headers, and the HackBar tool bar to help find and test SQL injections and cross-site scripting (XSS) holes. Moreover, the bookmarks tool bar is filled with some relevant web sites, such as the BackTrack web site and the Metasploit Project. Installing other software is possible with Synaptic or apt-get, which have access to the BackTrack repository, and getting an up-to-date BackTrack is as simple as an apt-get update && apt-get upgrade command.

With each release, BackTrack adds some new software. Starting with BackTrack 4, the distribution supports accelerated password cracking assisted by graphics cards. The Pyrit WPA cracking tool does this using NVIDIA's CUDA. Another newcomer is OpenVAS: previous releases of BackTrack didn't ship with the vulnerability scanner Nessus because of license issues, but BackTrack 4 finally makes up for this with the inclusion of the GPL-licensed OpenVAS.

Forensics

BackTrack 4 adds a new focus, indicated by the new boot menu item "Start BackTrack Forensics". Traditionally, BackTrack wasn't suitable for forensic purposes because it automatically mounts available drives and uses the swap partition it finds on the hard drive. In a forensic investigation of a computer this is obviously a recipe for disaster as it changes last mount times, and also wipes out hidden data in the swap partition which could be important. BackTrack 4 still does all that by default, but not if you start it with the forensics option in the boot menu.

The BackTrack developers have also expanded their collection of tools in the "Digital Forensics" menu. All of this means that BackTrack is now not only useful for penetration testers and security professionals, but also more and more for forensic experts. Of course if used in a forensic investigation it is of utmost importance that BackTrack not go through an unattended boot, as this will use the standard boot mode which 'contaminates' the machine. To be really on the safe side, forensic experts should change the default boot option to the forensic one.

Conclusion

Although BackTrack documentation itself is scarce and fragmentary, this is not a big issue, because it's more about the tools than about the distribution. For people wanting to train their penetration testing skills, the developers offer a "Penetration Testing with BackTrack" course. Upon completion of this course, students become eligible to take a certification challenge in an unfamiliar lab. After successful completion of this hands-on challenge, they receive the Offensive Security Certified Professional (OSCP) certification.

More than ever, BackTrack is an excellent Linux distribution for security professionals. With the move from a SLAX-based live CD to a full-blown Ubuntu-based Linux distribution, it's much easier to update the system, install other software, or customize the distribution. New tools like OpenVAS and Pyrit are a welcome addition to the security professional's toolbox. In addition, with the increased focus on forensics, the distribution will surely find some use outside the traditional penetration testers' scene.

Comments (none posted)

New vulnerabilities

aria2: denial of service

Package(s): aria2        CVE #(s): CVE-2009-3617
Created: January 14, 2010        Updated: January 20, 2010
Description: From the CVE entry:

Format string vulnerability in the AbstractCommand::onAbort function in src/AbstractCommand.cc in aria2 before 1.6.2, when logging is enabled, allows remote attackers to execute arbitrary code or cause a denial of service (application crash) via format string specifiers in a download URI. NOTE: some of these details are obtained from third party information.

Alerts:
Gentoo 201001-06 aria2 2010-01-13

Comments (none posted)

bash: multiple vulnerabilities

Package(s): bash        CVE #(s): CVE-2010-0002 CVE-2008-5374
Created: January 14, 2010        Updated: September 23, 2011
Description: From the Mandriva alert:

A vulnerability have been discovered in Mandriva bash package, which could allow a malicious user to hide files from the ls command, or garble its output by crafting files or directories which contain special characters or escape sequences (CVE-2010-0002). This update fixes the issue by disabling the display of control characters by default.

Additionally, this update fixes the unsafe file creation in bash-doc sample scripts (CVE-2008-5374).

Alerts:
Gentoo 201210-05 bash 2012-10-19
CentOS CESA-2011:1073 bash 2011-09-22
Scientific Linux SL-bash-20110721 bash 2011-07-21
Red Hat RHSA-2011:1073-01 bash 2011-07-21
Red Hat RHSA-2011:0261-01 bash 2011-02-16
Mandriva MDVSA-2010:004 bash 2010-01-13

Comments (none posted)

bind: multiple vulnerabilities

Package(s): bind        CVE #(s): CVE-2010-0097 CVE-2010-0290
Created: January 20, 2010        Updated: June 28, 2010
Description:

From the Red Hat advisory:

A flaw was found in the BIND DNSSEC NSEC/NSEC3 validation code. If BIND was running as a DNSSEC-validating resolver, it could incorrectly cache NXDOMAIN responses, as if they were valid, for records proven by NSEC or NSEC3 to exist. A remote attacker could use this flaw to cause a BIND server to return the bogus, cached NXDOMAIN responses for valid records and prevent users from retrieving those records (denial of service). (CVE-2010-0097)

The original fix for CVE-2009-4022 was found to be incomplete. BIND was incorrectly caching certain responses without performing proper DNSSEC validation. CNAME and DNAME records could be cached, without proper DNSSEC validation, when received from processing recursive client queries that requested DNSSEC records but indicated that checking should be disabled. A remote attacker could use this flaw to bypass the DNSSEC validation check and perform a cache poisoning attack if the target BIND server was receiving such client queries. (CVE-2010-0290)

Alerts:
Debian DSA-2054-2 bind9 2010-06-15
Gentoo 201006-11 bind 2010-06-01
Slackware SSA:2010-176-01 bind 2010-06-28
Debian DSA-2054-1 bind9 2010-06-04
rPath rPSA-2010-0018-1 bind 2010-03-15
Mandriva MDVSA-2010:021 bind 2010-01-20
Fedora FEDORA-2010-0868 bind 2010-01-20
Fedora FEDORA-2010-0861 bind 2010-01-20
Ubuntu USN-888-1 bind9 2010-01-20
CentOS CESA-2010:0062 bind 2010-01-20
Red Hat RHSA-2010:0062-02 bind 2010-01-20

Comments (none posted)

gcc: arbitrary code execution

Package(s): gcc        CVE #(s): CVE-2009-3736
Created: January 14, 2010        Updated: March 22, 2010
Description: from the Red Hat security update:

A flaw was found in the way GNU Libtool's libltdl library looked for libraries to load. It was possible for libltdl to load a malicious library from the current working directory. In certain configurations, if a local attacker is able to trick a local user into running a Java application (which uses a function to load native libraries, such as System.loadLibrary) from within an attacker-controlled directory containing a malicious library or module, the attacker could possibly execute arbitrary code with the privileges of the user running the Java application.

Alerts:
Gentoo 201412-08 insight, perl-tk, sourcenav, tk, partimage, bitdefender-console, mlmmj, acl, xinit, gzip, ncompress, liblzw, splashutils, m4, kdm, gtk+, kget, dvipng, beanstalkd, pmount, pam_krb5, gv, lftp, uzbl, slim, iputils, dvbstreamer 2014-12-11
SuSE SUSE-SR:2010:006 2010-03-15
Mandriva MDVSA-2010:056 openoffice.org 2010-03-05
Fedora FEDORA-2010-2341 mingw32-libltdl 2010-02-21
CentOS CESA-2010:0039 gcc 2010-01-15
CentOS CESA-2010:0039 gcc 2010-01-15
CentOS CESA-2010:0039 gcc 2010-01-14
Fedora FEDORA-2010-2943 mingw32-libltdl 2010-02-26
Mandriva MDVSA-2010:035 openoffice.org 2010-02-11

Comments (none posted)

glibc: encrypted password disclosure via NIS

Package(s): glibc        CVE #(s): CVE-2010-0015
Created: January 20, 2010        Updated: October 28, 2010
Description:

From the Debian advisory:

Christoph Pleger has discovered that the GNU C Library (aka glibc) and its derivatives add information from the passwd.adjunct.byname map to entries in the passwd map, which allows local users to obtain the encrypted passwords of NIS accounts by calling the getpwnam function.

Alerts:
Ubuntu USN-1396-1 eglibc, glibc 2012-03-09
SUSE SUSE-SA:2010:052 glibc 2010-10-28
Mandriva MDVSA-2010:112 glibc 2010-06-08
Mandriva MDVSA-2010:111 glibc 2010-06-08
Debian DSA-1973-1 glibc 2010-01-19

Comments (none posted)

gzip: arbitrary code execution

Package(s): gzip        CVE #(s): CVE-2009-2624
Created: January 20, 2010        Updated: March 8, 2010
Description:

From the Debian advisory:

Thiemo Nagel discovered a missing input sanitation flaw in the way gzip used to decompress data blocks for dynamic Huffman codes, which could lead to the execution of arbitrary code when trying to decompress a crafted archive. This issue is a reappearance of CVE-2006-4334 and only affects the lenny version.

Alerts:
Gentoo 201412-08 insight, perl-tk, sourcenav, tk, partimage, bitdefender-console, mlmmj, acl, xinit, gzip, ncompress, liblzw, splashutils, m4, kdm, gtk+, kget, dvipng, beanstalkd, pmount, pam_krb5, gv, lftp, uzbl, slim, iputils, dvbstreamer 2014-12-11
rPath rPSA-2010-0013-1 gzip 2010-03-07
Ubuntu USN-889-1 gzip 2010-01-20
Mandriva MDVSA-2010:020 gzip 2010-01-20
Debian DSA-1974-1 gzip 2010-01-20
Fedora FEDORA-2010-0884 gzip 2010-01-22
Fedora FEDORA-2010-0964 gzip 2010-01-22

Comments (none posted)

gzip: arbitrary code execution

Package(s): gzip        CVE #(s): CVE-2010-0001
Created: January 20, 2010        Updated: October 17, 2011
Description:

From the Red Hat advisory:

An integer underflow flaw, leading to an array index error, was found in the way gzip expanded archive files compressed with the Lempel-Ziv-Welch (LZW) compression algorithm. If a victim expanded a specially-crafted archive, it could cause gzip to crash or, potentially, execute arbitrary code with the privileges of the user running gzip. This flaw only affects 64-bit systems. (CVE-2010-0001)

Alerts:
Gentoo 201412-08 insight, perl-tk, sourcenav, tk, partimage, bitdefender-console, mlmmj, acl, xinit, gzip, ncompress, liblzw, splashutils, m4, kdm, gtk+, kget, dvipng, beanstalkd, pmount, pam_krb5, gv, lftp, uzbl, slim, iputils, dvbstreamer 2014-12-11
Mandriva MDVSA-2011:152 ncompress 2011-10-17
Debian DSA-2074-1 ncompress 2010-07-21
Pardus 2010-86 ncompress 2010-06-24
rPath rPSA-2010-0013-1 gzip 2010-03-07
CentOS CESA-2010:0061 gzip 2010-01-22
Red Hat RHSA-2010:0061-02 gzip 2010-01-20
Ubuntu USN-889-1 gzip 2010-01-20
Mandriva MDVSA-2010:020 gzip 2010-01-20
Mandriva MDVSA-2010:019 gzip 2010-01-20
Debian DSA-1974-1 gzip 2010-01-20
CentOS CESA-2010:0061 gzip 2010-01-20
CentOS CESA-2010:0061 gzip 2010-01-20
Fedora FEDORA-2010-0884 gzip 2010-01-22
Slackware SSA:2010-060-03 gzip 2010-03-02
Fedora FEDORA-2010-0964 gzip 2010-01-22

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel        CVE #(s): CVE-2006-6304 CVE-2009-3556 CVE-2009-4020 CVE-2009-4141 CVE-2009-4272
Created: January 20, 2010        Updated: November 5, 2012
Description:

From the Red Hat advisory:

the RHSA-2009:0225 update introduced a rewrite attack flaw in the do_coredump() function. A local attacker able to guess the file name a process is going to dump its core to, prior to the process crashing, could use this flaw to append data to the dumped core file. This issue only affects systems that have "/proc/sys/fs/suid_dumpable" set to 2 (the default value is 0). (CVE-2006-6304, Moderate)

The fix for CVE-2006-6304 changes the expected behavior: With suid_dumpable set to 2, the core file will not be recorded if the file already exists. For example, core files will not be overwritten on subsequent crashes of processes whose core files map to the same name.

the RHBA-2008:0314 update introduced N_Port ID Virtualization (NPIV) support in the qla2xxx driver, resulting in two new sysfs pseudo files, "/sys/class/scsi_host/[a qla2xxx host]/vport_create" and "vport_delete". These two files were world-writable by default, allowing a local user to change SCSI host attributes. This flaw only affects systems using the qla2xxx driver and NPIV capable hardware. (CVE-2009-3556, Moderate)

a buffer overflow flaw was found in the hfs_bnode_read() function in the HFS file system implementation. This could lead to a denial of service if a user browsed a specially-crafted HFS file system, for example, by running "ls". (CVE-2009-4020, Low)

Tavis Ormandy discovered a deficiency in the fasync_helper() implementation. This could allow a local, unprivileged user to leverage a use-after-free of locked, asynchronous file descriptors to cause a denial of service or privilege escalation. (CVE-2009-4141, Important)

the Parallels Virtuozzo Containers team reported the RHSA-2009:1243 update introduced two flaws in the routing implementation. If an attacker was able to cause a large enough number of collisions in the routing hash table (via specially-crafted packets) for the emergency route flush to trigger, a deadlock could occur. Secondly, if the kernel routing cache was disabled, an uninitialized pointer would be left behind after a route lookup, leading to a kernel panic. (CVE-2009-4272, Important)

Alerts:
SUSE SUSE-SU-2015:0812-1 kernel 2015-04-30
openSUSE openSUSE-SU-2012:1439-1 kernel 2012-11-05
Oracle ELSA-2012-1323 kernel 2012-10-04
Oracle ELSA-2012-1323 kernel 2012-10-03
CentOS CESA-2012:1323 kernel 2012-10-03
Red Hat RHSA-2012:1323-01 kernel 2012-10-02
openSUSE openSUSE-SU-2012:0812-1 kernel 2012-07-03
openSUSE openSUSE-SU-2012:0799-1 kernel 2012-06-28
openSUSE openSUSE-SU-2012:0781-1 kernel 2012-06-22
SUSE SUSE-SA:2010:036 kernel 2010-09-01
SuSE SUSE-SA:2010:023 kernel 2010-05-06
SuSE SUSE-SA:2010:019 kernel 2010-03-30
Red Hat RHSA-2010:0161-01 kernel-rt 2010-03-23
Red Hat RHSA-2010:0149-01 kernel 2010-03-16
SuSE SUSE-SA:2010:016 kernel 2010-03-08
Ubuntu USN-894-1 linux, linux-source-2.6.15 2010-02-05
Red Hat RHSA-2010:0076-01 kernel 2010-02-02
Debian DSA-2004-1 linux-2.6.24 2010-02-27
Debian DSA-2003-1 linux-2.6 2010-02-22
CentOS CESA-2010:0046 kernel 2010-01-20
Red Hat RHSA-2010:0046-01 kernel 2010-01-19
SuSE SUSE-SA:2010:010 kernel 2010-02-08
SuSE SUSE-SA:2010:009 kernel 2010-02-05
Fedora FEDORA-2010-1500 kernel 2010-02-05
CentOS CESA-2010:0076 kernel 2010-02-04

Comments (1 posted)

libthai: arbitrary code execution

Package(s): libthai        CVE #(s): CVE-2009-4012
Created: January 15, 2010        Updated: February 1, 2010
Description: From the Debian advisory: Tim Starling discovered that libthai, a set of Thai language support routines, is vulnerable of integer/heap overflow. This vulnerability could allow an attacker to run arbitrary code by sending a very long string.
Alerts:
Ubuntu USN-887-1 libthai 2010-01-18
Mandriva MDVSA-2010:010 libthai 2010-01-16
Debian DSA-1971-1 libthai 2010-01-15
SuSE SUSE-SR:2010:002 virtualbox-ose, NetworkManager-gnome, avahi, acl, libthai 2010-02-01

Comments (none posted)

mysql: multiple vulnerabilities

Package(s): mysql        CVE #(s): CVE-2009-4028 CVE-2009-4030
Created: January 18, 2010        Updated: January 14, 2013
Description:

From the Mandriva advisory:

The vio_verify_callback function in viosslfactories.c in MySQL 5.0.x before 5.0.88 and 5.1.x before 5.1.41, when OpenSSL is used, accepts a value of zero for the depth of X.509 certificates, which allows man-in-the-middle attackers to spoof arbitrary SSL-based MySQL servers via a crafted certificate, as demonstrated by a certificate presented by a server linked against the yaSSL library (CVE-2009-4028).

MySQL 5.1.x before 5.1.41 allows local users to bypass certain privilege checks by calling CREATE TABLE on a MyISAM table with modified (1) DATA DIRECTORY or (2) INDEX DIRECTORY arguments that are originally associated with pathnames without symlinks, and that can point to tables created at a future time at which a pathname is modified to contain a symlink to a subdirectory of the MySQL data home directory, related to incorrect calculation of the mysql_unpacked_real_data_home value. NOTE: this vulnerability exists because of an incomplete fix for CVE-2008-4098 and CVE-2008-2079 (CVE-2009-4030).

Alerts:
Ubuntu USN-1397-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2012-03-12
Gentoo 201201-02 mysql 2012-01-05
SUSE SUSE-SR:2010:021 mysql, dhcp, monotone, moodle, openssl 2010-11-16
SuSE SUSE-SR:2010:011 dovecot12, cacti, java-1_6_0-openjdk, irssi, tar, fuse, apache2, libmysqlclient-devel, cpio, moodle, libmikmod, libicecore, evolution-data-server, libpng/libpng-devel, libesmtp 2010-05-10
SuSE SUSE-SR:2010:007 cifs-mount/samba, compiz-fusion-plugins-main, cron, cups, ethereal/wireshark, krb5, mysql, pulseaudio, squid/squid3, viewvc 2010-03-30
rPath rPSA-2010-0014-1 mysql 2010-03-07
Debian DSA-1997-1 mysql-dfsg-5.0 2010-02-14
Mandriva MDVSA-2010:012 mysql 2010-01-17
Mandriva MDVSA-2010:011 mysql 2010-01-17
Red Hat RHSA-2010:0109-01 mysql 2010-02-16
CentOS CESA-2010:0109 mysql 2010-03-01
CentOS CESA-2010:0110 mysql 2010-02-17
Red Hat RHSA-2010:0110-01 mysql 2010-02-16
Ubuntu USN-897-1 mysql-dfsg-5.0, mysql-dfsg-5.1 2010-02-10

Comments (none posted)

openssl: denial of service

Package(s): openssl        CVE #(s): CVE-2009-4355
Created: January 14, 2010        Updated: April 19, 2010
Description: From the Debian alert:

It was discovered that a significant memory leak could occur in openssl, related to the reinitialization of zlib. This could result in a remotely exploitable denial of service vulnerability when using the Apache httpd server in a configuration where mod_ssl, mod_php5, and the php5-curl extension are loaded.

Alerts:
Gentoo 201110-01 openssl 2011-10-09
Fedora FEDORA-2010-5357 openssl 2010-03-26
Mandriva MDVSA-2010:022 openssl 2010-01-21
CentOS CESA-2010:0054 openssl 2010-01-20
Slackware SSA:2010-060-02 openssl 2010-03-02
Red Hat RHSA-2010:0054-01 openssl 2010-01-19
rPath rPSA-2010-0004-1 openssl 2010-01-14
Ubuntu USN-884-1 openssl 2010-01-14
Debian DSA-1970-1 openssl 2010-01-13

Comments (none posted)

phpMyAdmin: multiple vulnerabilities

Package(s): phpMyAdmin        CVE #(s): CVE-2008-7251 CVE-2008-7252 CVE-2009-4605
Created: January 20, 2010        Updated: April 19, 2010
Description:

From the Mandriva advisory:

libraries/File.class.php in phpMyAdmin 2.11.x before 2.11.10 creates a temporary directory with 0777 permissions, which has unknown impact and attack vectors (CVE-2008-7251).

libraries/File.class.php in phpMyAdmin 2.11.x before 2.11.10 uses predictable filenames for temporary files, which has unknown impact and attack vectors (CVE-2008-7252).

scripts/setup.php (aka the setup script) in phpMyAdmin 2.11.x before 2.11.10 calls the unserialize function on the values of the (1) configuration and (2) v[0] parameters, which might allow remote attackers to conduct cross-site request forgery (CSRF) attacks via unspecified vectors (CVE-2009-4605).

Alerts:
Gentoo 201201-01 phpmyadmin 2012-01-04
Debian DSA-2034-1 phpmyadmin 2010-04-17
Mandriva MDVSA-2010:018 phpMyAdmin 2010-01-19

Comments (none posted)

php-ZendFramework: multiple vulnerabilities

Package(s): php-ZendFramework        CVE #(s):
Created: January 18, 2010        Updated: January 20, 2010
Description:

From the Zend Framework release notes for 1.9.7:

The following security vulnerabilities are resolved in these releases:

  • ZF2010-06: Potential XSS or HTML Injection vector in Zend_Json
  • ZF2010-05: Potential XSS vector in Zend_Service_ReCaptcha_MailHide
  • ZF2010-04: Potential MIME-type Injection in Zend_File_Transfer
  • ZF2010-03: Potential XSS vector in Zend_Filter_StripTags when comments allowed
  • ZF2010-02: Potential XSS vector in Zend_Dojo_View_Helper_Editor
  • ZF2010-01: Potential XSS vectors due to inconsistent encodings
Alerts:
Fedora FEDORA-2010-0652 php-ZendFramework 2010-01-15
Fedora FEDORA-2010-0601 php-ZendFramework 2010-01-15

Comments (none posted)

ruby: escape sequence injection

Package(s): ruby        CVE #(s): CVE-2009-4492
Created: January 14, 2010        Updated: August 15, 2011
Description: From the Fedora alert:

A security vulnerability is found on WEBrick module in Ruby currently shipped on Fedora 11 that WEBrick lets attackers to inject malicious escape sequences to its logs, making it possible for dangerous control characters to be executed on a victim's terminal emulator.

Alerts:
CentOS CESA-2011:0908 ruby 2011-08-14
CentOS CESA-2011:0909 ruby 2011-06-30
Scientific Linux SL-ruby-20110628 ruby 2011-06-28
Scientific Linux SL-ruby-20110628 ruby 2011-06-28
Red Hat RHSA-2011:0909-01 ruby 2011-06-28
Red Hat RHSA-2011:0908-01 ruby 2011-06-28
Pardus 2010-19 ruby 2010-02-04
Mandriva MDVSA-2010:017 ruby 2010-01-19
Fedora FEDORA-2010-0530 ruby 2010-01-14
Gentoo 201001-09 ruby 2010-01-14
Fedora FEDORA-2010-0533 ruby 2010-01-14
Ubuntu USN-900-1 ruby1.9 2010-02-16

Comments (none posted)

squirrelmail: arbitrary code execution

Package(s): squirrelmail        CVE #(s): CVE-2009-1381
Created: January 14, 2010        Updated: January 20, 2010
Description: From the CVE entry:

The map_yp_alias function in functions/imap_general.php in SquirrelMail before 1.4.19-1 on Debian GNU/Linux, and possibly other operating systems and versions, allows remote attackers to execute arbitrary commands via shell metacharacters in a username string that is used by the ypmatch program. NOTE: this issue exists because of an incomplete fix for CVE-2009-1579.

Alerts:
Gentoo 201001-08 squirrelmail 2010-01-13

Comments (none posted)

systemtap: arbitrary code execution

Package(s): systemtap        CVE #(s): CVE-2009-4273
Created: January 18, 2010        Updated: April 27, 2010
Description:

From the Red Hat bugzilla entry:

A flaw was found in the "stap-server" network compilation server, an optional part of systemtap. Part of the server is written in bash and does not adequately sanitize its inputs, which are essentially full command line parameter sets from a client. Remote users may be able to abuse quoting/spacing/metacharacters to execute shell code on behalf of the compile server process/user (normally a fully unprivileged synthetic userid).

Alerts:
SuSE SUSE-SR:2010:010 krb5, clamav, systemtap, apache2, glib2, mediawiki, apache 2010-04-27
Fedora FEDORA-2010-1720 systemtap 2010-02-18
Fedora FEDORA-2010-0688 systemtap 2010-01-17
Fedora FEDORA-2010-0671 systemtap 2010-01-17
CentOS CESA-2010:0124 systemtap 2010-03-02
Red Hat RHSA-2010:0124-01 systemtap 2010-03-01
Fedora FEDORA-2010-1373 systemtap 2010-02-18

Comments (none posted)

transmission: cross-site request forgery

Package(s): transmission        CVE #(s): CVE-2009-1757
Created: January 18, 2010        Updated: January 20, 2010
Description:

From the Mandriva advisory:

Cross-site request forgery (CSRF) vulnerability in Transmission 1.5 before 1.53 and 1.6 before 1.61 allows remote attackers to hijack the authentication of unspecified victims via unknown vectors (CVE-2009-1757).

Alerts:
Mandriva MDVSA-2010:013 transmission 2010-01-18

Comments (none posted)

virtualbox: multiple vulnerabilities

Package(s): virtualbox        CVE #(s): CVE-2009-3692 CVE-2009-3940
Created: January 14, 2010        Updated: March 11, 2010
Description: From the Gentoo alert:

  • A shell metacharacter injection in popen() (CVE-2009-3692) and a possible buffer overflow in strncpy() in the VBoxNetAdpCtl configuration tool.
  • An unspecified vulnerability in VirtualBox Guest Additions (CVE-2009-3940).

Alerts:
Mandriva MDVSA-2010:059 virtualbox 2010-03-10
Gentoo 201001-04 virtualbox 2010-01-13
SuSE SUSE-SR:2010:002 virtualbox-ose, NetworkManager-gnome, avahi, acl, libthai 2010-02-01

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.33-rc4, released on January 12. There have been 366 non-merge commits to Linus's tree since -rc4, mostly bug fixes. One might expect an -rc5 soon, probably just after publication.

Stable updates: Three stable kernels were released on January 18: 2.6.32.4, 2.6.31.12, and 2.6.27.44. Each has multiple fixes (2.6.32.4 includes 52 patches), including some security issues. Greg Kroah-Hartman also released a status report on the stable trees, which notes that there will almost certainly be no more 2.6.31 updates, that 2.6.27 is only viable for another 6-8 months, and that he will be maintaining 2.6.32 as a "long-term" stable release.

Comments (none posted)

Quotes of the week

Since 32 bits means that any machine with 1 GB or more means HIGHMEM, the number of non-embedded machines that should run 32-bit kernels today is functionally the null set. Unfortunately Linux distros have not properly promoted 64-bit kernels for 32-bit distros; although pure 64 bits is better, it would be a *helluva* lot better if people stuck on 32 bits for compatibility reasons had a saner alternative.
-- H. Peter Anvin

Can people now accept that the reason we have rather more complex models for security policy is because generally speaking these "oh so simple" little magic switches don't actually work or solve any real world problems.
-- Alan Cox

Comments (27 posted)

devtmpfs to lose EXPERIMENTAL tag

By Jake Edge
January 20, 2010
In a fairly short period of time, devtmpfs has gone from a controversial proposal in May to being merged into the mainline for 2.6.32. It was merged as an experimental feature, though, which is something the devtmpfs developers would like to see change. Kay Sievers posted a patch that would remove the experimental designation as well as make it the default: "All major distros enable devtmpfs on recent systems, so remove the EXPERIMENTAL flag, and enable it by default to reflect how it is used today."

Comments on the patch indicate that there is little complaint about removing the experimental designation, but making it the default was not particularly popular. Arjan van de Ven complained that enabling devtmpfs by default violated a kernel convention: "we use 'default y' only for those things that used to be on, and are now turned into a config option." Sievers, at least, had never heard of that convention, but is willing to follow it—if it exists.

Alan Cox pointed out that existing distributions do not use devtmpfs, only those in development, but Sievers sees no harm for older systems:

And it should not harm any old system if it is enabled. If initramfs is used it's completely invisible, if a custom kernel with kernel-mounted rootfs is used, the udev boot script usually over-mounts the devtmpfs at /dev with an empty tmpfs, like it has always done it before.
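
The over-mounting described in that quote amounts to hiding the kernel-mounted devtmpfs under an empty tmpfs. In practice the udev boot scripts do this with the mount command; a minimal C equivalent, with an assumed mode option, would look like:

    /* Sketch: over-mount /dev with an empty tmpfs.  The devtmpfs
     * underneath is hidden, not unmounted.  "mode=0755" is an
     * illustrative option, not taken from the udev scripts. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("none", "/dev", "tmpfs", 0, "mode=0755") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }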

It is unclear whether this change is meant for 2.6.33 or is just being floated early for the 2.6.34 merge window, but the removal of EXPERIMENTAL seems to have no real opposition. Whether it becomes the default or not looks to be up in the air, but in a fairly short period of time, devtmpfs has cemented its place in the mainline kernel.

Comments (none posted)

Kernel development news

LCA: Why filesystems are hard

By Jonathan Corbet
January 20, 2010
The ext4 filesystem is reaching the culmination of a long development process. It has been marked as stable in the mainline kernel for over a year, distributions are installing it by default, and it may start to see more widespread enterprise-level deployment toward the end of this year. At linux.conf.au 2010, ext4 hacker Ted Ts'o talked about the process of stabilizing ext4 and why filesystems take a long time to become ready for production use.

In general, Ted says, people tend to be overly optimistic about how quickly a filesystem can stabilize. It is not a fast process, for a number of fairly clear reasons. Certain aspects of software can make it hard to test and debug; these include premature optimization ("the root of all evil"), the presence of large amounts of internal state, and an environment involving a lot of parallelism. Any of these features will make code more difficult to understand and complicate the testing environment.

Filesystems suffer from all of these problems. Users demand that a general-purpose filesystem be heavily optimized for a wide variety of workloads; this optimization work must be done at all levels of the code. The entire job of a filesystem is to store and manage internal state. Among other things, that makes it hard for developers to reproduce problems; specific bugs are quite likely to be associated with the state of a specific filesystem which a user may be unwilling to share even in the absence of the practical difficulties implicit in making hundreds of gigabytes of data available to developers. And parallelism is a core part of the environment for any general-purpose filesystem; there will always be many things going on at once. All of these factors combine to make filesystems difficult to stabilize.

What it comes down to, Ted says, is that filesystems, like fine wines, have to age for a fair period of time before they are ready. But there's an associated problem: the workload-dependent nature of many filesystem problems guarantees that filesystem developers cannot, by themselves, find all of the bugs in their code. There will always be a need for users to test the code and report their experiences. So filesystem developers have a strong incentive to encourage users to use the code, but the more ethical developers (at least) do not want to cause users to lose data. It's a fine line which can be hard to manage.

[Ted Ts'o] So what does it take to get a filesystem written and ready for use? As part of the process of seeking funding for Btrfs development, Ted talked to veterans of a number of filesystem development projects over the years. They all estimated that getting a filesystem to a production-ready state would require something between 75 and 100 person-years of effort - or more. That can be a daunting thing to tell corporate executives when one is trying to get a project funded; for Btrfs, Ted settled for suggesting that every company involved should donate two engineers to the cause. Alas, not all of the companies followed through completely; vague problems associated with an economic crisis got in the way.

An associated example: Sun started working on the ZFS filesystem in 2001. The project was only announced in 2005, with the first shipments happening in 2006. But it is really only in the last year or so that system administrators have gained enough confidence in ZFS to start using it in production environments. Over that period of time, the ZFS team - well over a dozen people at its peak - devoted a lot of time to the development of the filesystem.

So where do things stand with ext4? It is, Ted says, an interesting time. It has been shipping in community distributions for a while, with a number of them now installing it by default. With luck, the long term support and enterprise distributions will start shipping it soon; enterprise-level adoption can be expected to follow a year or so after that.

Over the last year or so, there have been something between 60 and 100 ext4 patches in each mainline kernel release. Just under half of those are bug fixes; many of the rest are cleanup patches. There is also still a small amount of new feature and performance enhancement work. Ted noted that the number of bug fixes has not been going down in recent releases. That, he says, is to be expected; the user community for ext4 is growing rapidly, and more users will find (and report) more bugs.

A certain number of those bugs are denial of service problems; many of those are system crashes in response to a corrupted on-disk filesystem image. A larger share of the problems are race conditions and, especially, deadlocks. There are a few problems associated with synchronization; one does not normally notice these at all unless the system crashes at the wrong time. And there are a few memory leaks, usually associated with poorly-tested error-handling paths.

The areas where the bulk of these bugs can be found are illuminating. There have been problems in the interaction between the block allocator and the online resize functionality - it turns out that people do not resize filesystems often, so this code is not always all that heavily tested. Other bugs have come up in the interaction between block pre-allocation and out-of-space handling. Online defragmentation has had a number of problems, including one nasty security bug; it turned out that nobody had really been testing that code. The FIEMAP ioctl() command, really only used by one utility, had some problems. There were issues associated with disk quotas; this feature, too, is not often used, especially by filesystem developers. And there have been problems with the no-journal mode contributed by Google; the filesystem had a number of "there is always a journal" assumptions inherited from ext3, but, again, few people have tested this feature.

The common theme here should be clear: a lot of the bugs turning up in this stage of the game are associated with little-used features which have not received as much testing as the core filesystem functions. The good news is that, as a result, most of the bugs have not actually affected that many users.

There was one problem in particular which took six months to find; about once a month, it would corrupt a filesystem belonging to a dedicated and long-suffering tester. It turned out that there was a race condition which could corrupt the disk if two processes were writing the same file at the same time. Samba, as it happens, does exactly that, whereas the applications run by most filesystem developers do not. The moral of the story: just because the filesystem developer has not seen problems does not mean that the code is truly safe.
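
To make the workload concrete, here is a minimal sketch of the kind of test that exposes this class of bug; the file path and iteration count are invented for the example, and error handling is mostly omitted. Two processes write interleaved blocks of the same file at the same time - roughly the pattern Samba generates and most desktop applications do not:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096] = "";
        pid_t child;
        int i;
        /* Point this at a file on the filesystem under test */
        int fd = open("/mnt/test/shared-file", O_CREAT | O_WRONLY, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* After the fork, the parent writes even-numbered blocks and
           the child writes odd-numbered ones, concurrently, cycling
           within a 4MB file. */
        child = fork();
        for (i = 0; i < 100000; i++) {
            off_t offset = ((2 * i + (child == 0)) % 1024) * (off_t)sizeof(buf);
            pwrite(fd, buf, sizeof(buf), offset);
        }
        if (child > 0)
            wait(NULL);
        close(fd);
        return 0;
    }

A tester running something like this in a loop - with crashes and filesystem checks in between - stands a far better chance of hitting such a race than a developer's single-writer workload ever will.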

Another bug would only strike if the system crashed at just the wrong time; it had been there for a long time before anybody noticed it. How long? The bug was present in the ext3 filesystem as well, but nobody ever reported it.

There have also been a number of performance problems which have been found and fixed. Perhaps the most significant one had to do with performance in the writeback path. According to Ted, the core writeback code in the kernel is fairly badly broken at the moment, with the result that it will not tell the filesystem to write back more than 1024 blocks at a time. That is far too small for large, fast devices. So ext4 contains a hack whereby it will write back much more data than the VFS layer has requested; it is justified, he says, because all of the other filesystems do it too. In general, nobody wants to touch the writeback code, partly because they fear breaking all of the workarounds which have found their way into filesystem-specific code over the years.
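
As a hedged sketch of what such a workaround looks like - this is not the actual ext4 code; the function name and the inflated quota value are invented for illustration - a filesystem's writepages() method can simply raise the limit the VFS handed to it before doing the I/O:

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/writeback.h>

    /* Simplified illustration of a filesystem overriding the VFS's
       small per-call writeback quota (wbc->nr_to_write). */
    static int example_writepages(struct address_space *mapping,
                                  struct writeback_control *wbc)
    {
            /* The VFS rarely asks for more than 1024 pages per call;
               bump the quota so large, fast devices stay busy. */
            if (wbc->nr_to_write < 32768)
                    wbc->nr_to_write = 32768;

            return generic_writepages(mapping, wbc);
    }

Fixing the core writeback code would make hacks of this kind unnecessary but, as noted above, nobody seems eager to take that job on.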

Ted concluded by noting that, in one way, filesystems are easy: the Linux kernel contains a great deal of generic support code which does much of the work. But the truth of the matter is that they are hard. There are lots of workloads to support, the performance demands are strong, and there tend to be lots of processes running in parallel. The creation of a new filesystem is done as a labor of love; it's generally hard to justify from a business perspective. This reality is reflected in the fact that almost nobody is investing in filesystem work currently, with the one high-profile exception being Sun and its ZFS work. But, Ted noted, that work has cost them a lot, and it's not clear that they have gotten a return which justifies that investment. Hopefully the considerable amount of work which has gone into Linux filesystem development will have a more obvious payback.

Comments (24 posted)

Uprobes: not quite there yet

By Jonathan Corbet
January 20, 2010
Tracing support in Linux has made a great deal of progress over the course of the last year or so. One important feature still lacks support in the mainline kernel, though: seamless, combined tracing of user-space execution along with the kernel. The subsystem which is meant to support this feature - utrace - has run into a number of roadblocks on its way into the mainline. Now a higher-level feature, uprobes, has been proposed as a solution for dynamic probing of user-space programs. All told, the combination shows a lot of progress toward inclusion, but the resulting discussion suggests that there are still problems to be overcome before this code will be merged.

This version of uprobes is actually two independent modules which address the problem at different levels. The lower-level piece is called "UBP," for user-space breakpoints; its job is to handle the actual placement of probes into user-space processes. The developers reasoned that there might be additional users of user-space probes in the future, so the facilities for the placement and removal of those probes were carved out separately.

On top of UBP is the actual uprobes code, which handles higher-level details. Uprobes arbitrates between multiple users of breakpoints, even if two users want to place a probe at the same location. It uses utrace to ensure that processes are not running in an area which is about to have a probe inserted, and deals with the case of multiple processes running the same code where some are being traced and others are not. The uprobe code is also in charge of actually calling the probe function when a probe is hit and recovering properly if that function behaves poorly.

This separation is the first point of contention; Peter Zijlstra (who has been the main reviewer of this code so far) sees uprobes as an unnecessary glue layer which could be eliminated. Peter would rather see any needed features pushed down into UBP, after which the higher-level code could be dropped. The uprobes developers disagree, though, saying that the functions implemented at that level are necessary and cannot really be eliminated. This part of the discussion kind of died out, but it doesn't look like the developers are inclined to make major changes here.

The next problem is with the implementation of the probes themselves. When a probe is placed in a user-space program, the instruction at the probed location is overwritten by a breakpoint. When the breakpoint is hit, the probe handler function is invoked; once it returns, the replaced instruction must be executed somehow. A simple implementation would put that instruction back into its original location, single-step through it, then restore the breakpoint once again. That approach fails, though, if there is a second process (or thread) running the probed code. If that second process executes through the probed area while the probe has been removed, the associated event will be lost.
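
The mechanics of placing and removing a breakpoint can be illustrated in user space with the ptrace() interface that ordinary debuggers use; this sketch is an analogy, not the uprobes implementation - it is x86-specific, assumes the tracee is already attached and stopped, and omits error handling:

    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Overwrite the first byte of the instruction at addr with the
       int3 breakpoint opcode (0xcc), saving the original word. */
    static long insert_breakpoint(pid_t pid, unsigned long addr, long *saved)
    {
        long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);

        *saved = word;
        return ptrace(PTRACE_POKETEXT, pid, (void *)addr,
                      (void *)((word & ~0xffL) | 0xcc));
    }

    /* Put the original instruction back so it can be executed. */
    static long remove_breakpoint(pid_t pid, unsigned long addr, long saved)
    {
        return ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)saved);
    }

The simple approach described above amounts to calling remove_breakpoint(), single-stepping one instruction, then calling insert_breakpoint() again; it is during that window that a second process can run through the probed code unnoticed.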

So the uprobes developers took a separate approach, called "single-step out of line" or "execute out of line" (XOL). A separate region of memory is set up for the purpose of holding instructions which have been displaced by probe breakpoints. When one of those instructions is to be executed, it is run (again, in single-step mode) out of this separate area; after that, control returns to the instruction following the probe location. This solution allows a probe to work with multiple processes at the same time.

The problem is this: the memory containing the XOL instructions must be in the probed process's address space. So the XOL code adds a virtual memory area (VMA) to the process, reserving a range of address space for this purpose. This works, but it strikes some observers as inelegant at best, and potentially disruptive at worst. Currently, the layout of a process's address space is almost entirely under the control of the process itself. The injection of a special kernel VMA can perturb the process's control of its address space, causing other VMAs to move or conflicting with an attempt by the process to place a VMA at a specific location. Debuggers are often known to distort application behavior (leading to "heisenbugs" which disappear when somebody attempts to observe them directly), but tracing, which is meant to work on production systems, should really minimize such distortions. Peter also dislikes the precedent of kernel code messing with a process's address space. Finally, on 32-bit systems, losing even a small amount of address space to a kernel function is likely to be unwelcome in a number of situations.

Solving this problem is not necessarily easy. Peter seems to favor emulating the displaced instruction, but that would require the implementation of a full instruction emulator in the kernel. That code would be large, architecture-specific, and error prone. There was some discussion of trying to run the instruction in kernel space, but doing that securely appears to be a challenging task. After an extended discussion, the prevailing opinion seemed to be something like that expressed by Pekka Enberg:

I guess we're looking at few megabytes of the address space for normal scenarios which doesn't seem too excessive... I don't like the idea but if the performance benefits are real (are they?), maybe it's a worthwhile trade-off.

In the end, perhaps the kernel developers will hold their noses and merge this approach, but chances are they'll need to talk about it for a while yet first.

The uprobes code comes with an ftrace plugin which provides an interface to user space for the placement and management of probes. The problem here is that the kernel developers have, for all practical purposes, decided that there will be no more ftrace plugins added to the kernel. New features are supposed to go through the perf events subsystem instead, which is seen as having a better-designed interface. So the current ftrace plugin will almost certainly have to be redone for perf events before this code can go in.

The ftrace plugin also associates user-space probes with specific processes of interest. Peter argues that it makes more sense to hook probes onto executable files, then make the process association by way of the VMA structure when the file is mapped. Existing features in the kernel, perhaps supplemented with a simple hook or two, would make it easy for uprobes to find processes running code from a file and to deal with process comings and goings while the probes are in place. The uprobes developers have not said as much, as of this writing, but it seems likely that the API could be reworked in those terms.

Then, there is the nagging issue of the utrace layer, which has not yet found its way into the mainline. It has recently been added to linux-next, but there is some discomfort with that and it's not clear if it will remain there or not.

All of this may seem like a lot of obstacles to the merging of this code, but it also represents a step forward. The road into the mainline has been long for utrace; a final detour or two seems about par for the course. The existence of uprobes as an in-kernel user of utrace might help its cause, once uprobes itself passes muster. Assuming consensus on these issues can be reached, it should be possible to make a last round of changes and be quite close to getting the code merged - though it might be difficult to get this done for the 2.6.34 merge window. But, if things go well, we should have user-space probing not too much later than that.

Comments (9 posted)

Secrets of the Ftrace function tracer

January 20, 2010

This article was contributed by Steven Rostedt

Probably the most powerful tracer derived from Ftrace is the function tracer. It has the ability to trace practically every function in the kernel. It can be run not just for debugging or analyzing, but also to learn and observe the flow of the Linux kernel.

Two previous articles, Debugging the Linux Kernel Using Ftrace parts I and II, explain some of the basic features of Ftrace and the function tracer; this article is written with the assumption that the reader has already read them. As with the previous articles, the examples in this article expect that the user has already changed to the debug file system tracing directory. The kernel configuration options that need to be enabled to follow the examples in this article are:

  • CONFIG_FUNCTION_TRACER
  • CONFIG_DYNAMIC_FTRACE
  • CONFIG_FUNCTION_GRAPH_TRACER

Note that the CONFIG_HAVE_FUNCTION_TRACER, CONFIG_HAVE_DYNAMIC_FTRACE, and CONFIG_HAVE_FUNCTION_GRAPH_TRACER options are enabled automatically when the architecture supports the corresponding feature; do not confuse them with the options listed above. The features are only enabled when the listed configuration options are enabled, not when only the _HAVE_ options are.

As shown in the previous articles, here is a quick example of how to enable the function tracer.

   [tracing]# echo function > current_tracer
   [tracing]# cat trace
          <idle>-0     [000] 1726568.996435: hrtimer_get_next_event <-get_next_timer_interrupt
          <idle>-0     [000] 1726568.996436: _spin_lock_irqsave <-hrtimer_get_next_event
          <idle>-0     [000] 1726568.996436: _spin_unlock_irqrestore <-hrtimer_get_next_event
          <idle>-0     [000] 1726568.996437: rcu_needs_cpu <-tick_nohz_stop_sched_tick
          <idle>-0     [000] 1726568.996438: enter_idle <-cpu_idle
          ...

The above shows the process name (<idle>), the PID (0), the CPU that the trace executed on ([000]), a time-stamp in seconds with the decimal places down to microseconds (1726568.996435), the function being traced (hrtimer_get_next_event), and the parent that called that function (get_next_timer_interrupt).

Function filtering

Running the function tracer can be overwhelming. The amount of data may be vast, and very hard for the human brain to absorb. Fortunately, Ftrace provides a way to limit what you see. Two files let you limit which functions are traced:

   set_ftrace_filter
   set_ftrace_notrace

These filtering features depend on the CONFIG_DYNAMIC_FTRACE option. As explained in the previous articles, when this option is enabled all of the mcount caller locations are stored and, at boot time, converted into NOPs. These locations are saved and used to enable tracing when the function tracer is activated. But this also has a nice side effect: not all functions must be enabled. The above files determine which functions get enabled and which do not.

When any functions are listed in set_ftrace_filter, only those functions will be traced. This helps the performance of the system when a trace is active. Tracing every function incurs a large overhead, but when set_ftrace_filter is used, only the functions listed in that file have their NOPs changed to call the tracer. Depending on which functions are being traced, having just a couple of hundred functions enabled is hardly noticeable.

The set_ftrace_notrace file is the opposite of set_ftrace_filter: functions listed in it will not be traced. Some functions show up quite often; not only does tracing them slow down the system, but they can also fill up the trace buffer and make it harder to analyze the functions you care about. Functions such as rcu_read_lock() and spin_lock() fall into this category.

The process to add functions to these files typically uses bash redirection. Using the symbol '>' will remove all existing functions in the file and add what is being echoed into the file. Appending to the file using '>>' will keep the existing functions and add new ones.

   [tracing]# echo sys_read > set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   sys_read
   [tracing]# echo sys_write >> set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   sys_write
   sys_read
   [tracing]# echo sys_open > set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   sys_open

To remove all functions just echo a blank line into the filter file.

   [tracing]# echo sys_read sys_open sys_write > set_ftrace_notrace 
   [tracing]# cat set_ftrace_notrace
   sys_open
   sys_write
   sys_read
   [tracing]# echo > set_ftrace_notrace
   [tracing]# cat set_ftrace_notrace
   [tracing]#

The functions listed in these files can also be set on the kernel command line. The options ftrace_notrace and ftrace_filter will preset these files by listing a comma delimited set of functions.

   ftrace_notrace=rcu_read_lock,rcu_read_unlock,spin_lock,spin_unlock
   ftrace_filter=kfree,kmalloc,schedule,vmalloc_fault,spurious_fault

Functions added by the kernel command line set what will be in the corresponding filter files. These options only pre-load the files; functions can still be removed or added using the bash redirection explained above.

The functions listed in set_ftrace_notrace take precedence. That is, if a function is listed in both set_ftrace_notrace and set_ftrace_filter, that function will not be traced.

Wildcard filters

A list of functions that can be added to the filter files is shown in the available_filter_functions file. This list of functions was derived from the list of stored mcount callers previously mentioned.

   [tracing]# cat available_filter_functions | head -8
   _stext
   do_one_initcall
   run_init_process
   init_post
   name_to_dev_t
   create_dev
   T.627
   set_personality_64bit

You can grep this file and redirect the result into one of the filter files:

   [tracing]# grep sched available_filter_functions > set_ftrace_filter
   [tracing]# cat set_ftrace_filter | head -8
   save_stack_address_nosched
   mce_schedule_work
   smp_reschedule_interrupt
   native_smp_send_reschedule
   sys32_sched_rr_get_interval
   sched_avg_update
   proc_sched_set_task
   sys_sched_get_priority_max

Unfortunately, adding lots of functions to the filtering files is slow and you will notice that the above grep took several seconds to execute. This is because each function name written into the filter file will be processed individually. The above grep produces over 300 function names. Each of those 300 names will be compared (using strcmp()) against every function name in the kernel, which is quite a lot.

   [tracing]# wc -l  available_filter_functions 
   24331 available_filter_functions

So the grep above caused set_ftrace_filter to generate over 300 * 24331 (7,299,300) comparisons!

Fortunately, these files also take wildcards; the following glob expressions are valid:

  • value* - Select all functions that begin with value.
  • *value* - Select all functions that contain the text value.
  • *value - Select all functions that end with value.

The kernel contains a rather simple parser, and will not process value*value in the expected way: it will ignore the second value and select all functions that start with value, regardless of how they end. Wildcards passed to the filter files are processed directly against each available function, which is much faster than passing in a long list of individual function names.

Because the star (*) is also used by bash, it is best to wrap the input with quotes:

   [tracing]# echo set* > set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   #### all functions enabled ####
   [tracing]# echo 'set*' > set_ftrace_filter
   [tracing]# cat set_ftrace_filter | head -5
   set_personality_64bit
   set_intr_gate_ist
   set_intr_gate
   set_intr_gate
   set_tsc_mode

The filters can also select only those functions that belong to a specific module by using the 'mod' command in the input to the filter file:


   [tracing]# echo ':mod:tg3' > set_ftrace_filter
   [tracing]# cat set_ftrace_filter |head -8
   tg3_write32
   tg3_read32
   tg3_write_flush_reg32
   tw32_mailbox_flush
   tg3_write32_tx_mbox
   tg3_read32_mbox_5906
   tg3_write32_mbox_5906
   tg3_disable_ints

This is very useful if you are debugging a single module, and only want to see the functions that belong to that module in the trace.

In the earlier articles, enabling and disabling recording to the ring buffer was done using the tracing_on file and the tracing_on() and tracing_off() kernel functions. But if you do not want to recompile the kernel, and you want to stop the tracing at a particular function, set_ftrace_filter has a method to do so.
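
For reference, the recompile approach looks something like the sketch below. The driver function here is hypothetical, but tracing_off() is the same kernel function described in the earlier articles; it freezes the ring buffer so that the events leading up to a problem are preserved:

    #include <linux/kernel.h>

    /* Hypothetical driver code: stop tracing when an error is seen */
    static void mydrv_handle_error(int status)
    {
            if (status < 0) {
                    tracing_off();
                    printk(KERN_ERR "mydrv: error %d, trace stopped\n",
                           status);
            }
    }

The set_ftrace_filter method described next achieves the same effect without rebuilding anything.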

The format of the command to have the function trace enable or disable the ring buffer is as follows:

   function:command[:count]

This will execute the command at the start of the function. The command is either traceon or traceoff, and an optional count can be added to have the command only execute a given number of times. If the count is left off (including the leading colon) then the command will be executed every time the function is called.

A while back, I was debugging a change I had made to the kernel that was causing segmentation faults in some programs. I was having a hard time catching the trace, because by the time I was able to stop the trace after seeing the segmentation fault, the data had already been overwritten. But the backtrace on the console showed that the function __bad_area_nosemaphore was being called. I was then able to stop the tracer with the following command:


  [tracing]# echo '__bad_area_nosemaphore:traceoff' > set_ftrace_filter
  [tracing]# cat set_ftrace_filter
  #### all functions enabled ####
  __bad_area_nosemaphore:traceoff:unlimited
  [tracing]# echo function > current_tracer

Notice that functions with commands do not affect the general filters. Even though a command has been added to __bad_area_nosemaphore, the filter still allowed all functions to be traced. Commands and filter functions are separate and do not affect each other. With the above command attached to the function __bad_area_nosemaphore, the next time the segmentation fault occurred, the trace stopped and contained the data I needed to debug the situation.

Removing functions from the filters

As stated earlier, echoing in nothing with '>' will clear the filter file. But what if you only want to remove a few functions from the filter?

   [tracing]# cat set_ftrace_filter > /tmp/filter
   [tracing]# grep -v lock /tmp/filter > set_ftrace_filter

The above works, but as mentioned, it may take a while to complete if there were several functions already in set_ftrace_filter. The following does the same thing but is much faster:

   [tracing]# echo '!*lock*' >> set_ftrace_filter

The '!' symbol will remove functions listed in the filter file. As shown above, the '!' works with wildcards, but it can also be used with a single function. Since '!' has special meaning in bash, it must be wrapped with single quotes or bash will try to execute what follows it. Also note that '>>' is used here. If you make the mistake of using '>' you will end up with no functions in the filter file.

Because the commands and filters do not interfere with each other, clearing the set_ftrace_filter will not clear the commands. The commands must be cleared with the '!' symbol.

   [tracing]# echo 'sched*' > set_ftrace_filter
   [tracing]# echo 'schedule:traceoff' >> set_ftrace_filter
   [tracing]# cat set_ftrace_filter | tail -5
   schedule_console_callback
   schedule_bh
   schedule_iso_resource
   schedule_reallocations
   schedule:traceoff:unlimited
   [tracing]# echo > set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   #### all functions enabled ####
   schedule:traceoff:unlimited
   [tracing]# echo '!schedule:traceoff' >> set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   #### all functions enabled ####
   [tracing]#

This may seem awkward, but having '>' and '>>' affect only the functions to be traced, and not the function commands, actually simplifies the control between filtering functions and adding and removing commands.

Tracing a specific process

Perhaps you only need to trace a specific process, or set of processes. The file set_ftrace_pid lets you specify specific processes that you want to trace. To just trace the current thread you can do the following:

   [tracing]# echo $$ > set_ftrace_pid

The above will set the function tracer to only trace the bash shell that executed the echo command. If you want to trace a specific process, you can create a shell script wrapper program.

   [tracing]# cat ~/bin/ftrace-me
   #!/bin/sh
   DEBUGFS=`grep debugfs /proc/mounts | awk '{ print $2; }'`
   echo $$ > $DEBUGFS/tracing/set_ftrace_pid
   echo function > $DEBUGFS/tracing/current_tracer
    exec "$@"
   [tracing]# ~/bin/ftrace-me ls -ltr

Note, you must clear the set_ftrace_pid file if you want to go back to generic function tracing after performing the above.

   [tracing]# echo -1 > set_ftrace_pid

What calls a specific function?

Sometimes it is useful to know what is calling a particular function. The immediate predecessor is helpful, but an entire backtrace is even better. The function tracer contains an option that will create a backtrace in the ring buffer for every function that is traced. Since creating a backtrace for every function has a large overhead, which could livelock the system, care must be taken when using this feature. Imagine the timer interrupt on a slower system running with HZ set to 1000. It is quite possible that having every function that the timer interrupt calls produce a backtrace could take one millisecond to complete. By the time the timer interrupt returns, a new one will be triggered before any other work can be done, which leads to a livelock.

To use the function tracer backtrace feature, it is imperative that the functions being called are limited by the function filters. The option to enable the function backtracing is unique to the function tracer and activating it can only be done when the function tracer is enabled. This means you must first enable the function tracer before you have access to the option:

   [tracing]# echo kfree > set_ftrace_filter
   [tracing]# cat set_ftrace_filter
   kfree
   [tracing]# echo function > current_tracer
   [tracing]# echo 1 > options/func_stack_trace
   [tracing]# cat trace | tail -8
    => sys32_execve
    => ia32_ptregs_common
                cat-6829  [000] 1867248.965100: kfree <-free_bprm
                cat-6829  [000] 1867248.965100: <stack trace>

    => free_bprm
    => compat_do_execve
    => sys32_execve
    => ia32_ptregs_common
   [tracing]# echo 0 > options/func_stack_trace
   [tracing]# echo > set_ftrace_filter

Notice that I was careful to cat the set_ftrace_filter before enabling the func_stack_trace option, to ensure that the filter was in place. At the end, I disabled options/func_stack_trace before clearing the filter. Also note that the option is non-volatile; that is, even if you switch to another tracer plugin in current_tracer, the option will still be enabled if you re-enable the function tracer.

The function_graph tracer

The function tracer is very powerful, but it may be difficult to understand the linear format that it produces. Frederic Weisbecker has extended the function tracer into the function_graph tracer. The function_graph tracer piggybacks on most of the code created by the function tracer, but adds its own hook in the mcount call. Because it still uses the mcount calling method, most of the function filtering explained above also applies to the function_graph tracer, with the exception of the traceon/traceoff commands and set_ftrace_pid (although the latter may change in the future).

The function_graph tracer was also explained in the previous articles, but the set_graph_function file was not described. The func_stack_trace option used in the previous section can show what calls a function; set_graph_function, instead, can be used to see what a function calls:

   [tracing]# echo kfree > set_graph_function
   [tracing]# echo function_graph > current_tracer
   [tracing]# cat trace
   # tracer: function_graph
   #
   # CPU  DURATION                  FUNCTION CALLS
   # |     |   |                     |   |   |   |
    0)               |  kfree() {
    0)               |    virt_to_cache() {
    0)               |      virt_to_head_page() {
    0)   0.955 us    |        __phys_addr();
    0)   2.643 us    |      }
    0)   4.299 us    |    }
    0)   0.855 us    |    __cache_free();
    0)   ==========> |
    0)               |    smp_apic_timer_interrupt() {
    0)               |      apic_write() {
    0)   0.849 us    |        native_apic_mem_write();
    0)   2.853 us    |      }
   [tracing]# echo > set_graph_function

This displays the call graph performed only by kfree(). The "==========>" shows that an interrupt happened during the call. The trace records all functions within the kfree() block, even those functions called by an interrupt that triggered while in the scope of kfree().

The function_graph tracer shows the time a function took in the duration field. In the previous articles, it was mentioned that only the leaf functions, the ones that do not call other functions, have an accurate duration, since the duration of parent functions also includes the overhead of the function_graph tracer calling the child functions. By using the set_ftrace_filter file, you can force any function into becoming a leaf function in the function_graph tracer, and this will allow you to see an accurate duration of that function.

   [tracing]# echo smp_apic_timer_interrupt > set_ftrace_filter
   [tracing]# echo function_graph > current_tracer
   [tracing]# cat trace | head
   # tracer: function_graph
   #
   # CPU  DURATION                  FUNCTION CALLS
   # |     |   |                     |   |   |   |
    1)   ==========> |
    1) + 16.433 us   |  smp_apic_timer_interrupt();
    1)   ==========> |
    1) + 25.897 us   |  smp_apic_timer_interrupt();
    1)   ==========> |
    1) + 24.764 us   |  smp_apic_timer_interrupt();

The above shows that the timer interrupt takes between 16 and 26 microseconds to complete.

Function profiling

oprofile and perf are very powerful profiling tools that take periodic samples of the system and can show where most of the time is spent. With the function profiler, it is possible to take a good look at the actual function execution and not just samples. If CONFIG_FUNCTION_GRAPH_TRACER is configured in the kernel, the function profiler will use the function graph infrastructure to record how long each function executes. If only CONFIG_FUNCTION_TRACER is configured, the function profiler will simply count the number of times each function is called.


   [tracing]# echo nop > current_tracer
   [tracing]# echo 1 > function_profile_enabled
   [tracing]# cat trace_stat/function0 | head
     Function                               Hit    Time            Avg
     --------                               ---    ----            ---
     schedule                             22943    1994458706 us     86931.03 us 
     poll_schedule_timeout                 8683    1429165515 us     164593.5 us 
     schedule_hrtimeout_range              8638    1429155793 us     165449.8 us 
     sys_poll                             12366    875206110 us     70775.19 us 
     do_sys_poll                          12367    875136511 us     70763.84 us 
     compat_sys_select                     3395    527531945 us     155384.9 us 
     compat_core_sys_select                3395    527503300 us     155376.5 us 
     do_select                             3395    527477553 us     155368.9 us 

The above also includes the time that a function was preempted or that schedule() was called and the task was swapped out. This may seem useless, but it does give an idea of which functions get preempted often. Ftrace also includes options that allow you to have the function graph tracer ignore the time that the task was scheduled out.

   [tracing]# echo 0 > options/sleep-time
   [tracing]# echo 0 > function_profile_enabled
   [tracing]# echo 1 > function_profile_enabled
   [tracing]# cat trace_stat/function0  | head
     Function                               Hit    Time            Avg
     --------                               ---    ----            ---
     default_idle                          2493    6763414 us     2712.962 us 
     native_safe_halt                      2492    6760641 us     2712.938 us 
     sys_poll                              4723    714243.6 us     151.226 us  
     do_sys_poll                           4723    692887.4 us     146.704 us  
     sys_read                              9211    460896.3 us     50.037 us   
     vfs_read                              9243    434521.2 us     47.010 us   
     smp_apic_timer_interrupt              3940    275747.4 us     69.986 us   
     sock_poll                            80613    268743.2 us     3.333 us    

Note that the sleep-time option contains a "-" and is not sleep_time.

Disabling the function profiler and then re-enabling it causes the numbers to reset. The list is sorted by the total time, but using scripts you can easily sort by any of the columns. The trace_stat/function0 file only represents CPU 0; there is a trace_stat/function# file for each CPU on the system. All functions that are traced and have been hit appear in these files.

   [tracing]# cat trace_stat/function0  | wc -l
   2978

Functions that were not hit are not listed. The above shows that 2978 functions were hit since I started the profiling.

Another option that affects profiling is graph-time (again with a "-"), which is enabled by default. When enabled, the time reported for a function includes the time spent in all of the functions that it called; this is why, in the output of the above examples, several system calls are listed with the highest averages. When disabled, the reported times only cover the execution of the function itself, and do not contain the times of functions called from it:

   [tracing]# echo 0 > options/graph-time
   [tracing]# echo 0 > function_profile_enabled
   [tracing]# echo 1 > function_profile_enabled
   [tracing]# cat trace_stat/function0  | head
     Function                               Hit    Time            Avg
     --------                               ---    ----            ---
     mwait_idle                           10132    246835458 us     24361.96 us 
     tg_shares_up                        154467    389883.5 us     2.524 us    
     _raw_spin_lock_irqsave              343012    263504.3 us     0.768 us    
     _raw_spin_unlock_irqrestore         351269    175205.6 us     0.498 us    
     walk_tg_tree                         14087    126078.4 us     8.949 us    
     __set_se_shares                     274937    88436.65 us     0.321 us    
     _raw_spin_lock                      100715    82692.61 us     0.821 us    
     kstat_irqs_cpu                      257500    80124.96 us     0.311 us    

Note that both sleep-time and graph-time also affect the duration times displayed by the function_graph tracer.

Conclusion

The function tracer is very powerful with lots of different options. It is already available in mainline Linux, and hopefully will be enabled by default in most distributions. It allows you to see into the depths of the kernel and with its arsenal of features, gives you a good idea of why things are happening the way they do. Start using the function tracer to open up the black box that we call the kernel. Have fun!

Comments (6 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Security-related

Page editor: Jonathan Corbet

Distributions

News and Editorials

LCA: Cooperative management of package copyright and licensing data

By Jonathan Corbet
January 20, 2010
Kate Stewart is the manager of the PowerPC team at Freescale. As such, she has a basic customer service problem to solve: people who buy a board from Freescale would like to have some sort of operating system to run on it. That system, of course, will be Linux; satisfying this requirement means that Freescale must operate as a sort of Linux distributor. At her linux.conf.au talk, Kate talked about a new initiative aimed at helping distributors to ensure that they are compliant with the licenses of the software they are shipping.

Early GPL enforcement actions against companies like Cisco were, arguably, misplaced: Cisco was just gluing its nameplate onto hardware (and software) supplied to it by far-eastern manufacturing operations. The original GPL violation was committed by the original manufacturers who incorporated GPL-licensed software and failed to live up to the source distribution requirements. There was a clear purpose behind targeting companies like Cisco, though: the unpleasantness of dealing with GPL compliance problems was meant to get them to require compliance from their suppliers, which were otherwise harder to reach. Companies seem to have gotten the message; Kate noted that the supply chain is now routinely requiring certification of license compliance from suppliers. So Freescale needs to stay on top of license compliance in order to be able to sell its products; your editor suspects this may be a more powerful motivation than the mere need to avoid copyright infringement.

One common worry related to license compliance, of course, is that somebody might have somehow included proprietary code into a freely-licensed package. More common, though, are simple license compatibility issues, such as the inclusion of a GPL-licensed file in an ostensibly BSD-licensed package. Finding this kind of problem requires the examination of every file distributed with a package - and there are a lot of packages with a great many files out there. It's a lot of work.

Freescale is certainly not the only Linux distributor, and it is not the only one facing this problem; anybody who is distributing software (free or otherwise) is (or at least should be) going through a similar process. That leads to a lot of duplicated work which really could be shared. At the first LinuxCon event in September 2009, a number of interested parties got together to try to figure out if there was a way that the license validation and compliance work could be carried out in a more community-oriented manner.

The problem may seem simple, but there are a lot of details to deal with, starting with the large number of ways of analyzing projects. At one end, commercial tools provided by companies like Black Duck and Palamida can automate the task of finding a number of common licensing problems. But there are also many homegrown tools and spreadsheets in use throughout the industry. The end result is predictable: lots of incompatible data, inconsistent work, and duplicated effort.

Given that, it's not surprising that this new (and, apparently, still unnamed) project is starting with an attempt to standardize the encoding of information about packages. This information comes at a number of levels:

  • The identification of the project as a whole, including metadata on the results of any analysis which has been done. Included here is a formal name for the package, its published location, the stated license (and any possible alternative licenses), how the package is used (is it a standalone program or a library?), the copyright holders and dates of copyright, etc.

  • Package-specific facts: the version that was analyzed, hashes for each of the included files, how the information about the package was generated, and so on. There will also be the equivalent of a "signed off by" tag whereby people doing analysis on a package would certify their results.

  • File-specific information for every file found in the package: its full path name, the type of the file, the license governing it, copyright information, and so on.

Once the process of standardizing the encoding of this information has been completed, the project can move on to the second phase, which is the creation of a common site to host information stored in that format. The idea here is to make it easy to look up and share information on specific packages, and to make any known problems publicly visible.

All of that, in turn, has a goal beyond the simple sharing of work: they would also like to improve the quality of the next generation of packages. By making public review of licensing information easier, it is hoped that problems will be found (and fixed) sooner. One gets the sense that companies like Freescale are getting tired of finding licensing issues in packages which are scheduled to ship in a few days. A related goal is to make package maintainers more aware of where their code is coming from. As licensing issues are found in a public review process, maintainers will, hopefully, begin to pay more attention and these issues will become less common.

The project is still in an early stage; there is a mailing list set up on the FOSSBazaar site, but not a whole lot else. The dreaded regular conference call will be established in the near future. The group hopes to create a proposed standard within the next few months; the Linux Foundation will be helping with legal review to ensure that all of the appropriate bases are covered. The current plan is to get the first version of the standard published in August, 2010.

During the question period, Andrew Bartlett expressed his dislike for the central database concept. Centrally-maintained information, he says, will soon go stale. It would be better to create a format for a license metadata file which could be maintained and shipped with the project itself; he said he would be glad to carry such information with the Samba distribution. That is an idea which will likely be carried back to the working group for consideration.

Licensing is an important component of the free software development process, and ensuring that our licenses are complied with is incumbent upon anybody engaged in software distribution. But all of the associated due diligence work really only has to be done once; like the development of the software itself, it can be managed in a community-oriented manner. The formalization and organization of the associated information is a logical first step toward bringing a community process to this important - if not necessarily fun - task.

Comments (10 posted)

New Releases

AV Linux 3.0R1 Released!

The first revision of AV Linux 3.0 is available. "On the heels of AV Linux 3.0, version 3.0R1 (R1=Revision 1) has been released. I, better than anyone perhaps realize the inconvenience of a new version so quickly, it is my hope that this is the best move in the long run to provide a stable base that has a broader possible range of installation and can be better maintained with updated packages over the course of a longer "shelf life". This fixes many of the installation issues created by 3.0 as well as streamlining and drastically reducing the ISO size down to just over a Gigabyte. My sincere thanks to the AV Linux users who were guinea pigs and helped to test and provide feedback on 3.0R1 before it's release."

Full Story (comments: none)

openSUSE releases the openSUSE Build Service Beta 2

openSUSE has released the second beta of the openSUSE Build Service (OBS). "This release is now feature complete and also the API should be final by now. Biggest changes since beta 1 are: * Switch to Ruby on Rails 2.3.5 * The branch call is doing full copies of packages now, not just _link files anymore * Repository status + dirty flag is calculated and displayed in the web interface (and with osc 0.125) * many bugfixes esp. in api and webui * Workers can get auto configured via SLP."

Comments (none posted)

Open Xange 2010

The Xange team has announced the release of Open Xange 2010: the very best of Xange, only with OSS - Open Source Software. Xange is a Fedora remix with KDE.

Comments (none posted)

Pardus Linux 2009.1 arrives - Update (The H)

The H covers the release of Pardus Linux 2009.1. "The Pardus developers have announced the release of Pardus Linux 2009.1. Pardus is a Turkish distribution sponsored by The National Research Institute of Electronics and Cryptology (UEKAE) and includes several unique features: PiSi (Packages Installed Successfully, as Intended), an efficient and small package management system for installing and managing software implemented using Python, and COMAR, their own COnfiguration MAnageR that includes the Mudar init system for Pardus."

Comments (none posted)

Puredyne 9.10 released

Puredyne 9.10 is out. "Puredyne is a GNU/Linux live distribution aimed at creative people, looking for tools outside the standard. It provides the best experimental creative applications alongside a solid set of graphic, audio and video tools in a fast, minimal package. For everything from sound art to innovative filmmaking." Changes in this release appear to include 64-bit support and the "broth" mechanism designed to make it easy to create derivative distributions.

Full Story (comments: none)

Ubuntu 'Lucid' Alpha 2 released

The second alpha of the Ubuntu 10.04 "Lucid Lynx" release is available for testing. There's a number of changes in this alpha, including the removal of Hal, a 2.6.32 kernel, and no less than three versions of the proprietary NVIDIA drivers. See this page for a detailed view of the changes planned for 10.04 as a whole.

Full Story (comments: 36)

Distribution News

Mandriva Linux

Noteworthy changes in Mandriva Cooker

Frederik Himpe covers some recent changes in Mandriva's development Cooker. "GNOME has been upgraded to the new development release 2.29.5. The Cheese webcam application has been split into different libraries, making it easier for other applications to integrate webcam functionality (like avatar choosers in instant messaging applications). Epiphany now uses an infobar to ask the user for saving website username and password and stores them in the GNOME keyring."

Comments (none posted)

Ubuntu family

Minutes from the Ubuntu Technical Board meeting

Click below for the minutes from the January 12, 2010 meeting of the Ubuntu Technical Board.

Full Story (comments: none)

Developer Membership Board election results

The Ubuntu development team has elected the members of the Developer Membership Board. Click below for the results.

Full Story (comments: none)

Distribution Newsletters

DistroWatch Weekly, Issue 337

The DistroWatch Weekly for January 18, 2010 is out. "With most major distributions in the early stages of preparation for their next stable releases, it seems like a good time to take a look at some of the lesser-known projects. This week we examine Jibbed 5.0.1, a NetBSD-based live CD that boots into an Xfce desktop and includes a number of desktop applications. In the news section, a new community remix of Fedora with media codecs and improved hardware support makes its first appearance, Mandriva updates its development branch with the latest testing builds of GNOME and KDE, the Dreamlinux user community expresses fears over the future of the project, and Arch Linux developers defend the "Arch way" in an interview at OSNews. Also in this week's issue, Jesse Smith explains why free software is sometimes perceived as inferior compared to proprietary applications. Finally, don't miss the statistics section which takes another look at online sales of free operating systems. Happy reading!"

Comments (none posted)

Fedora Weekly News 209

The Fedora Weekly News for January 17, 2010 is out. "This issue starts with announcements from the project, including availability of Open Xange 2010, a Fedora + KDE distro, a change in cmake macro usage, and some feature update pings for Fedora 13. In Ambassador news, details on the FAmSCo chair, vice-chair named. In Quality Assurance news, lots of detail from this past week's QA Team meetings, plus details on an X.org testing request, desktop validation update, and an updated gnome-shell available for testing. In Translation news, a request for submission branches for Anaconda, notice that virt-viewer has been added and is available for translations, and a new coordinator of the Brazilian Portuguese translation team. In Art/Design Team news, notice of the approval of the new Design Spin for Fedora, and updates to the Fedora 13 theming and graphics. This week issue wraps up with the latest security advisories for Fedora 11 and 12. We hope you enjoy Fedora Weekly News 209!"

Full Story (comments: none)

openSUSE Weekly News/106

This issue of the openSUSE Weekly News covers * openSUSE News: OBS supports new branch and merge handling, * Unixmen/srlinuxx: Five useful extensions for Openoffice, * Jussi Kekkonen (Tm_T): KDE Software Compilation 4.4 RC1 Codename "Cornelius" released, * Sirko Kemter: Building an openSUSE Art-Team, * TuxRadar: The best Linux desktop search tools, and more.

Comments (none posted)

Ubuntu Weekly Newsletter #176

The Ubuntu Weekly Newsletter for January 16, 2010 is out. "In this issue we cover: Ubuntu 10.4 Lucid Lynx Alpha 2, Ubuntu Developer Week, Ubuntu User Day, new Ubuntu Women leadership, and Free Culture Showcase."

Full Story (comments: none)

Page editor: Rebecca Sobol

Development

0MQ: A new approach to messaging

January 20, 2010

This article was contributed by Martin Sustrik & Martin Lucina

BSD sockets have been used in thousands of applications over the years, but they suffer from some limitations. The low-level nature of the socket API leads developers to reimplement the same functionality on top of sockets over and over again. Alternatives exist in the form of various "I/O frameworks" and "enterprise messaging systems," but both of these approaches have their own sets of drawbacks. The former are generally bound to certain programming languages or paradigms, while the latter tend to be bloated, proprietary solutions with resident daemons that hog system resources.

0MQ ("Zero-Em-Queue") is a messaging system that tackles these issues by taking a different approach. Instead of inventing new APIs and complex wire protocols, 0MQ extends the socket API, eliminating the learning curve and allowing a network programmer to master it in a couple of hours. The wire protocols are simplistic, even trivial. Performance matches and often exceeds that of raw sockets.

A client/server example

Let's have a look at a trivial example of 0MQ usage. Say we want to implement an SQL server. The networking part of the code base is fairly complex: we have to manage multiple connections in a non-blocking fashion, a pool of worker threads is needed to send large result sets in the background so that the SQL engine can process new requests in the meantime, and so on.

Here's how we can accomplish this using the 0MQ C language bindings:

    #include <zmq.h>

    int main () 
    {
        void *ctx, *s;
        zmq_msg_t query, resultset;

        /* Initialise 0MQ context, requesting a single application thread
           and a single I/O thread */
        ctx = zmq_init (1, 1, 0);

        /* Create a ZMQ_REP socket to receive requests and send replies */
        s = zmq_socket (ctx, ZMQ_REP);

        /* Bind to the TCP transport and port 5555 on the loopback interface */
        zmq_bind (s, "tcp://lo:5555");

        while (1) {
            /* Allocate an empty message to receive a query into */
            zmq_msg_init (&query);

            /* Receive a message, blocks until one is available */
            zmq_recv (s, &query, 0);

            /* Allocate a message for sending the resultset */
        zmq_msg_init (&resultset);

            /* TODO: Process the query here and fill in the resultset */
            
            /* Deallocate the query */
            zmq_msg_close (&query);

            /* Send back our canned response */
            zmq_send (s, &resultset, 0);
            zmq_msg_close (&resultset);
        }
    }

This example shows us several basic principles of 0MQ:

  • 0MQ transports data as messages, represented in this example by zmq_msg_t. 0MQ considers a message to be a discrete unit of transport, and message data as an opaque blob. This is a deliberate simplification compared to systems like CORBA, with their 1000+ pages of core specification. Developers can always use a third-party library such as Google protocol buffers if they do not wish to write custom serialization code.

  • 0MQ supports different socket types which are specified at socket creation time and implement different messaging patterns. In this example we use ZMQ_REP which stands for the replier pattern, meaning that we wish to receive requests from many sources and send replies back to the original requester.

  • zmq_init() has two arguments related to threads. The first argument is the maximum number of application threads that will access the 0MQ API. The second argument specifies the size of the I/O thread pool 0MQ will create and use to retrieve and send messages in the background. For instance, when sending a large result set the send() function will return immediately and the actual work of pushing the data to the network will be offloaded to an I/O thread in the background.

  • For the sake of brevity, error handling has been omitted in this example. The 0MQ C binding uses standard POSIX conventions, so most functions return 0 on success and -1 on failure, with the actual error code being stored in errno. 0MQ also provides a zmq_strerror() function to handle its specific error codes.

The client code is equally simple (C++):

    #include <string.h>
    #include <stdio.h>
    #include <zmq.hpp>

    int main ()
    {
        try {
            // Initialise 0MQ context with one application and one I/O thread
            zmq::context_t ctx (1, 1);

            // Create a ZMQ_REQ socket to send requests and receive replies
            zmq::socket_t s (ctx, ZMQ_REQ);

            // Connect it to port 5555 on localhost using the TCP transport
            s.connect ("tcp://localhost:5555");

            // Construct an example message containing our query
            const char *query_string = "SELECT * FROM mytable";
            zmq::message_t query (strlen (query_string) + 1);
            memcpy (query.data (), query_string, strlen (query_string) + 1);

            // Send the query
            s.send (query);

            // Receive the result
            zmq::message_t resultset;
            s.recv (&resultset);

            // TODO: Process the resultset here
        }
        catch (std::exception &e) {
            // 0MQ throws standard exceptions just like any other C++ API
            printf ("An error occurred: %s\n", e.what());
            return 1;
        }

        return 0;
    }

This example uses the ZMQ_REQ socket type, which specifies the requester messaging pattern, meaning that we wish to send requests - which may be load-balanced among all peers listening for such requests - and to receive replies to those requests. The example also nicely shows how the 0MQ C++ language bindings map onto native language features and allow us to use exceptions for error handling.

One socket, many endpoints: A publish/subscribe example

Now, let's have a look at another common pattern used in messaging, where one application (the publisher) publishes a stream of messages while other interested applications (subscribers) receive and process the messages.

The publisher application (Java):

    import org.zmq.*;

    class publisherApp
    {
        public static void main (String [] args)
        {
            // Initialise 0MQ with a single application and I/O thread
            org.zmq.Context ctx = new org.zmq.Context (1, 1, 0);

            // Create a PUB socket for port 5555 on the eth0 interface
            org.zmq.Socket s = new org.zmq.Socket (ctx, org.zmq.Socket.PUB);
            s.bind ("tcp://eth0:5555");

            for (;;) {
                // Create a new, empty, 10 byte message
                byte msg [] = new byte [10];

                // TODO: Fill in the message here

                // Publish our message   
                s.send (msg, 0);
            }
        }
    }

The subscriber application (Python):

    import libpyzmq

    def main ():
        
        # Initialise 0MQ with a single application and I/O thread
        ctx = libpyzmq.Context (1, 1)

        # Create a SUB socket ...
        s = libpyzmq.Socket (ctx, libpyzmq.SUB)

        # ... ask to subscribe to all messages ...
        s.setsockopt (libpyzmq.SUBSCRIBE , "")

        # ... request the tcp transport with the endpoint myserver.lan:5555
        s.connect ("tcp://myserver.lan:5555")

        while True:
            # Receive one message
            msg = s.recv ()

            # TODO: Process the message here
     
    if __name__ == "__main__":
        main ()

As with our previous examples, we have deliberately omitted error handling and processing code for the sake of brevity. Error handling in both the Java and Python bindings is implemented using native exceptions.

These examples demonstrate the following:

  • Message subscriptions using the setsockopt() call. With this call, the subscriber indicates that it is interested only in the subset of messages from the publisher that start with the specified string. For example, to subscribe only to those messages beginning with ABC we would use the call:

        s.setsockopt (libpyzmq.SUBSCRIBE , "ABC")
    
  • To boost performance, 0MQ subscriptions are plain strings rather than regular expressions. They can still be used for simple prefix-style matching: a subscription to animals. would match messages such as animals.dogs and animals.cats. The sketch following this list shows the same idea through the C binding.
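
As a concrete sketch of our own (not from the article), the prefix subscription could be made through the C binding like this, reusing the endpoint from the Python subscriber:

    #include <string.h>
    #include <zmq.h>

    int main ()
    {
        /* Set up a SUB socket as in the Python subscriber above */
        void *ctx = zmq_init (1, 1, 0);
        void *s = zmq_socket (ctx, ZMQ_SUB);
        zmq_connect (s, "tcp://myserver.lan:5555");

        /* Filter on the "animals." prefix; an empty string would
           subscribe to everything, and further calls add more
           filters to the same socket */
        zmq_setsockopt (s, ZMQ_SUBSCRIBE, "animals.", strlen ("animals."));

        /* ... receive loop as in the Python example ... */
        return 0;
    }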

Now, let's look at a more complex scenario, one often encountered in the stock trading business. We want to send a high-volume message feed of stock prices to multiple applications, some located on our local LAN and others at branch offices connected via slow and expensive WAN links. The message load is so high that sending the feed to each LAN receiver individually over TCP would exhaust our LAN bandwidth.

For the subscribers located on our local LAN, the ideal solution would be to use a multicast transport. 0MQ supports a reliable multicast protocol known as Pragmatic General Multicast (PGM), which suits this purpose ideally. Many LWN readers may not have heard of PGM; without going into too much detail here, we can say that it is an industry-standard protocol, specified in RFC 3208 and implemented mostly by proprietary messaging and operating-system vendors such as Tibco, IBM, and Microsoft. Luckily, the excellent open source OpenPGM implementation exists and is used by 0MQ.

Back to our stock trading example: while a PGM transport is fine on our local LAN, multicast won't work well for our overseas offices, so we want to talk to them using plain old TCP. In terms of code, we want something like this to bind our sending socket to both a TCP port and a multicast group:

    s.bind ("tcp://eth0:5555");
    s.bind ("pgm://eth0;224.0.0.1:5555");

This example shows off two major features of 0MQ. First, the ability to bind or connect a socket to multiple endpoints by simply calling bind() and/or connect() more than once. Second, the use of different underlying transports for a socket. 0MQ supports several transports, of which the most important are tcp, pgm, and inproc (optimized for sending messages between threads within the same application). Using the same API for in-process and remote communication allows us to easily move computations from local worker threads to remote hosts and vice versa.
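
To make that last point concrete, here is a sketch of our own (the endpoint name and thread layout are illustrative) showing the same publish/subscribe API over the inproc transport:

    #include <zmq.h>

    int main ()
    {
        /* Two application threads will use this context: one
           publishing, one subscribing */
        void *ctx = zmq_init (2, 1, 0);

        /* With inproc, the bind() side must be set up before
           the connect() side */
        void *pub = zmq_socket (ctx, ZMQ_PUB);
        zmq_bind (pub, "inproc://prices");

        /* Typically run in a second thread sharing ctx */
        void *sub = zmq_socket (ctx, ZMQ_SUB);
        zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);
        zmq_connect (sub, "inproc://prices");

        /* Messages now flow between the threads with no change
           to the sending or receiving code */
        return 0;
    }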

While the example above works, it results in each overseas receiver creating a separate TCP connection to the publisher in our main office, which may mean the same messages are transferred multiple times over the WAN link. What we would like instead is a proxy application running at each branch office (ideally directly on its edge router, to minimize the number of network hops and thus the latency) that connects to the publisher and re-distributes the messages on the branch office LAN via reliable multicast.

For the scenario described above we can use zmq_forwarder, which is part of the 0MQ distribution. Running zmq_forwarder with a simple XML configuration file describing the inbound and outbound connections is all that is needed.
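
We have not reproduced the configuration file format here; instead, as a rough hand-written sketch of what such a forwarder does internally (our own code, with illustrative addresses), consider:

    #include <zmq.h>

    int main ()
    {
        void *ctx = zmq_init (1, 1, 0);

        /* Inbound: subscribe to everything from the head-office
           publisher over TCP */
        void *in = zmq_socket (ctx, ZMQ_SUB);
        zmq_setsockopt (in, ZMQ_SUBSCRIBE, "", 0);
        zmq_connect (in, "tcp://headoffice.example.com:5555");

        /* Outbound: re-publish on the branch-office LAN via PGM */
        void *out = zmq_socket (ctx, ZMQ_PUB);
        zmq_bind (out, "pgm://eth0;224.0.0.1:5556");

        /* Shovel messages from one socket to the other */
        for (;;) {
            zmq_msg_t msg;
            zmq_msg_init (&msg);
            if (zmq_recv (in, &msg, 0) == 0)
                zmq_send (out, &msg, 0);
            zmq_msg_close (&msg);
        }
    }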

Performance

Of course, none of this is any good if your code runs slowly. Assuming recent, well-tuned hardware, the end-to-end latency for transferring a small message (64 bytes or less) over a 1GbE LAN using 0MQ is approximately 30-40 microseconds. As for throughput, when transferring a single stream of one-byte messages, 0MQ achieves rates of up to 3-4 million messages per second.

Observant readers will note that achieving these throughput figures with raw BSD sockets is impossible. The 0MQ approach is to use "message batching" (sending messages in batches rather than one by one), thus avoiding frantic up-and-down traversals of the networking stack. Thanks to smart batching algorithms, batching has almost no impact on latency.

The memory footprint of 0MQ will be of particular interest to embedded developers, as it is much smaller than that of conventional messaging systems. For instance, on Linux/x86, the core code occupies only a couple of pages of resident memory.

Conclusion

While 0MQ is still a young, rapidly evolving project, it is an interesting and powerful alternative for those who prefer a messaging system that emphasizes simplicity, efficiency, and low footprint over the complex bells and whistles of most current enterprise messaging systems. Particularly noteworthy are 0MQ's extension of the well-known socket API and its ambition to be "just a transport" that gets out of the way, rather than yet another bloated messaging or RPC framework.

0MQ is licensed under the LGPL and can be downloaded from the project website. API documentation is provided in the form of traditional UNIX man pages, and help can be found on the community mailing list or IRC channel. The project also has a Git repository and is accepting contributions from the wider community.

As of this writing, 0MQ runs on Linux, Windows, Solaris, and many other POSIX platforms. Language bindings available include C, C++, Common Lisp, Java, Python, Ruby, and more. Unfortunately, packaging is lagging behind, so community contributions in this area would be very helpful. Due to the closed nature of stock trading systems it's hard to get a handle on actual adoption of 0MQ in the wild, but there is one case study available.

Comments (21 posted)

System Applications

Audio Projects

New Music Player Daemon releases

The Music Player Daemon project has announced the release of mpd 0.15.8 and ncmpc 0.16.1; both releases include bug fixes.

Comments (none posted)

Database Software

MySQL 5.5.1-m2 released

Version 5.5.1-m2 of MySQL has been announced. "The new features in this release are of beta quality. As with any other pre-production release, caution should be taken when installing on production level systems or systems with critical data."

Full Story (comments: none)

PostgreSQL Weekly News

The January 17, 2010 edition of the PostgreSQL Weekly News is online with the latest PostgreSQL DBMS articles and resources.

Full Story (comments: none)

Embedded Systems

Android, Linux & Real-time Development for Embedded Systems (Embedded.com)

Embedded.com has an overview of Android development. The article looks at Android capabilities, particularly for non-phone applications, as well as what is needed to port it to non-ARM architectures. The information about "real-time" is essentially a suggestion to run Android and Linux along with an RTOS. "However, it would be better to think of Android as being a software platform for the construction of smart phones, as it is freely available and highly configurable. To be more precise, it is a software platform for building connected devices. [...] Android is an application framework on top of Linux. We will look at the details of the layers of the framework, shortly. It is supplied as open source code, but does not bind the user with the constraints of the GPL — there is no requirement for developers to make public any code developed using Android."

Comments (1 posted)

Interoperability

Samba 3.4.5 and 3.3.10 released

Two new releases of Samba are available. Samba 3.4.5: "This is the latest stable release of the Samba 3.4 series." Samba 3.3.10: "This is the latest stable release of the Samba 3.3 series."

Comments (none posted)

Web Site Development

moin 1.9.1 released

Version 1.9.1 of moin, a wiki system, has been announced; it includes important security and bug fixes. "See http://moinmo.in/MoinMoinDownload for the release archive and the change log."

Full Story (comments: none)

Tinyproxy 1.8.0 released

Version 1.8.0 of Tinyproxy has been announced. "Tinyproxy is a light-weight HTTP proxy daemon for POSIX operating systems. It is distributed using the GNU GPL license version 2 or above. Designed from the ground up to be fast and yet small, it is an ideal solution for use cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable."

Full Story (comments: none)

Miscellaneous

Announcing CoffeeSaint, a Nagios status viewer

Folkert van Heusden has announced the CoffeeSaint project. "CoffeeSaint is a fully customizable Nagios status viewer. It grabs the status from a Nagios server and displays it in a fullscreen GUI. It is written in Java so it should run on all platforms (tested on Linux, AIX 6.1 and windows xp) capable of running Java 5. Also works with OpenJDK."

Comments (none posted)

SystemTap 1.1 released

Version 1.1 of SystemTap, an infrastructure for gathering information about a running Linux system, has been announced. Changes include: "better support for gcc 4.5 richer DWARF debuginfo, new preprocessor conditional for kernel 'CONFIG_*' testing, improved (experimental) unprivileged user support, new tapsets, better local-vs-global variable warnings, better return codes, bug fixes, and more..."

Full Story (comments: none)

Desktop Applications

Audio Applications

Audacity 1.3.11 released

Version 1.3.11 of the Audacity audio editor has been announced. "This release fixes a number of bugs reported to us in 1.3.10. Thank you to everyone who sent us feedback."

Comments (none posted)

Calendar Software

Lightning 1.0 beta1 now available

Version 1.0 beta1 of Mozilla Lightning has been announced. "The Calendar Project is proud to report, that (finally) the 1.0 beta1 release of Lightning has been completed and is now available via AMO[1]. Nearly 16 months after the 0.9, this release is more than overdue and we're more than happy to get the nearly 500 bugfixes and improvements into the hands of our users."

Full Story (comments: none)

Data Visualization

DISLIN 10.0 released

Version 10.0 of DISLIN has been announced; the software is free only for non-commercial use. "DISLIN is a high-level and easy to use plotting library for displaying data as curves, bar graphs, pie charts, 3D-colour plots, surfaces, contours and maps. Several output formats are supported such as X11, VGA, PostScript, PDF, CGM, WMF, HPGL, TIFF, GIF, PNG, BMP and SVG."

Full Story (comments: 1)

Desktop Environments

GNOME 2.29.5 released

Version 2.29.5 of GNOME has been announced. "Here is the first GNOME release for year 2010 and fifth development release towards our 2.30 release that will happen in March 2010. Your mission is easy: Go download it. Go compile it. Go test it. And go hack on it, document it, translate it, fix it."

Full Story (comments: none)

GNOME Software Announcements

The following new GNOME software has been announced this week: You can find more new GNOME software releases at gnomefiles.org.

Comments (none posted)

Day one at Camp KDE 2010 (KDE.News)

KDE.News has a report from the first day of Camp KDE. "Saturday, the first day of Camp KDE 2010 in San Diego, started with a short introduction by Jeff Mitchell. Jeff, who was the principal organizer of the conference, introduced us to a bit of history about Camp KDE and then went into some statistics about the KDE community. The conclusion was that if we maintain our current rate of growth we'll have about 6 billion developers by 2050. Continuing with this theme, he spoke about current topics in KDE such as the migration to Git and the upcoming release of KDE SC 4.4. Jeff then introduced the talks to follow, including the work on KDE-on-Windows, KOffice and KDE technology on mobile devices."

Comments (5 posted)

KDE Software Announcements

The following new KDE software has been announced this week: You can find more new KDE software releases at kde-apps.org.

Comments (none posted)

Xorg Software Announcements

The following new Xorg software has been announced this week: More information can be found on the X.Org Foundation wiki.

Comments (none posted)

Financial Applications

SQL-Ledger 2.8.28 released

Version 2.8.28 of SQL-Ledger, a web-based accounting system, has been announced. Changes include: "1. Version 2.8.28 2. fixed missing cc, bcc when converting sales order to invoice 3. added default value for exchangerate on reports 4. added additional fields for customer/vendor search".

Comments (none posted)

Mail Clients

Sylpheed 3.0beta6 (development) released

Development version 3.0beta6 of the Sylpheed mail client has been announced. Changes include: "* The bug that IMAP caches get wrongly deleted was fixed. * The copyright year was updated."

Comments (none posted)

Thunderbird 3.0.1 is now available for download

Thunderbird 3.0.1 has been released. It evidently contains security fixes, but the list of fixes does not clearly indicate which. Nevertheless: "We strongly recommend that all Thunderbird users upgrade to this release."

Full Story (comments: 2)

Music Applications

arpage 0.2 alpha released

Version 0.2 alpha of arpage, a JACK-synchronized arpeggiator, has been announced. "The UI is still dead-boring GTK, but I've read back over the LAD threads regarding audio-oriented UI libraries and I'm thinking of investigating libproaudio with the next release."

Full Story (comments: none)

fishnpitch - a realtime JACK MIDI tuner

The first alpha release of fishnpitch is available. "It's a small command line tool that creates JACK MIDI ports and allows tuning to arbitrary scales (via .scl files) with any MIDI capable synthesizers. Incoming note messages are translated via pitch bend and distributed among several midi channels."

Full Story (comments: none)

MusE 1.0.1 released

Version 1.0.1 of MusE, a multi-track virtual studio, has been announced. "Right on the heels of big One-O we've decided to release a minor update with some corrections, main features being some package improvements and a midi timing issue when running under very high priority."

Full Story (comments: none)

Qtractor 0.4.4 released

Version 0.4.4 of Qtractor, an Audio/MIDI multi-track sequencer, has been announced. "Release highlights: * LV2 plug-in support (NEW) * MIDI event list view (NEW) * Expedite audio/MIDI clip import (NEW) * DSSI plug-in output control ports feedback/update (NEW) * JACK transport, MMC, SPP control options (NEW) * Self-bounce/recording (FIX) * Audio/MIDI drift correction (FIX) * Anti-glitch audio micro-fade-in/out ramp smoothing (FIX)"

Full Story (comments: none)

Web Browsers

Firefox 3.6 Release Candidate 2 is available

Version 3.6 rc2 of Firefox has been announced. "An update to the Firefox 3.6 Release Candidate is now available. This second release candidate is available for free download at http://www.mozilla.com/firefox/all-rc.html and has been issued as an automatic update to all Firefox 3.6 Beta and Release Candidate users."

Full Story (comments: none)

Firefox 3.6 privacy policy review

Firefox 3.6 is undergoing a privacy policy review. "Firefox 3.6 has some new features, like Personas being part of the browser itself. As such, we have taken this opportunity to review and update our privacy policy. Privacy and data protection are important issues to the people of Mozilla. So we are looking at all the terms of this policy and thinking about how to best use our privacy policy to benefit users. This means you should expect more significant changes in the future. For example, we are considering ways to significantly shorten and simplify this policy."

Full Story (comments: none)

Mozilla dumps Firefox 3.7 from schedule, changes dev process (ComputerWorld)

ComputerWorld reports on an interview with Mozilla's Mike Beltzner about changes in how Mozilla plans to deliver new features in Firefox. "Rather than add features to Firefox only in once- or twice-a-year upgrades, Mozilla will quietly insert some functionality via its regular security updates, which appear every four to six weeks, said Mike Beltzner, director of Firefox, in an interview Thursday. [...] That means Firefox 3.7, which was slated for a second quarter release, has been dropped from the development schedule, said Beltzner. The next version of the open-source browser after the almost-ready Firefox 3.6 will be an as-yet-unnamed update at the end of this year or in early 2011."

Comments (67 posted)

Miscellaneous

XYZCommander 0.0.3 released

Version 0.0.3 of XYZCommander, a pure console visual file manager, has been announced. "New features: * Python2.4 support * Permanent filters * Custom sorting * High-color mode support with urwid >= 0.9.9 * New command line options: -c colors and -s skin * XYZCommander's system prefix can be set using XYZCMD_PREFIX environment variable in case of custom installation, by default it is equal to sys.prefix."

Full Story (comments: none)

Languages and Tools

Caml

Caml Weekly News

The January 19, 2010 edition of the Caml Weekly News is out with new articles about the Caml language.

Full Story (comments: none)

Perl

Perl 5.11.4 is available

Version 5.11.4 of Perl has been announced. "Perl 5.11.4 is the first release of Perl 5.11.x since the code freeze for Perl 5.12.0. It and subsequent releases in the 5.11 series include very limited code changes, almost entirely related to regressions from previous released versions of Perl or which resolve issues we believe would make a stable release of Perl 5.12.0 inadvisable."

Full Story (comments: none)

Python

execnet 1.0.3 released

Version 1.0.3 of execnet has been announced. "execnet is a small and stable pure-python library for working with local or remote clusters of Python interpreters, with ease. It supports seamless instantiation of remote interpreters through the 'ssh' command line binary. The 1.0.3 release is a minor backward compatible release..."

Full Story (comments: none)

py.test 1.2.0 released

Version 1.2.0 of py.test has been announced. "py.test is an advanced automated testing tool working with Python2, Python3 and Jython versions on all major operating systems. It has a simple plugin architecture and can run many existing common Python test suites without modification. It offers some unique features not found in other testing tools. See http://pytest.org for more info. py.test 1.2.0 brings many bug fixes and interesting new abilities".

Full Story (comments: none)

Python 2.5.5 Release Candidate 1 released

Version 2.5.5 Release Candidate 1 of Python has been announced. "This is a source-only release that only includes security fixes. The last full bug-fix release of Python 2.5 was Python 2.5.4. Users are encouraged to upgrade to the latest release of Python 2.6 (which is 2.6.4 at this point). This release fixes issues with the logging and tarfile modules, and with thread-local variables."

Full Story (comments: none)

Shed Skin 0.3 released

Version 0.3 of Shed Skin has been announced. "I have just released Shed Skin 0.3, an experimental (restricted) Python-to-C++ compiler. Please see my blog for more details about the release: http://shed-skin.blogspot.com/".

Full Story (comments: none)

stream 0.8 released

Version 0.8 of stream has been announced. "Stream is a module that lets one express a list-processing task as a pipeline and provide ways to easily parallelize it."

Full Story (comments: none)

IDEs

Pydev 1.5.4 released

Version 1.5.4 of Pydev has been announced; click below for the release details. "Pydev is a plugin that enables users to use Eclipse for Python, Jython and IronPython development -- making Eclipse a first class Python IDE -- It comes with many goodies such as code completion, syntax highlighting, syntax analysis, refactor, debug and many others."

Full Story (comments: none)

Test Suites

Linux Desktop Testing Project 2.0.1 released

Version 2.0.1 of the Linux Desktop Testing Project has been announced. "LDTPv2 [is] a complete rewrite of LDTPv1 in Python. This release is dedicated to Eitan Isaacson. Eitan wrote the LDTPv2 framework and important API's in LDTPv2!"

Full Story (comments: none)

Version Control

monotone 0.46 released

Version 0.46 of the monotone version control system has been announced. "The highlights in this release are bisection support - thanks to Derek Scherger! - and the possibility to call the automation interface over the network - thanks to Timothy Brownawell! Please note that the stdio interface has been changed in a backwards-incompatible way."

Full Story (comments: none)

Page editor: Forrest Cook

Announcements

Non-Commercial announcements

EFF Files Comments on Net Neutrality

The EFF is petitioning the FCC on ISP network neutrality. "The Electronic Frontier Foundation (EFF) called on the Federal Communications Commission (FCC) today to close loopholes in its proposed regulations for network neutrality -- loopholes that could let the entertainment industry and law enforcement hinder free speech and innovation. "The central goal of the net neutrality movement is to prevent ISPs from discriminating against lawful content on the Internet," said EFF Civil Liberties Director Jennifer Granick. "Yet the FCC's version of net neutrality specifically allows ISPs to make those discriminations -- opening the door to widespread Internet surveillance and censorship in the guise of copyright protection and addressing the needs of law enforcement.""

Full Story (comments: none)

Shawn Powers suffers a house fire

Linux Journal's Shawn Powers is dealing with a house fire. "One of our SCALE 8x speakers, Linux Journal associate editor Shawn Powers suffered a tragedy this morning when he lost his home and family pets due to a house fire. Many of you may know Shawn from his entertaining Linux Tech Tips as well as informative articles found both online and in print. Linux Journal has set up a Chip In site".

Full Story (comments: none)

Articles of interest

Linux Foundation CTO leaves for Google (internetnews.com)

internetnews.com covers Ted Ts'o's change of jobs. "Ted [Ts'o], the chief technology officer of the Linux Foundation, has moved on to start a new career at Google. [Ts'o] had been the CTO at the Linux Foundation for the past two years and as such, the move was not unexpected. "Ted's fellowship at the Linux Foundation was a two-year assignment and was completed in December," Jim Zemlin, executive director, the Linux Foundation said in an email sent to InternetNews.com. "It is similar to the fellowships held by Andrew Morton, Andrew Tridgell and Markus Rex.""

Comments (9 posted)

Linux laptop orchestra reprograms musical conventions (CollegiateTimes)

CollegiateTimes takes a look at the Linux Laptop Orchestra. "On Dec. 4, those who attended the premiere event of the Linux Laptop Orchestra may have been asking the same questions if they didn't know what they were getting themselves into. The ensemble, abbreviated as "L2Ork" forgoes the traditional approach to composing and performing music. Instead, members use open-source music software called Pure Data on their computers to program a complex series of notes and chords. While computer-based music is often associated with electronic music, the orchestra's sound is closer to its instrument-based brethren."

Comments (none posted)

Resources

New Ardour manual available online

flossmanuals.net presents a new online manual for Ardour. "Ardour is a full-featured, free and open-source hard disk recorder and digital audio workstation program suitable for professional use. It features unlimited audio tracks and buses, non-destructive, non-linear editing with unlimited undo, and anything-to-anywhere signal routing. It supports standard file formats, such as BWF, WAV, WAV64, AIFF and CAF, and it can use LADSPA, LV2, VST and AudioUnit plugin formats."

Comments (none posted)

FSFE Newsletter

The December, 2009 edition of the FSFE Newsletter is online with the latest Free Software Foundation Europe news. "Despite the temperatures dropping below zero all over Europe and the Christmas holidays approaching, FSFE kept working as usual for software freedom. The major news of December are that we have begun to restructure our website, added Andreas Tolf Tolfsen as webmaster deputy coordinator, and published a statement on the EC's settlement with Microsoft in the browser antitrust case. Read on to learn more about what we did in December. Moreover, let us offer you our best wishes for a great and Free 2010!"

Full Story (comments: none)

Linux Foundation Newsletter

The January, 2010 edition of the Linux Foundation Newsletter has been published. "In this month's Linux Foundation newsletter: * Linux.com Posts Hundreds of Jobs * Linux Foundation Announces 2010 Event Schedule * Get One, Give One Membership Program Continues * Linux Foundation in the News * From the Foundation: When One Linux Project Wins, All Linux Triumphs".

Full Story (comments: none)

Make a paper Tux Linux penguin (DigitalKamera)

DigitalKamera shows how to make Tux the penguin out of paper. "This is a Newyear gift to all Open source Lovers.. We all love penguin right? It is not enough to be a real Linux Geek, if you really want to impress then you must have a Linux mascot in your apparment.. This image tells how to make tux linux penguin in less than 15 minutes. All u have to do is print this image, cut it out and then glue according to the numbers printed on it."

Comments (none posted)

Interviews

FOSDEM speaker interviews installment 3

The third batch of FOSDEM speaker interviews is online. "Today we publish the third batch of interviews with our main track speakers. Here is some interesting reading material to make you curious about the main track talks * Mark Wielaard (SystemTap) * Isabel Drost (Apache Hadoop) * Adrian Bowyer (RepRap) * Christoph Pojer (MooTools)". (Thanks to Koen Vervloesem).

Comments (none posted)

Event Reports

linux.conf.au is Live (OStatic)

OStatic reports that linux.conf.au sessions are available through live feeds. "The 2010 event convened Monday (1/18) in Wellington, New Zealand, its second visit to the Land of the Long White Cloud, and will run through Saturday (1/23). It is also expanding its audience this year by adding a bit of extra "life" to the repertoire: For the first time, even those who can't make it to the conference door can join the festivities, as event organizers are streaming every single session - live."

Comments (4 posted)

Calls for Presentations

1st Call for Papers - AthCon IT Security Conference

A call for papers has gone out for ATHCON2010; the submission deadline is March 1. "From 3rd - 4th June AthCon, the first highly technical information security conference in Greece will take place in Athens at the Jockey's Country Club".

Full Story (comments: none)

Boston Linux Power Management Mini-summit cfp

A call for papers has gone out for the Boston Linux Power Management Mini-summit. "We will hold a Linux Power Management Mini-Summit in Boston on Monday, August 9th, 2010 -- the day before sessions at the Linux Foundation's "LinuxCon Boston 2010"."

Full Story (comments: none)

Linux Foundation Announces 2010 Event Schedule, Posts Call for Participation for Annual Collaboration Summit

The Linux Foundation has announced its 2010 event schedule. "The Linux Foundation today is also opening its Collaboration Summit Call for Participation (CFP) to all members of the Linux and open source software communities. Tracks will include mobile/embedded Linux, High Performance Computing, and filesystems, among others. Summit CFP submissions are due February 19, 2010 by midnight PST and can be submitted online at: http://events.linuxfoundation.org/events/collaboration-summit/cfp."

Comments (none posted)

Upcoming Events

Invitation to the BSP in Tokyo, Japan

A Debian bug squashing party (BSP) in Tokyo, Japan has been announced. "Debian Developer in Japan and Debian JP Project convenes BSP on January 23. Location, Date : * Hosted The University of Tokyo, Komaba 2 campus[0]. * 9:00-21:00 January 23rd, 2010 * Main organizer: Yasuhiro Araki (ar) and me."

Full Story (comments: none)

OOoCon 2011 - Call for Location

A call for location has gone out for OOoCon 2011. "The OpenOffice.org Community is now accepting proposals from Community teams for hosting its annual international conference next year, OOoCon 2011. Hosting OOoCon is challenging, rewarding, exhilarating, exhausting... and can provide a huge publicity boost for OpenOffice.org in your area. There is no fixed date for OOoCon, although we would prefer an autumn date. The deadline for submissions is midnight UTC on 21 February 2010."

Full Story (comments: none)

Events: January 28, 2010 to March 29, 2010

The following event listing is taken from the LWN.net Calendar.

Date(s)          Event                                               Location
February 2       Prague PostgreSQL Developers' Day 2010              Prague, Czech Republic
February 5-7     Frozen Perl 2010                                    Minneapolis, MN, USA
February 6       Super Happy Dev Castle #0                           Belfast, N. Ireland, United Kingdom
February 6-7     Free and Open Source Developers' European Meeting   Brussels, Belgium
February 10      Red Hat Cloud Computing Forum                       Online
February 11-13   Bay Area Haskell Hackathon                          Mountain View, USA
February 15-18   ARES 2010 Conference                                Krakow, Poland
February 17-25   PyCon 2010                                          Atlanta, GA, USA
February 19-21   SCALE 8x - 2010 Southern California Linux Expo      Los Angeles, USA
February 19-20   GNUnify                                             Pune, India
February 20-21   FOSSTER '10                                         Amritapuri, India
February 22-24   O'Reilly Tools of Change for Publishing             New York, NY, USA
February 27-28   The Debian/GNOME bug weekend                        Online, Internet
March 1-5        Global Ignite week                                  Online
March 2-4        djangoski                                           Whistler, Canada
March 2-5        FOSSGIS 2010                                        Osnabrück, Germany
March 2-6        CeBIT Open Source                                   Hannover, Germany
March 5-6        Open Source Days 2010                               Copenhagen, Denmark
March 7-10       Bossa Conference 2010                               Recife, Brazil
March 13-19      DebCamp in Thailand                                 Khon Kaen, Thailand
March 15-18      Cloud Connect 2010                                  Santa Clara, CA, USA
March 16-18      Salon Linux 2010                                    Paris, France
March 17-18      Commons, Users, Service Providers                   Hannover, Germany
March 19-21      Panama MiniDebConf 2010                             Panama City, Panama
March 19-21      Libre Planet 2010                                   Cambridge, MA, USA
March 19-20      Flourish 2010 Open Source Conference                Chicago, IL, USA
March 22-26      CanSecWest Vancouver 2010                           Vancouver, BC, Canada
March 22         OpenClinica Global Conference 2010                  Bethesda, MD, USA
March 23-25      UKUUG Spring 2010 Conference                        Manchester, UK
March 25-28      PostgreSQL Conference East 2010                     Philadelphia, PA, USA
March 26-28      Ubuntu Global Jam                                   Online, World

If your event does not appear here, please tell us about it.

Mailing Lists

Python-es mailing list changes home

The Python-es mailing list has moved. "Due to technical problems with the site that usually ran the Python-es mailing list (Python list for the Spanish speaking community), we are setting up a new one under the python.org umbrella. Hence, the new list will become <python-es@python.org> (the old one was <python-es@aditel.org>)."

Full Story (comments: none)

Web sites

Linux.com Launches job board for members (OStatic)

OStatic reports on the launch of the Linux.com job board. "According to The Job Thread Network, demand for Linux-related jobs has jumped 80% in the last five years. Linux.com plans to roll out a new section of its Web site tomorrow, called Linux Jobs Board, as a way for job seekers with Linux experience to connect with prospective employers. Backed by non-profit organization The Linux Foundation, the new Jobs board will give Linux.com members yet another way to highlight their skills and potentially get snatched up by an IT department."

Comments (1 posted)

Page editor: Forrest Cook


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds